In the realms of YouTube and the blogosphere, the generative AI tools of Photoshop are being hailed as revolutionary. I’ve witnessed countless videos of Photoshop gurus performing incredible image transformations with the mere click of a button or a text prompt. Intrigued yet a tad apprehensive, I downloaded the Beta version, loaded an image to test these new features, and braced myself to be amazed. Could they live up to the hype, and would they change my post-processing workflow?

Deciding to channel my inner Kodak, I bravely pressed the button and left the rest to AI. As I pondered the technological evolution of photography, I couldn’t help but review in my mind’s eye the significant milestones as we progressed from daguerreotypes to camera-less, computer-generated photographs. At least I, being on the cusp of becoming a senior citizen, will no longer be taking pictures by the time intrusive AI is built into cameras. I will be grateful to be spared the humiliation of my camera’s AI proving far more intelligent than me, telling me “Camera Says No” to my mere mortal attempts to compose a photo – though the flip side is that Photoshop will probably utter those words much sooner.

Now, back to reality! Photoshop’s latest AI tools are absolutely incredible. The generative expand and fill features let you effortlessly extend an image, adding visual elements that blend seamlessly with the original. These tools are a significant time-saver. And that’s not all – generative fill can even be used to add entirely new and unrelated items to your images, both organic and inorganic. Although I encountered some difficulty adding photorealistic animals and people, this is only the Beta phase, so I expect it will improve. The resolution restrictions that leave the new generative additions looking a little blurry will no doubt also be addressed. In conclusion, these tools are definitely going to be a game-changer in my future workflow.

Adobe’s latest AI tools, paired with a range of existing features in Photoshop, make it an impressive photo editor. However, in my opinion, platforms such as Midjourney outperform it when it comes to creating AI-generated images. Despite having image-resolution issues of its own, Midjourney v5.2 can produce truly outstanding digital images that look incredibly realistic. Imagine if we could combine advanced Photoshop editing with Midjourney’s AI image generation. Would this mean I could retire all my cameras and simply create images on my Mac? Maybe not, as AI does have its drawbacks. It can be stubborn, uncooperative and unpredictable, often taking artistic liberties when generating content, resulting in hyper-realistic images that may not exactly match what you initially “prompted for”. But hey, that’s just part of the AI magic!

I’ve come across several blog posts praising the benefits of, and advancements in, AI. The idea is that these technologies can free photographers from the technical aspects of their work, allowing them to focus on the more creative side. While this may hold true for experienced photographers who already grasp the technicalities, I believe it might be counterproductive for novice photographers who rely heavily on camera and computer automation. By depending too much on AI, they may not fully develop the vital skills and foundations of the craft, which could limit their ability to capture unique photos in situations where AI struggles. This dependence on automation could potentially impede creative growth. It brings me back to a phrase I used earlier: “Camera Says No!” So perhaps we should not tie our hands and creative minds by making the machines the ultimate decision-makers.

Innovation has always been at the core of photography’s evolution, and the advent of AI is just the next phase in its development. As technology progresses, we can expect both opportunities and challenges in how we approach our work. Yet, I firmly believe that the skill of the photographer will always hold greater value than the machine’s automation. Photography has always relied on the unique perspective and creativity of the human behind the camera to interpret and capture the world around us. Bearing this in mind, I approach this new phase of image-making with caution, recognising the need to preserve the artistry and the personal touch that defines our medium.

Let me briefly discuss the images accompanying this post. The Churchyard of St Mary’s, situated on Whitby’s East Cliff, is a fascinating place with weathered tombstones and a gothic mood. This location served as inspiration for Bram Stoker’s iconic novel Dracula. Unfortunately, the unpredictable weather hindered my attempts to capture the atmosphere in its true form. To overcome this obstacle, I employed some Photoshop trickery to transform the original photographs into the digitally treated images you see here.

Although I initially intended to utilise Photoshop’s new tools, I used generative fill sparingly due to issues with the low-resolution generated additions. For the first image, which showcases the tombstones and the ruins of Whitby Abbey, I made the following adjustments:

I replaced the sky with an image from Adobe stock and used generative expand to increase the picture height. Instead of using the sky replacement tool, I used the pen to create the image mask. Additionally, custom cloud brushes were employed to add extra clouds.

In my workflow, I extensively use adjustment layers to modify the tonality and colour treatments. I also use additional layers and custom brushes to paint in the atmosphere. Whenever possible, I prefer non-destructive editing using layers and smart objects.

I initially attempted to use AI to add a raven to the image, but unfortunately, the AI-generated raven appeared fake and distorted. As a result, I opted to incorporate an image from Adobe stock that I cut out and inserted. In hindsight, the image could have done without the raven altogether!

The second image featured the tombstones and the Anglican Cross and underwent the following processes:

I replaced the sky with two images from Adobe stock and used generative expand to increase the picture height. Again, the pen tool was used to create the sky image mask. Custom cloud brushes were utilised to paint in the mist.

Generative expand was applied to the tombstones to increase the picture’s width. While it did a decent job, the low-resolution additions posed a challenge. The image degradation may not be noticeable in the low-output format used for this blog, but it would become apparent if the image were printed.

Similar to the first image, various adjustment layers were employed, including curves, levels, colour balance, and hue/saturation, to alter the tonality and colour of the base photograph.

In conclusion, I made limited use of the new tools in treating these images, as they are still in the Beta phase. However, as the technology advances, they will undoubtedly become valuable and welcome additions to the Photoshop toolbox.

Now, for a bit of fun

Thanks to Midjourney, an AI program I stumbled upon recently, I’ve got some images to share with you. While I’m still new to this technology, I can’t help but be amazed by the results of my second attempt at AI-generated images. They may not be photos, but the process of creating them was a lot of fun. I can’t claim full credit for these images – the machine guided me just as much as I guided it – so I like to think of it as a collaboration between human and AI. However, I have a feeling the machine might have a different opinion!
