Over the past few years, AI and photography have become increasingly intertwined. AI is utilised in diverse ways to augment the capabilities of modern cameras, helping photographers produce better images. While some photographers revel in the transformative power of AI, others remain hesitant.
As is its nature, AI continues to evolve and improve day by day, and it is increasingly hard to argue that it does not make nearly every application it touches more efficient. With AI now built into almost every platform and online tool, it is something we can no longer ignore. Google's Bard and Magenta, Midjourney, OpenAI's ChatGPT and DALL-E, and Adobe's Firefly are all ground-breaking AI applications that have changed the way we work and create. For some, this spells the beginning of the end; for others, it opens up a world of possibilities. We take a look at what AI means for the future of photography.
It All Started With Auto Mode
Many have argued that auto mode takes away the artistry and creativity of photographers. By removing the freedom to manipulate and control settings to create beautiful and unique images, what is the point of being a photographer, right? Wrong! Auto mode was designed as an excellent way for beginner and enthusiast photographers to find their way around a camera without having to do the maths of balancing f-stops, shutter speeds and exposure values. Being a photographer is more than just turning dials; it involves looking at things from a different perspective, capturing moments that tell compelling stories, revealing the beauty of everyday life, and more. While auto mode is a fantastic stepping stone, it is not the death of all creativity. People who argue that auto mode is a cop-out fail to recognise how much AI has already found its way into modern cameras.
The AI Influence
Most cameras released in the last ten years feature some kind of AI tool beyond auto mode: subject recognition, autofocus, auto-framing, facial recognition, automatic red-eye correction and more. The photographers who are critical of auto mode are often the very ones using these tools. That's because AI tools are designed to make the photographing experience easier and more efficient without taking away your creative freedom.
This is especially true for those who shoot on their smartphone. For years, computational photography has been a key feature of smartphones. It all started with the iPhone's Portrait mode, which uses a combination of depth estimation and image-processing algorithms to simulate a shallow depth-of-field effect, commonly known as "bokeh". To achieve this effect on a traditional camera, one would have to use a wide-aperture lens, judge the distance between the camera and the subject, know the correct aperture settings and understand the composition of the shot. All it takes to achieve bokeh on an iPhone is a simple swipe.
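As a rough illustration of how such a "swipe-to-bokeh" effect can work, the sketch below blurs the background of a photo and composites the sharp subject back on top using a subject mask (which a phone would derive from depth sensing or a machine-learning segmentation model). This is only a simplified Python/OpenCV sketch of the general idea, not Apple's actual Portrait mode pipeline; the function name and parameters are invented for the example.

```python
# Illustrative sketch only: approximating a "portrait mode" bokeh effect
# by blurring the background and keeping the subject sharp. A real phone
# pipeline is far more sophisticated; this just shows the basic idea.
import cv2
import numpy as np

def synthetic_bokeh(image: np.ndarray, subject_mask: np.ndarray,
                    blur_strength: int = 31) -> np.ndarray:
    """Blend a sharp subject over a blurred background.

    image:        HxWx3 uint8 photo.
    subject_mask: HxW mask in [0, 1]; 1 = subject, 0 = background.
                  Assumed to come from depth sensing or a segmentation model.
    """
    # Simulate the out-of-focus background with a large Gaussian blur.
    background = cv2.GaussianBlur(image, (blur_strength, blur_strength), 0)

    # Soften the mask edge so the subject/background transition looks natural.
    mask = cv2.GaussianBlur(subject_mask.astype(np.float32), (15, 15), 0)
    mask = mask[..., None]  # broadcast the mask over the colour channels

    # Composite: sharp pixels where the mask is 1, blurred pixels where it is 0.
    result = mask * image.astype(np.float32) + (1 - mask) * background.astype(np.float32)
    return result.astype(np.uint8)
```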
So one can understand the frustration of photographers who have invested the time and effort to learn these skills, only for AI to swoop in and make it as easy as swiping a smartphone. The "Deep Fusion" feature in modern iPhones captures multiple photographs and combines the clearest parts of each frame into one composite image. Each frame is also analysed by the phone's neural engine, which recognises subjects and exposes them accordingly.
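To make the "combine the clearest parts of each frame" idea concrete, here is a deliberately simplified sketch that scores local sharpness in each aligned frame with a Laplacian filter and, for every pixel, keeps the frame that is sharpest there. It illustrates multi-frame fusion in general, not Apple's Deep Fusion algorithm; the helper name and the sharpness measure are assumptions made for the example.

```python
# A toy sketch of multi-frame fusion: for each pixel, keep the frame that is
# locally sharpest. Assumes the frames are already aligned; this is NOT
# Apple's Deep Fusion algorithm, just an illustration of the idea of
# "combining the clearest parts of each frame".
import cv2
import numpy as np

def fuse_sharpest(frames):
    """frames: list of aligned HxWx3 uint8 images of the same scene."""
    sharpness_maps = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The absolute Laplacian response is a simple local sharpness measure;
        # smoothing it gives a per-pixel "how sharp is this region" score.
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        sharpness_maps.append(cv2.GaussianBlur(lap, (21, 21), 0))

    # For every pixel, find the index of the sharpest frame.
    best = np.argmax(np.stack(sharpness_maps), axis=0)  # HxW of frame indices

    # Assemble the composite by copying each pixel from its sharpest frame.
    fused = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        fused[best == i] = frame[best == i]
    return fused
```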
With new smartphones constantly arriving with software capable of mimicking the effects of nearly every traditional camera, the outcry against AI starts to make sense. To make matters worse, Samsung was recently caught faking the "Space Zoom" capabilities of the Galaxy S23 Ultra. For years, Samsung boasted that the cameras on some of its Galaxy models could take clear, detailed images of the moon. Many were hesitant to believe a smartphone camera could achieve such a feat, especially since taking a decent picture of the moon in the traditional sense requires some hefty gear. One Reddit user, u/ibreakphotos, took it upon themselves to test the accuracy of Samsung's "Space Zoom", and as you can guess, the results were a little shaky.
During the test, the Redditor captured an intentionally blurry image of the moon by pointing the smartphone at the moon, zooming in as far as possible and taking the shot. The initial capture showed a blurry moon, but a split second later the image "corrected" itself into a much clearer, more detailed picture. In 2021, Samsung gave Input Mag some insight into these seemingly unbelievable capabilities: "no image overlaying or texture effects are applied when taking a photo"; instead, the camera uses a "detail improvement engine function" to "effectively remove noise and blur". Essentially, what Samsung offers users are AI-enhanced computational cameras.
The AI Revolution
In 2022, OpenAI, the American artificial intelligence lab, launched the industry-revolutionising ChatGPT, an artificial intelligence chatbot. ChatGPT spread like wildfire thanks to its potential to completely change the way people work and create: it can assist with research, content creation, coding, translation, productivity, organisation and much more. A few months before ChatGPT's release, OpenAI had also released DALL-E, an AI system capable of creating realistic images and art from written prompts. Google's Magenta, a research project exploring machine learning in art and music, had already been released in 2016. While there were some trickles of excitement at the time, it was OpenAI's ChatGPT and DALL-E that opened the mainstream media's and the world's eyes to the capabilities of AI. Both platforms can produce AI-generated imagery, and while the results are already impressive, much room remains for improvement.

In March 2023, Adobe threw its hat into the AI ring and announced Adobe Firefly, a generative AI tool that lets you generate high-quality images, fill in existing images and remove objects using text prompts. Firefly is set to change the way photographers and graphic designers create and edit their work, and Adobe also plans to extend it to video editing, making it a valuable asset for any creative.
However, many of the images generated by these platforms have a somewhat animated look to them: they can appear fairly realistic, yet they don't look quite natural. But as is its nature, AI is continuously evolving and improving, and it won't be long before these generative AI tools can produce perfectly photo-realistic images…if they haven't already.
AI in Real Time
There have already been moments in pop culture where we've had to do a double take to figure out what's real and what isn't. One such moment was when an image appeared on social media depicting Pope Francis in a white Balenciaga puffer coat. The image looked fairly believable; however, some keen-eyed sleuths picked up minor irregularities in it. The image was generated with Midjourney and posted on Reddit by u/trippy_art_special, from where it quickly spread across social media platforms. While the image was fairly harmless, many realised the power that AI holds to manipulate images and spread disinformation.
Earlier this year, German photographer Boris Eldagsen won the creative category of the Sony World Photography Awards. However, the artist refused the prize, revealing that he had submitted an AI-generated image with the aim of testing whether photography competitions were prepared for the influence of AI. The winning image depicted two women from different generations in black and white.
“We, the photo world, need an open discussion… A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter – or would this be a mistake? With my refusal of the award, I hope to speed up this debate,” said Eldagsen. He added that his winning image was a “historic moment” as it represented the first time an AI image won a prestigious international photography competition. “How many of you knew or suspected that it was AI generated? Something about this doesn’t feel right, does it?.. AI images and photography should not compete with each other in an award like this. They are different entities. AI is not photography. Therefore I will not accept the award,” said Eldagsen.
According to a representative from the World Photography Organisation, Eldagsen had informed them about his use of AI in the “co-creation” of the image prior to being announced as the winner.
“In our correspondence, he explained how following ‘two decades of photography, my artistic focus has shifted more to exploring creative possibilities of AI generators’ and further emphasising the image heavily relies on his ‘wealth of photographic knowledge’. The creative category of the open competition welcomes various experimental approaches to image making, from cyanotypes and rayographs to cutting-edge digital practices. As such, following our correspondence with Boris and the warranties he provided, we felt that his entry fulfilled the criteria for this category, and we were supportive of his participation…Additionally, we were looking forward to engaging in a more in-depth discussion on this topic and welcomed Boris’ wish for dialogue by preparing questions for a dedicated Q&A with him for our website…We recognise the importance of this subject and its impact on image-making today… While elements of AI practices are relevant in artistic contexts of image-making, the awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in the medium.”
The Future
It's easy to understand the fear of AI and its capabilities. However, fear of new technology is something we've seen many times before: with the printing press, with the power loom during the industrial revolution, with computerphobia in the 1980s, and now with AI. Films like I, Robot, Her, M3GAN and Ex Machina have all shown people the worst that can happen when AI evolves a little too far. While robots taking over may seem like a far-fetched fear for now, a very real, current fear is the loss of jobs. During the industrial revolution thousands lost their jobs, and the same happened in the 1990s and early 2000s with the automation of entire industries. And while that is a bigger conversation for another time, it appears AI in photography is here to stay. As Boris Eldagsen proved with his submission, we still have a long way to go in figuring out AI's place in photography. Copyright laws need to be ironed out, photographic competitions need to be redesigned, and authenticity checkers and AI detectors need to be built and deployed. The way we engage with photography and the images we produce will change over the next few years as AI and the industry as a whole continue to advance, reshaping the field and redefining our creative possibilities.