The Sony Alpha A7C II isn’t a large camera—the “C” in the name stands for “compact”—and it’s often paired with thin, inconspicuous lenses to maintain mobility. But despite its diminutive size, I’m always a bit self-conscious when carrying it out and about; it’s so rare to see photos taken with anything other than smartphones that people are especially conscious of being on the receiving end of said lenses.
For special events and occasions, I still prefer the dedicated, big (relative to the phone) camera, its enormous sensor and lens construction granting the setup insurmountable advantages rooted in the physics of light. As phone camera hardware has plateaued a bit in recent years, many of the advances have come in software, manipulating pixels before, during, and after the shutter to make photos look good. By contrast, cameras have advanced along more straightforward lines: more megapixels, faster shutter speeds, better autofocus, and lighter, faster lenses with less distortion. Sony cameras in particular are known for their focus tracking, and recent bodies have incorporated image-recognition AI to encourage enthusiast upgrades.
For all the hardware and software improvements made on both sides in the past ~5 years, there’s still a demarcation between camera manufacturers and smartphone vendors. The former are focused on better ways to capture light onto their image sensors; if there are post-capture tweaks to be made, professional photographers have long since honed their post-processing workflows in the likes of Lightroom and Photoshop. The latter are adding increasingly powerful AI to their camera and photo software; whereas last year it was possible to remove unwanted distractions from pictures, this year phones can add realistic AI-generated objects. In fact, one of the hardest things to do—and a reliable tell of Photoshoppery—is getting the shading of an object inserted into a scene to match the scene’s existing lighting and color balance. AI-driven tools that operate at the pixel level render this difficult task almost trivial.
The Verge has been at the forefront of exploring what it means to add and remove portions of photographs at will, after the fact. If photos are no longer a representative capture of what transpired in reality, what are they? A recent article gathered responses from the major phone makers, and they’re worth quoting in full:
Samsung:
Actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene — is it real? Or is it all filters? There is no real picture, full stop.
Google:
“It’s about what you’re remembering,” he says. “When you define a memory as that there is a fallibility to it: You could have a true and perfect representation of a moment that felt completely fake and completely wrong. What some of these edits do is help you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.”
Apple:
Here’s our view of what a photograph is. The way we like to think of it is that it’s a personal celebration of something that really, actually happened.
Whether that’s a simple thing like a fancy cup of coffee that’s got some cool design on it, all the way through to my kid’s first steps, or my parents’ last breath, it’s something that really happened. It’s something that is a marker in my life, and it’s something that deserves to be celebrated.
Feel free to roll your eyes a bit at these responses; of course marketing PR paints the advent of AI features as a boon to photographers. The duplicity here, though, is widening the aperture of what constitutes a photo by applying an orthogonal boundary, that of “memory,” which is itself fuzzily defined and arbitrarily permissive.
If it’s just a matter of representing and triggering memories, humans have invented plenty of ways to accomplish the task, some well before the creation of the photograph. Drawings and paintings represent and interpret the real world—some mimic reality, and others are deliberately abstract[1], layering on additional complexity and emotion. Since the vast majority of paintings don’t resemble how the world naturally appears to our eyes, there’s no mistaking the art for reality[2]. Art historians study the historical accuracy of represented scenes, understanding that there’s an element of storytelling that need not be constrained by photons bouncing off of objects[3].
It seems inevitable that photos will follow down the same path. “Photorealistic” will become a less powerful descriptor simply because we can no longer rely on photos as concrete representations of reality with anywhere near the same confidence. Pixel peeping for signs of manipulation will grow less effective as AI improves; the artifacts just won’t exist in the way they do with imperfect lenses or Photoshopped imagery. For now, the sheer volume of photos taken every year means that the vast majority won’t be modified by AI. But we’re only a handful of iterations away from phone camera apps defaulting to more aggressive sub-image generation, rendered with every shot to simplify operation for users: say, Google’s Magic Editor automatically selecting and combining the best group pictures, or Samsung’s lunar-replacement pipeline applied to other commonly photographed subjects.
Now we just have to find something else to represent physical authenticity.
[1] My mind goes immediately to the Cubist art movement.
[2] One possible exception is the Photorealism movement, which tried to mimic how things looked in reality, almost at the microscopic level…by using photographs for reference.
[3] The Arnolfini Portrait is a famous example of such a piece of art.