April 13, 2023 #Photography #Computational Photography #Camera #Generative AI #TikTok #Bennett Miller #Fran Lebowitz #Susan Sontag
A few weeks ago, Samsung found itself in the middle of a minor online scandal. A redditor had discovered the secret to the company’s “space zoom” feature, which seemingly let people take impressive, handheld smartphone photos of the moon. It turned out that upon zooming, the phone’s software would recognize the moon in the sky and surreptitiously swap in an existing, high-resolution picture of it.1
Modern smartphones, of course, do all sorts of heavy background processing to enhance pictures, but this particular discovery hit a nerve. From what I saw, many people thought Samsung had crossed a line from mere optimization into editing, and they were uncomfortable with it.
John Gruber sums up the sentiment:
What Samsung is doing with photographs of the moon is fine as a photo editing feature. It is not, however, a camera feature. With computational photography there is no clear delineation between what’s part of the camera imaging pipeline and what’s a post-capture editing feature. There’s a gray zone, to be sure. But this moon shot feature is not in that gray zone. It’s post-capture photo editing, even if it happens automatically — closer to Photoshop than to photography.
Gruber calls the debate about what constitutes a photo the “existential question of the computational photography era”, and I would argue that AI makes a definitive answer even more elusive.
A different, but related uproar came up around the same time: TikTok had launched the “Bold Glamour” filter, which uses real-time AI enhancements to make people look strikingly—and conventionally—attractive.
Is this level of editing desirable? Is it at all healthy? Body image issues aside, what kind of illusions are being created by these kinds of enhancements?
Both the Space Zoom and the Bold Glamour filter let people take pictures of something that isn’t technically in front of the lens. But when technological advancements mean that cameras no longer capture but generate, then what does it mean for photography overall, and for truth in general?
In Tuesday’s issue of Dirt, Terry Nguyen recounted her visit to a Bennett Miller show at Gagosian, which consists entirely of prints from photos generated by Dall-E. From the gallery website:
The striking results engage the history and format of photography to pose questions around the contingent and enigmatic nature of perception, reality, and truth—an enquiry made newly urgent by revolutionary innovations in computing.
Nguyen’s takeaway is far more skeptical:
The images aren’t photographs because, to borrow Susan Sontag’s definition, there is no real experience or event being captured. There are no stakes attached to the subjects. Instead, they are amorphous vessels for a vibe, a fictive manipulation of feeling. A photo partly derives its emotion from the complicity of the witness. But what we are witnessing is a machine’s visual interpretation of a textual prompt.
Just like the moon controversy, an exhibit generated by AI makes people uncomfortable—another line has been crossed. But where is that line?
Today’s issue of Dirt linked to an article in ARTnews by Shanti Escalante-De Mattei, who went to the same exhibit and ran into Fran Lebowitz:
“These are not real photographs, but what are real photographs?” Lebowitz begins. “Are the only real photographs the ones made on film, not the digital ones? My friend Peter Hujar would say so.”
Lebowitz, of course, has been around long enough to have witnessed the shift from analog to digital, and Escalante-De Mattei picks up on that idea immediately:
The slippery slope tack: if we’ve accepted that cameras do not make the photographs, but that photographers do, why should any succeeding technology that the human mind directs for its purpose not be judged similarly? That is, as a genuine, human act of creation. I ask Lebowitz a clumsy question, something like, “Isn’t the labor of trying to make something worth something?” She says of course. What are we even talking about? It’s too basic but I can’t help it.
I don’t think we’ll have an answer to this fundamental question anytime soon, but I would argue that the way we feel about it seems like a good way to measure when the line is crossed. For me, personally, the transition from analog to digital photography immediately feels less dramatic than that from human-created photography to AI-authored images. Is marshaling any kind of technology still creation? I have my doubts.
As Marques Brownlee explains in his great analysis of the feature, this works because when seen from the Earth, the moon always shows the same side.↩︎