In The New Yorker, Kyle Chayka asks, simply: “Have iPhone Cameras Become Too Smart?”
For a large portion of the population, “smartphone” has become synonymous with “camera,” but the truth is that iPhones are no longer cameras in the traditional sense. Instead, they are devices at the vanguard of “computational photography,” a term that describes imagery formed from digital data and processing as much as from optical information. Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal.
A few years ago, after modern phones had reached feature parity, manufacturers began differentiating them by camera technology. And since phone cameras are limited by the device’s form factor, it made sense to compensate for their optical shortcomings with the powerful chips inside the phones.
This has created the strange moment we find ourselves in, where the success of the world’s most popular camera—the modern-day Kodak Brownie—is based not so much on optics as on algorithms.¹ And as with any vanguard technology, the outcomes are predictably weird: Chayka discovers that modern iPhones liberally use HDR effects to blend foreground and background, making the images appear “over-real”.
“One expects a person’s face in front of a sunlit window to appear darkened, for instance, since a traditional camera lens, like the human eye, can only let light in through a single aperture size in a given instant. But on my iPhone 12 Pro even a backlit face appears strangely illuminated. The editing might make for a theoretically improved photo—it’s nice to see faces—yet the effect is creepy. When I press the shutter button to take a picture, the image in the frame often appears for an instant as it did to my naked eye. Then it clarifies and brightens into something unrecognizable, and there’s no way of reversing the process.”
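The “over-real” look comes from exposure fusion: the phone captures several frames at different exposures and blends them per pixel, so the shadowed face and the bright window each come from whichever frame exposed them best. A toy sketch of that idea (the weighting scheme is a crude stand-in, loosely in the spirit of Mertens-style fusion; `fuse_exposures` and `sigma` are illustrative names, not Apple’s actual pipeline):

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed exposures, weighting each pixel by how close
    it sits to mid-gray -- a crude 'well-exposedness' measure."""
    frames = np.asarray(frames, dtype=float)  # shape (n, H, W), values in [0, 1]
    # Well-exposed pixels (near 0.5) get high weight; crushed shadows
    # and blown highlights get low weight.
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)  # normalize weights per pixel
    return (weights * frames).sum(axis=0)

# Two tiny synthetic frames: one underexposed (face in shadow),
# one overexposed (window nearly blown out).
dark = np.array([[0.05, 0.45], [0.10, 0.50]])
bright = np.array([[0.55, 0.95], [0.60, 0.98]])
fused = fuse_exposures([dark, bright])
```

Each fused pixel lands between the two inputs, pulled toward whichever exposure rendered it closer to mid-gray — which is why a backlit face ends up “strangely illuminated” instead of dark.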
I find that a lot of these debates hinge on the central question of what makes an image real: What is a realistic rendition of color, of darkness, of light? Or in other words: When does automatic processing jump the shark? Chayka suggests that the answer can be determined by how a picture makes us feel: There’s too much background processing when the images become uncanny. I have to agree—my own smartphone camera takes pictures that are over the top and tend to render people like wax figurines.
Of course, the more uncanny digital photography has become, the more I’ve seen (and been lured by) the resurgence of film photography. That “analogue look” has seemed more authentic—even though film itself is an interpretation of reality, rendered in the chemical formulations of film manufacturers. And whenever I read about the lives of historic photographers, I’m taken aback by how much the look of their pictures was determined by the post-processing they did in the lab.
Real, it strikes me, isn’t so much about analogue vs. digital as it is about control: about cameras remaining tools rather than devices, where a semblance of personality is retained not just through composition but also through agency.
¹ Sebastiaan de With: “(…) it is becoming increasingly important to define what we refer to when we talk about a ‘camera’. If I talk about the camera you’re holding, I could be talking about the physical hardware — the lens, the sensor, and its basic operating software in the case of a digital camera — or I could be talking about the package. The hardware with its advanced, image-merging, hyper-processing software.”