Since Google's Pixel was introduced, there's been a lot more interest in the camera's software and how it takes advantage of the computer it's attached to. Marc Levoy, a former distinguished engineer at Google, led the team that developed computational photography technologies for the Pixel phones, including HDR+, Portrait Mode, and Night Sight, and he's responsible for a lot of that newfound focus on camera processing.
Levoy, who recently left Google for Adobe to work on a "universal camera app" for the company, joined Verge editor-in-chief Nilay Patel for a conversation on The Vergecast.
In the interview, Levoy talks about his move from Google to Adobe, the state of the smartphone camera, and the future of computational photography.
It's also true, Levoy says, that there is probably some ultimate limit on high dynamic range imaging: not necessarily on how high a dynamic range you can capture, but on how high a dynamic range you can effectively render without the image looking cartoony.
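The tension Levoy describes can be illustrated with a toy tone-mapping sketch. This is not his pipeline; it uses Reinhard's simple global operator, L / (1 + L), as a stand-in to show how squeezing a very wide luminance range onto a display crushes contrast between bright regions, which is part of why aggressively rendered HDR images can look flat or cartoony:

```python
import numpy as np

# Illustrative sketch (not Levoy's or Google's pipeline): Reinhard's
# global operator L_d = L / (1 + L) maps scene luminance into [0, 1).
def reinhard(lum):
    return lum / (1.0 + lum)

# Synthetic scene luminances spanning five orders of magnitude.
scene = np.logspace(-2, 3, 6)   # 0.01 .. 1000 (relative units)
display = reinhard(scene)

# The two brightest scene values differ by 10x, but their display
# values differ by under one percent: highlight contrast is crushed.
print(display)
```

Capturing more dynamic range is mostly an engineering problem; rendering it is an aesthetic one, because any operator that fits the whole range on screen must discard contrast somewhere.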
Patel asked whether the vision for a universal app (acknowledging that Levoy is, for now, a team of one building a team, with many steps to come) is that anybody will download it on any phone, and the image that comes out will look the same, no matter the phone. Levoy's answer: that remains to be seen.
If you want to put 96 megapixels on a sensor, and you can't physically squeeze a larger sensor into the form factor of the phone, then you have to make the pixels smaller. You end up close to the diffraction limit, and those pixels end up worse: they are noisier.
That's a different decision, and maybe it will continue to differ among the smartphone vendors. Maybe not. It remains to be seen.
One part of the conversation focuses specifically on the balance between hardware and software in a smartphone camera, and on the artistic decisions made within that software.