A Deep Look Into the iPhone's New Deep Fusion Feature - Gizmodo

Photo: Adam Clark Estes (Gizmodo)

This week, iPhone 11 owners are supposed to get a free upgrade to their cameras thanks to a beefed-up neural engine and “mad science.” It’s called Deep Fusion, and it’s designed to deliver incredibly detailed photos in especially challenging environments. I’ve spent weeks testing the beta version of the computational photography software on an iPhone 11 Pro against the old camera software on a separate iPhone 11 Pro. Truth is, Deep Fusion works—but only in the strangest scenarios.

The first thing you need to know about Deep Fusion is that Apple is very proud of it. The company devoted several minutes to a preview of the feature at its September event, where it touted Deep Fusion as “the first time a neural engine is responsible for generating the output image.” In practice, this involves the iPhone taking nine total photographs, and then the neural engine in the new ultra-powerful A13 Bionic chip essentially pulls out the best pixels in each image and reassembles a photo with more detail and less noise than you’d get from an iPhone without Deep Fusion.

Allow me to zoom in on that process a little more, because it’s not quite as confusing as it sounds. What the iPhone camera does with eight of those nine exposures is similar to bracketing, the old-school photography technique where you take the same shot with different settings. In this case, the iPhone camera captures four short-exposure frames and four standard-exposure frames before you hit the shutter button. (The iPhone camera starts capturing buffer frames whenever the camera app is open, just in case it needs them for a Deep Fusion or Smart HDR shot.) When you hit the shutter, the camera captures one long exposure that draws in additional detail.
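
To make that bracketing step concrete, here is a rough Swift sketch of the capture sequence as Apple describes it. It is purely illustrative: the Frame and FrameBuffer types and the captureDeepFusionBracket function are invented for this article and are not part of any public Apple API.

    // Illustrative only: these types and functions are hypothetical,
    // not Apple's private capture pipeline.
    import Foundation

    struct Frame {
        let pixels: [Float]
        let exposureDuration: TimeInterval
    }

    final class FrameBuffer {
        private(set) var shortFrames: [Frame] = []     // four short exposures
        private(set) var standardFrames: [Frame] = []  // four standard exposures

        // Runs continuously while the Camera app is open, so the frames
        // already exist the instant you press the shutter.
        func buffer(short: Frame, standard: Frame) {
            shortFrames = Array((shortFrames + [short]).suffix(4))
            standardFrames = Array((standardFrames + [standard]).suffix(4))
        }
    }

    // Shutter press: the eight buffered frames plus one fresh long exposure
    // make up the nine total frames handed to the fusion stage.
    func captureDeepFusionBracket(buffer: FrameBuffer,
                                  longExposure: Frame) -> [Frame] {
        return buffer.shortFrames + buffer.standardFrames + [longExposure]
    }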

Screenshot: Apple

All of these exposures quickly become two inputs for Deep Fusion. The first input is the short-exposure frame with the most detail. The second is what Apple calls a “synthetic long,” which results from merging the standard-exposure shots with the long exposure. Both the short-exposure shot and the synthetic long get fed into the neural network, which analyzes them on four different frequency bands, each one more detailed than the last. Noise reduction is applied to each image, and then, finally, the two are fused together on a pixel-by-pixel basis. This whole process takes about a second, but the Camera app will queue up proxy images so you can keep shooting while the neural engine is humming along, Deep Fusioning all your photos.
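
Here is a conceptual Swift sketch of that merge, reusing the hypothetical Frame type from the capture sketch above. The real processing happens inside Apple’s neural engine, so every helper below (sharpness, merge, denoise, and so on) is a placeholder that only shows how the data flows, not the actual math.

    // Conceptual sketch of the Deep Fusion merge; every helper below is a
    // placeholder stand-in, not Apple's implementation.
    func sharpness(of frame: Frame) -> Float { frame.pixels.reduce(0, +) } // placeholder metric
    func merge(_ frames: [Frame]) -> Frame { frames[0] }                   // placeholder merge
    func frequencyBand(_ band: Int, of frame: Frame) -> Frame { frame }    // placeholder decomposition
    func denoise(_ frame: Frame) -> Frame { frame }                        // placeholder noise reduction
    func fusePixelwise(_ a: Frame, _ b: Frame) -> Frame { a }              // placeholder per-pixel fusion
    func recombine(_ bands: [Frame]) -> Frame { bands[0] }                 // placeholder recombination

    func deepFusion(shorts: [Frame], standards: [Frame], long: Frame) -> Frame {
        // Input 1: the short exposure with the most detail.
        let reference = shorts.max { sharpness(of: $0) < sharpness(of: $1) }!

        // Input 2: the "synthetic long" -- the standard exposures merged
        // with the single long exposure.
        let syntheticLong = merge(standards + [long])

        // Compare the two inputs on four frequency bands, from coarse
        // structure down to fine texture, denoising each band before
        // fusing the pair pixel by pixel.
        let fusedBands = (0..<4).map { band -> Frame in
            let a = denoise(frequencyBand(band, of: reference))
            let b = denoise(frequencyBand(band, of: syntheticLong))
            return fusePixelwise(a, b)
        }
        return recombine(fusedBands)
    }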

If you’ve paid close attention to Apple’s computational photography features, this Deep Fusion situation might sound a lot like the Smart HDR feature that came out last year with the iPhone XS. In theory, it is similar, since the iPhone is constantly capturing these buffer images before the photo is taken to prevent shutter lag. In practice, however, Deep Fusion isn’t just pulling out the highlights and shadows of different exposures to capture more detail. It’s working on a hyper granular level to preserve details that individual frames might have lost.

Okay, so maybe all that is kind of complicated. When it comes to using the new iPhone with Deep Fusion, you don’t really need to think about how the magic happens, because the device activates it automatically. There are a few key things to know about when Deep Fusion does and doesn’t work. Deep Fusion does not work on the Ultra Wide camera. Deep Fusion only works on the Wide camera in low- to medium-light scenarios. Deep Fusion works almost all the time on the Telephoto camera, except in very bright light where it wouldn’t do much.
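
Boiled down to code, those activation rules look something like the toy Swift function below. The enum cases and the lightLevel thresholds are my own invention for illustration; Apple doesn’t expose this decision anywhere, and the camera makes it for you automatically.

    // Toy model of when Deep Fusion kicks in; the thresholds are invented.
    enum RearCamera { case ultraWide, wide, telephoto }

    /// lightLevel: 0.0 is pitch black, 1.0 is bright daylight.
    func usesDeepFusion(camera: RearCamera, lightLevel: Double) -> Bool {
        switch camera {
        case .ultraWide:
            return false                  // never on the Ultra Wide camera
        case .wide:
            return lightLevel < 0.6       // low- to medium-light scenes only
        case .telephoto:
            return lightLevel < 0.9       // almost always, except very bright scenes
        }
    }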

Photo: Adam Clark Estes (Gizmodo)

There’s one more setting that will absolutely ensure Deep Fusion never runs. If you’ve toggled on the new option under the COMPOSITION header in the Camera app settings that says “Photos Capture Outside the Frame,” Deep Fusion will never kick in. So keep that option off if you want to try Deep Fusion.

Now that all of the nitty-gritty technical details are out of the way, let’s dig into what Deep Fusion’s computational photography mad science really feels like. If I’m being honest, it doesn’t feel like much. Right after the Deep Fusion feature appeared in the iOS 13.2 public beta, I installed the software on Gizmodo’s iPhone 11 Pro, while I kept the previous iOS version, the one without Deep Fusion, on my own iPhone 11 Pro. Then I just took a crapload of pictures in all kinds of different environments. Frankly, I often couldn’t tell the difference between the Deep Fusion shot and the non-Deep Fusion shot.

Take a look at these two photos of the clock in the middle of Grand Central Terminal, each taken with the telephoto camera on an iPhone 11 Pro. Can you tell which one was taken with Deep Fusion and which one was not? If you can understand the very basic symbols I’ve added to the bottom corner of each shot, you can probably guess. Otherwise, it’s going to take a lot of squinting. There is a difference. Look closely at the numbers on the clock. They’re much crisper in the Deep Fusion shot. The same goes for the ripples on the American flag and the nuanced texture of the stone pillars around it. You might not notice that the shot without Deep Fusion looks a little fuzzy in these areas, but then you see the Deep Fusion shot and realize that the details are indeed sharper.

Deep Fusion off (left), Deep Fusion on (right)
Photo: Adam Clark Estes (Gizmodo)

Subtle, right? But even in this case, without zooming in, you can see how the Deep Fusion version of the photo pops a little more and looks less noisy. Both photos also showcase the impressive performance of the iPhone 11 Pro in low-light scenarios. The Main Concourse in Grand Central Terminal is a surprisingly dark place, especially at dusk, when these photos were taken. Both look good, but the Deep Fusion one does look slightly better.

Now let’s look at a different example. Here’s a boring but detail-rich shot of skyscrapers in Midtown Manhattan on a dark and rainy day. In this case, you really do need to zoom in to see some of the slight differences between the regular iPhone 11 Pro photo and the one that used Deep Fusion. They’re super similar. You’ll see a little less noise, and the reflections in the window are clearer in the image on the right. The major difference I can spot is on the white railing near the bottom of the frame. It looks almost smudged out in the non-Deep Fusion photo. And much like the numbers on the clock in the Grand Central photo, the white railing pops in the Deep Fusion shot.

Photo: Adam Clark Estes (Gizmodo)

This squinting-for-differences exercise is where I found myself the entire time I tested the Deep Fusion-enabled camera against the one without it. Both cameras were impressive, and the one with Deep Fusion was occasionally a little more impressive in certain environments. Again, Deep Fusion only kicks in on the Wide camera in low to medium light, and it’s usually at work in photos taken with the Telephoto camera, unless the scene is very bright.

Things changed for me when I started taking photos of fur, however. In theory, this is the exact sort of scenario where Deep Fusion should shine, since tiny strands of hair tend to blur together, while a neural engine can identify those fine details and preserve them in the fused photo. This might be why Apple chose a bearded man in a finely textured sweater to show off Deep Fusion at the recent keynote. My version of a bearded man in a finely textured sweater is a little puppy named Peanut.

Deep Fusion off (left), Deep Fusion on (right)
Photo: Adam Clark Estes (Gizmodo)

Cute, right? Peanut weighs three pounds and is covered in the softest, finest fawn fur. Each little hair is slightly different in color, which almost makes it look like she got highlights down at the local dog salon. While she looks angelic in both of these photos, it’s fairly easy to see that, in the photo on the left, her gentle little highlights get blurry around the crown of her head and around her ear. In the Deep Fusion photo on the right, they’re as crisp as can be. Have a closer look:

Deep Fusion off (left), Deep Fusion on (right)
Photo: Adam Clark Estes (Gizmodo)

In this case, the photo that doesn’t have Deep Fusion powers almost looks out of focus in certain places. And the more you zoom in, the more pronounced the lack of Deep Fusion magic appears. Put another way, I never want to take another photo of Peanut without Deep Fusion again.

This brings me to a curious piece of the Deep Fusion puzzle. And I do think it’s a bit of a puzzle. Deep Fusion is a puzzle because the workflow is confusing, and in my tests, it was sometimes hard to tell whether the technology was working at all. It’s also a puzzle because its subtleties seem inconsequential in this first iteration of the feature. Like, if Deep Fusion only works occasionally and only in very specific ways, why did Apple make such a big deal about it at the iPhone event, and why did it take two extra months of development before Deep Fusion was available to the public?

I don’t actually know the answers to these questions, although I do have a theory. My theory is that Deep Fusion really is some of the most sophisticated computational photography technology Apple has ever built, and at the moment, we’re just scratching the surface of its capabilities. I can see a future in which Apple builds on the foundation of Deep Fusion and creates much more impressive features. The photography features on the Google Pixel 4, namely Super Res Zoom, might even offer a glimpse of this future. Google’s Super Res Zoom combines the Pixel 4's optical and digital zoom to produce super-sharp zoomed shots. If Apple is already exploring ways to fuse images together for better detail, I can imagine the company looking at ways to improve its zoom features even more, especially now that the flagship iPhone has three cameras on the back.

But this is all conjecture. For now, it doesn’t matter whether Deep Fusion blows your mind or not. If you own an iPhone 11 or an iPhone 11 Pro, you get the feature for free when you upgrade to iOS 13.2. If you own an older iPhone, though, you absolutely shouldn’t upgrade just to see what Deep Fusion is all about. The feature will surely evolve, and whatever we see next year will most likely be more impressive than what’s available now. That said, the cameras on both the iPhone 11 and the iPhone 11 Pro are impressive as hell. You might upgrade just to get the Ultra Wide camera, which is new to this year’s iPhone lineup. I know I’ve been having a blast with it.

Photo: Adam Clark Estes (Gizmodo)
