In revealing its new iPhone 14 lineup, Apple makes the usual claims about its latest models having the best cameras yet, but what’s interesting is how this is being achieved: a combination of clever processing and the brute force of utilizing bigger sensors.
The company quotes figures of at least 2x improvement in low light through the use of its latest image combination technology, dubbed ‘Photonic Engine.’ Details of this process are lacking, beyond talk of the images being combined earlier in the process.
|Apple’s iPhone 14 Pro uses a 48MP sensor and clever processing to achieve improvements in quality.|
Apple tends to be cagey about specifics, so we can’t know how much of this claimed 2x improvement comes from combining more shots or more sophisticated noise reduction, but there are hints that the data is being combined before demosaicing, based on what the company said about its iPhone 14 Pro models.
All of the promised improvements beyond this 2x figure come from either the use of a larger sensor or brighter aperture.
The Pro models gain larger sensors for their main cameras, jumping from 12MP Type 1/1.7 (7.6×5.7mm) to 48MP Type 1/1.28 (9.8×7.3mm) quad-pixel chips. The aperture slows from F1.5 to F1.79, but in equivalent terms the lens is brighter than before: the sensor is nearly twice the size, which more than makes up for the ~0.5EV slower F-number.
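The trade-off can be checked with simple area and f-stop arithmetic. The sketch below uses the sensor dimensions and apertures quoted above (these are the only inputs; the function itself is just illustrative math, not anything from Apple):

```python
import math

def light_advantage_ev(area_old_mm2, area_new_mm2, f_old, f_new):
    """EV gained from the larger sensor area, minus EV lost to the slower f-number."""
    area_gain = math.log2(area_new_mm2 / area_old_mm2)
    aperture_loss = math.log2((f_new / f_old) ** 2)
    return area_gain - aperture_loss

old_area = 7.6 * 5.7   # Type 1/1.7 main sensor (iPhone 13 Pro), mm^2
new_area = 9.8 * 7.3   # Type 1/1.28 main sensor (iPhone 14 Pro), mm^2
net = light_advantage_ev(old_area, new_area, f_old=1.5, f_new=1.79)
print(f"net light advantage: {net:.2f} EV")   # ~0.2 EV in the new camera's favor
```

The area gain (~0.7EV) outweighs the aperture loss (~0.5EV), leaving the new main camera ahead by roughly a fifth of a stop per unit of equivalent field of view.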
|The Quad Bayer design (right) uses an oversized version of the conventional Bayer pattern (left). Each color patch extends over four photodiodes, each of which has its own microlens in front of it. Image: adapted from Sony Semiconductor illustration|
The camera will primarily deliver 12MP images by combining quartets of pixels to give the 2.44μm pixels discussed in Apple’s presentation, but it can also deliver 48MP ProRaw files from the individual 1.22μm photosites. This ability to access the full native resolution in a file generated after image merging suggests Apple is combining images before the demosaicing process.
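The quad-pixel readout amounts to averaging each 2×2 block of same-color photosites into one larger effective pixel. A minimal NumPy sketch of that binning step (illustrative array shapes, not Apple's actual pipeline):

```python
import numpy as np

def bin_quad_pixels(raw):
    """Average each 2x2 block of photosites, halving resolution in each dimension."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

full = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for the 48MP raw data
binned = bin_quad_pixels(full)                    # stand-in for the 12MP output
print(binned.shape)   # (2, 2) - a quarter of the original pixel count
```

Each output pixel collects the signal of four photosites, which is where the larger effective 2.44μm pixels, and much of the low-light benefit, come from.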
|Although delivering 12MP in its normal shooting mode, photographers will be able to access all 48MP from the iPhone 14 Pro using the ProRAW format.|
The main camera also offers a mode that crops in to the central 12MP section of the sensor, but given the Quad Bayer design, that’s effectively 12 million photosites behind a 3MP Bayer pattern, so more interpolation is needed than usual. Don’t expect an image comparable to 48MP Bayer from the full sensor, nor quite the level of detail you’d expect from ’12MP’ in crop mode.
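The crop-mode arithmetic is worth spelling out (assumed geometry based on the Quad Bayer layout described above, not Apple's documentation): each color patch spans a 2×2 block of photosites, so the color sampling is a quarter of the photosite count.

```python
# Central crop of the 48MP Quad Bayer sensor in '12MP' crop mode.
crop_photosites = 12_000_000
photosites_per_colour_patch = 4   # a 2x2 quad shares one filter colour
bayer_equivalent = crop_photosites // photosites_per_colour_patch
print(f"{bayer_equivalent / 1e6:.0f}MP Bayer-equivalent colour sampling")
```

That 3MP-equivalent color pattern is why the demosaicing step has to interpolate more aggressively than it would for a conventional 12MP Bayer chip.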
There’s a larger sensor, too, for the non-Pro iPhone 14 and 14 Plus. These receive main cameras with comparable specs to those in last year’s iPhone 13 Pro: 12MP Type 1/1.7 (7.6×5.7mm) Stacked CMOS chips nestled behind faster F1.5 lenses. Again this is an increase in sensor size, and along with the F1.5 lenses it accounts for the quoted 48% increase in light capture compared with the iPhone 13.
|This example from Apple shows the 12MP crop section (highlighted) of the iPhone 14 Pro’s 48MP main camera (24mm equiv).|
While the bulk of Apple’s promise of 2x-or-greater low light performance for all its cameras stems from the combinational / computational ‘Photonic Engine’ tech, everything above ‘2x’ comes from the use of larger sensors.
It’s interesting to note that buyers of the non-Pro iPhones will end up with 12MP sensors of the same size as those once used in Canon’s enthusiast PowerShot G models. But a decade on from the G15, the iPhone offers a more modern sensor at the same size and resolution, mounted behind a slightly faster lens, along with the ability to shoot and merge multiple images without the user necessarily knowing.
But for all the cleverness of Apple’s multi-shot ‘Photonic’ software, throwing light on the subject through the use of bigger sensors still has a part to play.