Why does the iPhone have a chip that kills FaceID?
Let's start from the beginning: what the FaceID system looks like and how it works:
In terms of hardware, the system consists of a pair of cameras – IR and RGB – and two types of IR emitters: a plain flood illuminator and a specialized dot projector.
We peel the skin off the iPhone X, strip away everything unnecessary, and see the front camera unit laid bare. The IR illuminator lives separately, but all the other components are in our hands – securely fixed in this metal frame.
Here we see the IR camera and the IR dot projector. It is this infrared pair that powers the TrueDepth and FaceID systems. And the main character of today's story is the dot projector.
Brief principle of how FaceID works
The IR dot projector does exactly what its name suggests – on command, it spits tens of thousands of infrared dots out into the world. And the IR camera, which issues that command, immediately photographs those dots.
Knowing the optical characteristics of the projector and the camera, and the distance between them, the ISP in the iPhone's processor can estimate how far away each dot is. I don't fully understand the math of the process myself, and reverse-engineering it would be difficult – but taking a series of images with different dot patterns lets you pin down the position of each individual dot very accurately, and thus build a complete depth map. No LIDAR, no ToF.
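The core geometric idea, though, is plain triangulation: a dot's apparent shift in the image (its disparity against a reference pattern) encodes its distance. A minimal sketch, with made-up numbers for the focal length and baseline – not the iPhone's real optics:

```python
# Sketch of the triangulation behind structured-light depth sensing.
# All numbers are illustrative assumptions, not the iPhone's real optics.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo / structured-light relation: Z = f * b / d.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- projector-to-camera distance, in meters
    disparity_px -- how far the dot shifted in the image versus a
                    reference pattern captured at a known distance
    """
    return focal_px * baseline_m / disparity_px

# A dot that shifted by 20 px, with a 10 mm baseline and ~600 px focal length:
z = depth_from_disparity(600.0, 0.010, 20.0)
print(f"{z:.3f} m")  # 0.300 m
```

The closer the dot, the larger its shift – which is why the tiny baseline between projector and camera inside the notch still yields usable depth at face distances.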
It is the depth map that allows FaceID not to be fooled by printed photographs. A photograph is flat, but a face has a relief, and for FaceID, a topographic map of the face is more important than its coloring.
However, the system also looks at the color of the face. The IR camera is hardware-synchronized with the RGB camera, and both shoot the face simultaneously. And the iPhone can blink not only the IR dot projector but also the IR illuminator – capturing the entire face in the IR spectrum.
The depth map itself is quite rough, and analyzing the texture of the face in IR and RGB images using a neural network allows us to both refine the depth and better understand details such as facial expressions – both in daylight and in the dark.
If this system looks familiar, it's probably because it's lifted straight from the Kinect for the Xbox 360. Except that Kinect was a huge beast, while here it has been shrunk down to the size of a notch in a smartphone screen.
This was done by PrimeSense, the company that developed the technologies behind Kinect 1, and was then bought by Apple for $350 million. In its entirety, with all its patents, developments, employees, and other guts.
We disassemble the guts of the spotlight
Let's go deeper into the reverse: we take out the dot projector from the iPhone X camera unit and disassemble it into its component parts. It consists of an FPC cable, a radiating assembly, and an optical assembly.
The cable is completely passive, and therefore of little interest. It is soldered to the radiating assembly, and outputs signals to the FPC connector, which is connected to the iPhone X motherboard. The connector has a 0.35mm pitch, is custom (Apple are assholes), and looks like it was made by JAE.
Let's look at the main components of the optics:
And let's see what's inside the emitter:
Now, the roles of the MOSFET and the mysterious chip intrigued me. Why? Because it's not at all obvious what they're doing there.
The first obvious option: the mysterious chip is memory for the serial number and calibration data. The chip has a typical memory-style I2C interface, and there is definitely memory inside. Projectors have serial numbers, which can be used to determine, among other things, the production date – and if the projector is replaced entirely, the iPhone will detect the serial-number mismatch and refuse to work with the replacement. But a garden-variety I2C EEPROM comes in a tiny WLCSP-4 package – and can even be write-protected, if you really want. So the chip can't be plain memory. It definitely does something else.
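For a sense of what such an EEPROM-style I2C read looks like, here is a mock sketch. The device address, register offsets, and serial format are all invented for illustration – the real register map of this chip is not public:

```python
# Hedged sketch: an EEPROM-style I2C read of serial/calibration data.
# Device address, register offsets and data layout are invented here.

MB_I2C_ADDR = 0x54   # hypothetical 7-bit device address
REG_SERIAL  = 0x00   # hypothetical OTP offset of the serial number
SERIAL_LEN  = 8

class MockI2CDevice:
    """Stand-in for the chip: a flat OTP byte array, addressed like EEPROM."""
    def __init__(self, otp: bytes):
        self.otp = otp

    def read(self, reg: int, length: int) -> bytes:
        # Real transaction: START, write register pointer,
        # repeated START, read N bytes, STOP.
        return self.otp[reg:reg + length]

dev = MockI2CDevice(bytes.fromhex("c0ffee0012345678") + bytes(120))
serial = dev.read(REG_SERIAL, SERIAL_LEN)
print(serial.hex())  # c0ffee0012345678
```

This is exactly the kind of transaction the "repair" programmers described later in the article perform when they dump the original OTP contents.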
The second obvious option: the mysterious chip is a laser driver, and the MOSFET is its switching element. And yes, the MOSFET is indeed controlled by the chip. But the chip can't be anything as critical as a laser driver, either.
Firstly, the MOSFET sits in series with the common cathode of the laser assembly – while the 4 separate anodes go straight to the cable and onward into the depths of the iPhone's multilayer board. And secondly, while collecting data for the reverse, I came across various instructions from Chinese repairmen.
They didn't directly clarify the matter, but many of these instructions said: to repair a "broken" projector, you need to open it up, remove the MOSFET, and replace it with a jumper between drain and source. A projector with the jumper inside will work, and FaceID functionality will be restored. And if the projector works fine with a jumper instead of a MOSFET – what was that MOSFET doing there in the first place?
And then it dawned on me: that was the whole point of the repair. The MOSFET is controlled by the chip – so, at the chip's discretion, it can open the laser's power-supply circuit and thereby "break" the projector. The repair simply removes that break.
What's in a name?
Once it became clear that the mysterious chip, paired with the MOSFET, interferes with the normal operation of the projector, the question arises – why does it do this? Why put a chip into the projector whose job is to kill the projector?
For answers, I looked into the firmware of the ISP block in the iPhone processor – it is this that talks via I2C with the camera sensors and the spotlight.
First, I downloaded a fresh iOS 15 firmware image for the iPhone X. Firmware images for the iPhone are essentially zip files. Inside, I found the ISP firmware I was looking for, as the file Firmware/isp_bni/adc-nike-d22.im4p. From the compressed im4p file, a binary in Mach-O format with AArch64 code inside was extracted. Mach-O, unlike the typical "firmware image for an unknown microcontroller", is a documented executable format similar to PE or ELF. There is no guesswork about the file structure, the processor architecture, or the address the code should be loaded at. You just drop the file into Ghidra and everything is neatly laid out. Nice.
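As a minimal sketch, pulling a single member out of an IPSW really is ordinary zip handling (the path is the one from the article; decompressing the im4p payload itself – an ASN.1 container whose data is often compressed – is left out):

```python
# Minimal sketch: an IPSW is a zip archive, so extracting one firmware
# member is plain zipfile work. Decoding the im4p container is a
# separate step, not shown here.

import zipfile

def extract_member(ipsw_path: str, member: str) -> bytes:
    """Return the raw bytes of one member of an IPSW archive."""
    with zipfile.ZipFile(ipsw_path) as z:
        return z.read(member)

# Usage (filename is an example):
# fw = extract_member("iPhoneX_Restore.ipsw",
#                     "Firmware/isp_bni/adc-nike-d22.im4p")
```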
Then instinct took over, and I decided to gut older firmware too. In the iOS 13 firmware image I found the same adc-nike-d22 file – even the size was nearly identical. The new firmware just had more code, while the old one had less code but came with symbols. Every function name is in place. Always check the older versions!
There’s a lot of information in the ISP firmware, including how the iPhone communicates via I2C with various chips — the camera sensors, the camera PMU, the flash and autofocus control chips. And thanks to the symbols, we were able to extract the “names” of the various components of the system — and some of them correlate with materials from other parts of the firmware, as well as from other reverse engineers and repairmen. For example, the IR camera sensor is an STMicroelectronics VD56G0 “Savage.” The entire TrueDepth system is called “Pearl” in the code, and its main modules are named after characters from Romeo and Juliet. The IR spotlight is called “Romeo,” the IR camera is “Juliet,” and the IR illuminator is called “Rosaline.” The laser driver, which lives on the iPhone’s motherboard and powers both the lasers inside “Romeo” and the laser inside the “Rosaline” illuminator, is called “Rigel.”
The mysterious chip we are interested in? It has a name too. It is called “MamaBear” in the code, “MB” for short, and it seems that its functionality is quite simple. It lives on the I2C bus. It stores OTP data, including the serial number and various calibrations. It turns the MOSFET on and off by command. And it also measures… capacitance? Not temperature, it is not connected to the NTC thermistor at all, but capacitance. But capacitance of what?
The tragic death of Romeo
The answer to this question is once again provided by Chinese schematics. The schematic from JCID shows that the Romeo module has three contacts connecting the radiating assembly to the optical assembly. One is ground, and the other two go directly to the MamaBear chip. These contacts pass through a special adapter on the side of the optical assembly and end up at its very top – at the diffractive optical element.
The diffractive beam splitter is passive and does not respond to current. But it has capacitance. And with the help of those three lines, that capacitance can be measured. But why?
The point is how important this diffractive splitter is. The dot pattern used by the projector is determined by the arrangement of tiny lasers – the "pits" – on the VCSEL die. This pattern is then multiplied by the diffractive element, which turns a single beam into hundreds.
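To get a feel for that multiplication, here is a toy sketch: every diffraction order shifts out a copy of the whole emitter pattern. The dot counts and order spacing below are made up, not the real geometry:

```python
# Toy illustration of a diffractive beam splitter: each diffraction
# order replicates the entire VCSEL dot pattern at an angular offset.
# Counts and offsets are invented for illustration.

def replicate(points, order_offsets):
    """Tile the base dot pattern across every diffraction order."""
    return [(x + dx, y + dy)
            for (dx, dy) in order_offsets
            for (x, y) in points]

vcsel_dots = [(0, 0), (1, 0), (0, 1)]                  # 3 emitters (toy)
orders = [(dx * 10, dy * 10) for dx in range(-1, 2)    # 3x3 grid of orders
                             for dy in range(-1, 2)]

field = replicate(vcsel_dots, orders)
print(len(field))  # 3 emitters x 9 orders = 27 dots
```

The same arithmetic explains the danger below: with the splitter in place, the emitted power is spread across all the replicated dots; without it, everything rides in one beam.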
So what happens if this diffraction element is torn off?
The beams will not split. Instead of hundreds of weak laser beams, there will be a single beam – but a hundred times more powerful. And it is still a laser. An infrared laser is more dangerous than a red one, because a person cannot see it – and therefore will not instinctively look away even from a dangerously powerful source. And there is a non-zero chance that, in this case, the characteristic dot pattern gets burned into the user's retina.
To prevent this, the killer chip is needed. After power-on, it constantly monitors the capacitance of the diffractive element – and if the element is broken or damaged, the capacitance drifts out of the permissible range, and the chip immediately shuts off the MOSFET, cutting the VCSEL supply. And since the element sits at the very top of the optical assembly, it is almost impossible for an impact to damage the rest of the assembly without breaking the element and its contact.
After the laser is shut off, the chip burns a flag into its OTP that marks the projector as defective – which means the broken supply stays broken forever. No command from the ISP has any power over it. The MOSFET stays off, and the projector will never work again.
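The behavior described above can be sketched as a toy state machine. This is a speculative reconstruction: the thresholds, units, and exact logic here are my assumptions, not dumped firmware:

```python
# Speculative reconstruction of the protection logic described in the
# article. Thresholds, units and timing are assumptions, not firmware.

C_MIN, C_MAX = 4.0, 6.0   # hypothetical valid capacitance window, pF

class MamaBear:
    def __init__(self):
        self.otp_fault_flag = False   # one-way: once burned, stays set
        self.mosfet_on = False

    def tick(self, doe_capacitance_pf: float) -> None:
        if self.otp_fault_flag:
            self.mosfet_on = False    # ISP commands are ignored forever
            return
        if not (C_MIN <= doe_capacitance_pf <= C_MAX):
            self.mosfet_on = False    # cut the VCSEL supply immediately
            self.otp_fault_flag = True  # burn the permanent fault flag
        else:
            self.mosfet_on = True

mb = MamaBear()
mb.tick(5.0); print(mb.mosfet_on)   # True  -- DOE intact, laser allowed
mb.tick(0.5); print(mb.mosfet_on)   # False -- DOE broken, killswitch fires
mb.tick(5.0); print(mb.mosfet_on)   # False -- flag is in OTP, never recovers
```

The last line is the key design choice: even if the capacitance later reads as normal again, the OTP flag makes the shutdown irreversible.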
The “MamaBear” chip, as the name suggests, is a protection chip. It is a “killswitch” for emergency stopping of the laser. It kills the spotlight to prevent the damaged laser device from shining into the user's eyes. And the “Juliet” module, left without its paired “Romeo”, loses its meaning in life – and the entire TrueDepth system becomes unusable.
The working days of the tech-priests
But this protection scheme has a flaw. The dot projector sits on the top edge of the device, right next to the speaker – and if liquid gets inside an iPhone, that is one of the most common places for it to end up. Capacitive sensors, meanwhile, are sensitive to conductive liquids. So it often happens that FaceID breaks after the device takes a dip – even when the water ingress is minimal and there is no other damage. "Romeo" simply misread the situation and, true to his namesake, killed himself for nothing.
Such devices are taken in for repair. Often for unofficial repair. And since the iPhone checks the serial numbers of spare parts (hello, Apple), you can’t just swap the entire camera unit for a working unit from a donor. The phone will reject the new unit, and FaceID won’t work anyway. So, you have to somehow fix the old one. But how can you “resurrect” a spotlight that has intentionally disabled itself?
Manufacturers of unofficial repair tools have come up with a whole series of rituals for this. And steady-handed tech-priest repairmen religiously follow them, performing microsurgery on this complex, calibrated optical system. The steadiness of hand required is unimaginable – the components inside are a few millimeters in size, and the optics are extremely sensitive. If the calibration drifts too far during the surgery, the system will not work. There are no tools for software recalibration (hello, Apple) – either you find a way back to the original parameters, or you are left without FaceID.
How does it work? Well, the first thing you need to do is read the OTP data from the original MamaBear chip.
The data remains readable even when the projector considers itself faulty. To read it, the Chinese make special "repair" programmers – supplied with sets of adapter connectors, they work with a range of components from different iPhone models, projectors included.
And then you need to do two things – deal with the MOSFET that is breaking the power supply, and replace the original protection chip. And there are many different methods for this.
For example, you can throw a jumper instead of the MOSFET, as in the photo above in the article, and replace the “MamaBear” chip by unsoldering the original FPC cable and replacing it with a special cable with a Chinese dummy chip.
The original "MamaBear" chip can remain inside, powerlessly screaming that the projector must not work under any circumstances. But it no longer has a MOSFET with which to forcibly switch the projector off, and the iPhone, for its part, sees only the Chinese chip – which serves up a copy of the original data loaded by the programmer and reports that the projector is definitely fine.
Or you can rip out the entire “MamaBear” chip and put a Chinese two-in-one replacement in its place – it both closes the MOSFET contacts and sends a copy of the OTP data to the phone.
Well, there is also an option with a minimum of soldering: an "adapter" with a dummy chip, placed between the original cable and the iPhone motherboard.
It does not solve the MOSFET problem by itself, but the Chinese found an original approach to that too: "high-voltage" programmers.
You know how various ATtiny chips can be "unbricked" and reflashed with a special high-voltage programmer? Here the situation is rather different: the Chinese high-voltage programmer brutally and irreversibly "programs" the MOSFET inside the projector into a short circuit between drain and source.
All these different devices are made and promoted by different sellers of repair equipment. All kinds of dummy chips work only with “native” programmers, and programmers often have DRM features such as linking to an account and a limited number of “repairs” that you have to pay to replenish.
Do the repairmen know that with their repairs they are completely destroying the system invented by Apple to protect the user's eyes? Not really. They are not reverse engineers – they are shamans. They have no understanding of the principles of work. They have rituals and they have results, and that is enough for them. And the cunning reverse engineers from China are reluctant to share their secrets with the public. What I described in this article is known in full only to Apple engineers and a dozen Chinese “in the know”. And to me. And to you, now.
Why Apple Are Freaks
You know, I can't blame Apple engineers too much for their “killswitch” being too active and breaking spotlights that could have worked just fine. Lasers are a dangerous subject, and the idea of protecting the user from “worst-case scenarios” is absolutely sound. Although the implementation of this protection requires some improvement.
But Apple's policy against unauthorized repair is the worst evil of all. If TrueDepth units could simply be swapped from device to device, without regard to serial numbers, there would be practically no need for these terrible, perverse repair rituals. Why bother with microsurgical soldering and programmer dances, when you could take a perfectly working TrueDepth unit from a "donor" with a broken screen, install it in the client's phone, fully restore functionality, and live in peace? Easier for the repairmen, and safer for the device owners.
But Apple's history of ugly anti-repair behavior makes it clear this will not happen on its own. Well, unless the "Right to Repair" movements in the US or EU make serial-number pairing of spare parts illegal. And that is now a real possibility. There is a fair amount of truth in the joke that the EU adds more useful features to new iPhone models than Apple does. So we will keep an eye on the legislative initiatives.