What do 3D sensors do in smartphones? An explainer

Increasingly, smartphones ship with so-called 3D sensors, also known as depth sensors. Most of them are ToF sensors, named after the time-of-flight technology they use. According to rumors, such a sensor will be installed in the new iPhone (Apple calls it LiDAR; we covered this in more detail in another article). These sensors are quite expensive, yet few people understand what they are for. Manufacturers claim that the sensors let you take better photos and portraits or add augmented reality features. But is that really so?

Today we will discuss why smartphones need 3D sensors and how they work, and, of course, we will run several tests to check the manufacturers’ claims.

What is a 3D sensor (depth sensor)

First, let’s figure out what a 3D sensor is. A camera captures a projection of the surrounding world onto a plane. From the photograph alone, you cannot tell the real size of an object, whether it is as small as a bottle or as big as the Leaning Tower of Pisa. The distance to it is just as hard to judge.

To recover the real size of objects in a photo and the scale of the scene, and to tell what is closer to the camera and what is farther away, 3D sensors are needed. They have long been used in robotics, autonomous vehicles, games, medicine and many other fields. In fact, our eyes are a 3D sensor too. Unlike the LiDAR and ToF sensors in smartphones, though, the eye is a passive 3D sensor: it does not emit any light and works only with the light that arrives on its own. It is this depth perception that lets us move through space and interact with the objects around us. Now 3D sensors have arrived in smartphones as well.

How does ToF work?

The LiDAR in the iPad, like the 3D sensors in Android smartphones, is a time-of-flight sensor, or ToF for short. These sensors determine the distance to surrounding objects by measuring how long it takes light to travel from the camera to the object and back. It is very similar to an echo in a cave, which also returns with a delay after bouncing off the walls. Light takes about 3.3 nanoseconds to cover 1 meter and about 33 picoseconds to cover 1 centimeter. So far everything seems clear. But there’s a problem.
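The arithmetic itself is trivial. Here is a minimal sketch of converting a measured round-trip time into a distance (the only constant needed is the speed of light); the factor of two accounts for the light traveling there and back:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Distance from a time-of-flight measurement: the light
    travels to the object and back, so we halve the round trip."""
    return C * round_trip_seconds / 2

# An object 1 m away returns light after roughly 6.7 nanoseconds.
print(f"{tof_distance(6.67e-9):.3f} m")  # ~1.000 m
```

The hard part, as the next paragraph explains, is measuring those nanoseconds at all.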

These are incredibly short intervals. How can a camera measure them? It can hardly shoot a billion frames per second and compare them. There are two main approaches to this problem: dToF (direct ToF) and iToF (indirect ToF). And to intrigue you even more: the vast majority of Android smartphones use iToF sensors, while the LiDAR in Apple’s iPad, and most likely in upcoming iPhones, is a rare representative of the dToF family. So how do they differ?

iToF – indirect ToF

Let’s start with iToF. In these sensors, the emitter sends out high-frequency modulated light: it is switched on and off tens of millions of times per second. Because the light needs time to fly to the object and back, the phase of the returning light (roughly speaking, where it currently is in its on/off cycle) differs slightly from the phase of the light at the moment it was sent. On the sensor, the reference signal and the signal reflected from the object are superimposed, and the resulting phase shift is measured, which tells us the distance to each point of the object.
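In practice, iToF pixels usually sample the returning signal at several phase offsets and recover the shift with an arctangent. Here is a minimal sketch of the common four-sample ("four-bucket") scheme; the 20 MHz modulation frequency is an assumption for illustration, and real sensors do all of this in hardware, per pixel:

```python
import math

C = 299_792_458.0  # speed of light, m/s
F_MOD = 20e6       # assumed modulation frequency, 20 MHz

def itof_distance(q0, q90, q180, q270):
    """Estimate distance from four phase-stepped samples
    (the common 'four-bucket' iToF demodulation scheme)."""
    # Phase shift of the returned signal relative to the emitted one.
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    # The light covers the distance twice (there and back),
    # hence the factor of 2 in the denominator.
    return (C * phase) / (4 * math.pi * F_MOD)

# Beyond c / (2 * F_MOD) ≈ 7.5 m the phase wraps around,
# so this is the sensor's unambiguous range.
print(f"max range: {C / (2 * F_MOD):.2f} m")

# Example: samples corresponding to a 90° phase shift -> about 1.87 m.
print(f"distance: {itof_distance(0.0, 1.0, 0.0, -1.0):.2f} m")
```

Note the trade-off this implies: a higher modulation frequency gives finer depth resolution but a shorter unambiguous range.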

dToF – direct ToF

dToF works a little differently. These sensors directly measure the time between the emission of a light pulse and the detection of its reflection. For this they use SPADs, single-photon avalanche diodes, which can detect extremely faint pulses of light, in fact even single photons. A SPAD sits in every pixel of the sensor. As the emitter, such sensors typically use a VCSEL (vertical-cavity surface-emitting laser), a laser emitter similar to those used in laser mice and many other devices. The dToF sensor in Apple’s LiDAR was developed in collaboration with Sony and is the first mass-produced consumer dToF sensor.
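Because a single photon detection is noisy, SPAD-based sensors typically fire many pulses, build a histogram of arrival times, and take the peak. Here is a toy simulation of that idea; the bin width, timing jitter, and pulse count are made-up illustrative values:

```python
import random
from collections import Counter

C = 299_792_458.0  # speed of light, m/s
BIN_PS = 100       # assumed histogram bin width: 100 picoseconds

def dtof_distance(true_distance_m, n_pulses=1000):
    """Toy dToF measurement: fire many pulses, histogram the photon
    arrival times, and take the peak bin (as SPAD sensors do)."""
    round_trip_ps = 2 * true_distance_m / C * 1e12
    hist = Counter()
    for _ in range(n_pulses):
        # Each detection is smeared by the detector's timing jitter.
        t = random.gauss(round_trip_ps, 150)  # assumed 150 ps jitter
        hist[int(t // BIN_PS)] += 1
    peak_bin = max(hist, key=hist.get)
    t_est_ps = (peak_bin + 0.5) * BIN_PS
    return C * t_est_ps * 1e-12 / 2  # halve: there and back

print(f"estimated distance: {dtof_distance(2.0):.3f} m")  # ~2.0 m
```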

Why Apple put a dToF sensor in the iPad is anyone’s guess, but the advantages of such a sensor are worth highlighting. First, unlike an iToF emitter, which floods the scene with a solid wall of light, the dToF emitter shines only in a sparse set of directions, which saves battery life. Second, the dToF sensor is less prone to depth errors caused by so-called multipath interference, a typical problem of iToF sensors: light bounces between objects before reaching the sensor and distorts its measurements.
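To see why multipath skews iToF readings, consider a toy calculation (the amplitudes and the 20 MHz modulation frequency here are assumptions for illustration): the sensor sees the sum of the direct reflection and a weaker delayed bounce, and the phase of that sum no longer corresponds to the true distance.

```python
import cmath, math

C = 299_792_458.0
F_MOD = 20e6  # assumed iToF modulation frequency

def phase_for(distance_m):
    """Phase shift accumulated over a round trip to this distance."""
    return 4 * math.pi * F_MOD * distance_m / C

# Direct path to a wall 2 m away...
direct = 1.0 * cmath.exp(1j * phase_for(2.0))
# ...plus a weaker bounce via a nearby surface (3 m total path).
bounce = 0.4 * cmath.exp(1j * phase_for(3.0))

# The pixel measures the phase of the combined signal.
measured_phase = cmath.phase(direct + bounce) % (2 * math.pi)
biased = C * measured_phase / (4 * math.pi * F_MOD)
print(f"reported: {biased:.2f} m instead of 2.00 m")  # ~2.27 m
```

In this sketch a 40%-strength bounce pushes the reported distance from 2.00 m to roughly 2.27 m. A dToF sensor can simply ignore the later echo, since it arrives in a different histogram bin.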

Now that we have figured out how it works, let’s see what 3D sensors are actually used for in smartphones.

Why is it needed in smartphones

1. Security

We owe the first mass adoption of 3D sensors in smartphones to Apple and its Face ID technology. Face recognition based on three-dimensional data is much more accurate and reliable than classic recognition from a flat photo. For Face ID, Apple uses structured-light technology, which we will cover in more detail another time.

2. AR

Most manufacturers claim that better and more accurate augmented reality is the main task of 3D sensors. This direction is also backed by Google: just recently it presented an upcoming update to its ARCore augmented reality library that lets virtual objects be placed in the scene more realistically and interact with real ones.

Apple built LiDAR into the iPad Pro for the same task. AR can work without a 3D sensor, but with one everything is more accurate and reliable, and the job becomes computationally much cheaper, offloading the processor. The 3D sensor takes AR to another level.

3. Photo enhancement

A number of manufacturers, such as Samsung and HUAWEI, say the 3D sensor is used primarily for better background blur and more accurate autofocus when shooting video. In other words, it improves the quality of ordinary photos and videos.
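Conceptually, depth-assisted portrait mode is just depth-dependent blending. Here is a minimal sketch of the idea (assuming you already have an RGB photo and a depth map aligned to it; the file names, focus distance, and thresholds are hypothetical), not a description of any particular vendor’s pipeline:

```python
import numpy as np
import cv2  # OpenCV

def portrait_blur(image, depth, focus_m=1.5, tolerance_m=0.5):
    """Blend a sharp foreground with a blurred background, using a
    depth map aligned to the image (depth values in meters)."""
    blurred = cv2.GaussianBlur(image, (31, 31), 0)
    # Mask is 1.0 near the focus distance, 0.0 far from it.
    mask = (np.abs(depth - focus_m) < tolerance_m).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (15, 15), 0)[..., np.newaxis]  # soft edges
    return (mask * image + (1 - mask) * blurred).astype(image.dtype)

# Usage (hypothetical files): the depth map would come from the ToF sensor.
# img = cv2.imread("photo.jpg").astype(np.float32)
# depth = np.load("depth.npy")  # same H x W as the photo
# cv2.imwrite("portrait.jpg", portrait_blur(img, depth))
```

Without a depth sensor, the phone has to estimate this mask from the image alone, which is exactly where portrait modes tend to fail around hair and glasses.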

4. Other

Some smartphones expose the sensor data to third parties, so more and more apps are appearing that put it to new uses. For example, with third-party apps a 3D sensor can be used for measuring objects, 3D scanning, and motion tracking. There is even an app that turns your smartphone into a night vision device.
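Measurement and 3D scanning both start the same way: the depth map is back-projected into a point cloud using the camera intrinsics. Here is a minimal sketch of the standard pinhole back-projection (the sensor resolution and intrinsic values below are made-up illustrative numbers):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3D points using the
    pinhole camera model; intrinsics fx, fy, cx, cy are in pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 m away seen by a 240x180 depth sensor.
depth = np.full((180, 240), 2.0)
cloud = depth_to_point_cloud(depth, fx=210.0, fy=210.0, cx=120.0, cy=90.0)
print(cloud.shape)  # (43200, 3): points ready for meshing or measurement
```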

Tests

We have covered the theory; now let’s see how it all works in practice, and whether these expensive 3D sensors in flagships are of any use. For the tests we took the Redmi Note 9S, which has a ToF sensor, and shot a few pictures in portrait mode; for the second shot we simply covered the 3D camera with a finger. And here’s what happened.

It’s simple: the blur really is stronger and better when ToF is working.

And for the purity of the experiment, we repeated the test with the Samsung Galaxy S20 Ultra, which also has a ToF camera.

Now try to find even a single difference.

What’s going on? The point is that different manufacturers use the ToF camera in different ways and to different degrees.

It seems that some smartphone manufacturers put ToF sensors into their phones not so much for marketing, to add yet another camera, as simply just in case, and then leave it to the algorithms to decide whether to use that camera or not.

At the same time, there is no real need for LiDAR or ToF cameras right now. So this is probably still more marketing than anything else.
