The lidar-equipped Apple iPad Pro 11 delivers deeper, more detailed recognition of 3D objects. ToF cameras serve the same purpose, but they use different technologies to measure the range to each point.
Apple has kicked off a race to put lidar into a variety of products. The company built lidar into its iPad Pro 11, and now it seems everyone wants to use it.
Apple's move, and the reaction to it, have rippled through the entire electronics industry. Chip and sensor suppliers are reconsidering their plans, and some have already changed their business models.
But what is lidar? Apple chose this term to describe a new sensor that measures depth — in other words, a sensor that recognizes objects in three dimensions.
Lidar in tablets and smartphones is essentially "just a kind of technology for recognizing three-dimensional objects," explained Pierre Cambou, principal analyst in the photonics and display division at Yole Développement.
Engineers in many fields, whether self-driving cars, smartphones, or tablets, have studied ways to use depth information alongside the pixels and colors produced by 2D image sensors. In the automotive industry, for example, lidars determine the distances to objects around highly automated vehicles.
Apple's recently unveiled iPad Pro 11 uses lidar to enhance its augmented reality experience. The lidar works with Apple's ARKit 3.5 development kit.
What makes this lidar notable is the particular technology it uses to measure depth. That technology is why other mobile device manufacturers, including Huawei and Vivo, are watching the sensor closely.
Different methods of 3D object recognition
Engineers use several methods for 3D object recognition, including stereo vision, structured light, and time-of-flight (ToF) measurement. To complicate matters further, ToF technology now comes in two variants: iToF, which measures the phase shift of a modulated signal, and dToF, which measures the flight time directly.
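The two ToF variants can be sketched in a few lines of code. This is a minimal illustration of the principle, not any vendor's implementation: both recover the light's round-trip time, they just measure it differently.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def dtof_distance(round_trip_s: float) -> float:
    """dToF: the pulse's round-trip time is measured directly."""
    return C * round_trip_s / 2.0


def itof_distance(phase_shift_rad: float, modulation_hz: float) -> float:
    """iToF: the round trip is inferred from the phase shift of a
    continuously modulated signal (unambiguous within one period only)."""
    round_trip_s = phase_shift_rad / (2.0 * math.pi * modulation_hz)
    return C * round_trip_s / 2.0
```

The iToF formula also shows the method's built-in limitation: once the round trip exceeds one modulation period, the phase wraps around and the distance becomes ambiguous.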
The Apple iPhone X uses structured light for facial recognition. Its depth estimation relies on an IR emitter that projects 30,000 dots in a fixed pattern. The dots are invisible to the human eye but not to the IR camera, which reads how the pattern deforms as it reflects off surfaces at different depths.
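Reading depth from those dot deformations boils down to triangulation: a dot lands on a different camera pixel depending on how far away the surface is, and that pixel shift (disparity) maps back to distance. A tiny sketch with made-up numbers (the baseline, focal length, and disparity below are illustrative, not Apple's values):

```python
def depth_from_disparity(baseline_m: float, focal_px: float,
                         disparity_px: float) -> float:
    """Triangulation: depth = baseline * focal_length / disparity."""
    return baseline_m * focal_px / disparity_px


# A dot shifted by 15 pixels, seen by a camera 5 cm from the emitter
# with a 600-pixel focal length, implies a surface about 2 m away.
surface_depth = depth_from_disparity(0.05, 600.0, 15.0)
```

Note that disparity shrinks as depth grows, which is why structured light works best at short range, such as a face in front of a phone.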
With the release of the iPad Pro 11, 3D object recognition has become deeper and more detailed thanks to dToF technology. Apple's iPad Pro is the only consumer product to date that uses dToF. Many smartphone manufacturers already use iToF for better photos (a ToF camera can blur the background), but not dToF.
Structured light provides high accuracy in depth determination, but its drawback is the complex post-processing required to match the reflected pattern against the reference and compute depth.
The advantage of the dToF method, on the other hand, is simple post-processing. Its difficulty is that measuring flight time from only a few photons per measurement requires highly sensitive, physically large photodetectors, such as single-photon avalanche photodiodes (SPADs).
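The few-photons problem described above is typically handled statistically: the sensor fires many laser pulses, histograms the photon arrival times, and takes the histogram peak as the true round-trip time, since ambient photons spread out as noise across all bins. A simulation sketch of the principle, with invented numbers (bin width, detection probabilities, and jitter are assumptions, not measured iPad parameters):

```python
import random
from collections import Counter

C = 299_792_458.0        # speed of light, m/s
BIN_S = 0.25e-9          # histogram bin width, 250 ps (assumed)
TRUE_DISTANCE_M = 3.0
TRUE_TOF_S = 2 * TRUE_DISTANCE_M / C  # round-trip time to simulate

random.seed(0)
hist = Counter()
for _ in range(2000):                      # many laser pulses
    if random.random() < 0.3:              # a signal photon is detected
        jitter = random.gauss(0, 0.1e-9)   # SPAD timing jitter (assumed)
        hist[round((TRUE_TOF_S + jitter) / BIN_S)] += 1
    if random.random() < 0.2:              # an ambient (noise) photon
        hist[random.randrange(0, 400)] += 1  # lands in a random bin

peak_bin = max(hist, key=hist.get)         # the bin with most counts
estimated_distance = C * (peak_bin * BIN_S) / 2
```

Despite detecting well under one signal photon per pulse on average, the accumulated histogram recovers the 3 m target to within a few millimeters; this is the ease of post-processing the text refers to.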
Currently, iToF is the most common 3D object recognition method. It offers accurate depth determination, simple post-processing, and high spatial resolution, using the kind of small photodetectors already widespread in 2D image sensors.
Apple, however, has taken a less-traveled path to 3D object recognition: the company chose structured light for face identification and dToF for augmented reality.
So here are the questions everyone in the world of 3D object recognition is asking: What is dToF? What is the technology made of? And who developed the components?
An analysis by System Plus Consulting, a division of Yole Développement, details the 3D object recognition module in the Apple iPad Pro 11.
In an interview with EE Times, Sylvain Hallereau, senior technology and cost analyst at System Plus, explained that the iPad Pro 11's lidar consists of a vertical-cavity surface-emitting laser (VCSEL) from Lumentum and a receiver developed by Sony: a near-infrared (NIR) CMOS sensor that measures the flight time.
A near-infrared CMOS sensor built on Sony single-photon avalanche photodiodes
Cutting open the Sony CMOS sensor during the teardown was a revelation to experts who follow photonics, including Yole's Cambou. In a recent blog post, he wrote that what "looked like an old iToF device with 10-micron pixels" turned out to be "the first consumer CMOS sensor with in-pixel connection, and yes, we are talking about an array of single-photon avalanche diodes."
"In-pixel connection" is an important property. Sony is the first to use 3D stacking in a ToF sensor: the in-pixel connection lets the CMOS image sensor sit directly on a logic substrate. Thanks to the integrated logic array, the sensor can perform simple calculations of the distance between the iPad and surrounding objects, Hallereau explained.
Sony has entered the dToF segment with a new generation of CMOS sensors: 30-kilopixel arrays of single-photon avalanche diodes with 10-micron pixels.
This is not just a technological feat for Sony, though. The company has also changed its business model.
Traditionally, the Japanese giant has focused on imaging rather than sensing. However, Cambou notes that "Sony renamed its semiconductor business Imaging and Sensing a year ago." Sony then took two steps. The first was shipping iToF sensors to Huawei and Samsung in 2019, which earned the company about $300 million. The second was winning the contract to develop dToF sensors for the Apple iPad.
Cambou suspects that dToF sensors could eventually reach the iPhone. In his analysis, he notes that "Sony's sensing revenues are likely to surpass $1 billion in 2020, in a market that has just passed the $10 billion mark. This successful transition from imaging to sensing has cemented Sony's position in the CMOS sensor market and will underpin the new division's growth."
Lumentum vertical-cavity surface-emitting lasers
In addition to Sony's CMOS sensor, the lidar includes vertical-cavity surface-emitting lasers from Lumentum. The laser design connects several electrodes to the emitter.
Taha Ayari, technology and cost analyst at System Plus, highlights a new processing step, called a mesa contact, that Lumentum added to its VCSEL. A Lumentum laser emits light from the surface of the substrate; fine-tuning the emission requires power management and separate control of the emitter arrays. Ayari believes Lumentum added this step to improve component testing at the wafer level.
To generate pulses and control power and beam shape, the emitter uses a driver IC from Texas Instruments, packaged in a wafer-level chip-scale package (WLCSP) molded on five sides.
Finally, System Plus claims that the Lumentum laser uses a new diffractive optical element (DOE) from Himax to create the dot pattern.
On the following pages, we share a few System Plus slides illustrating what the teardown revealed, along with slides on the outlook for the lidar market.
Apple iPad Pro features: RGB main camera module, wide camera module and rear LiDAR module
Here is what the cross section of the LiDAR module looks like.
Image Sensor Overview
What the VCSEL Die Looks Like
VCSEL Driver Die: Driver IC Packaged in a Fan-In, 5-Side-Molded WLCSP
Diffractive Optical Element
We are perhaps the strongest center of competence in automotive electronics development in Russia. We are actively growing and have opened many vacancies (about 30, including in the regions), such as software engineer, design engineer, and lead development engineer (DSP programmer).
We have many interesting tasks from the car manufacturers and groups that drive the industry. If you want to grow as a specialist and learn from the best, we would be glad to have you on our team. We are also ready to share our expertise on the most important developments in automotive. Ask us any questions and we will answer and discuss them.
Read more helpful articles:
- Free Online Courses in Automotive, Aerospace, Robotics and Engineering (50+)
- [Forecast] Transport of the future (short-, medium-, and long-term horizons)
- The best materials on hacking cars from DEF CON 2018-2019
- [Forecast] Motornet: a data exchange network for robotic transport
- Companies spent $16 billion on self-driving cars to capture an $8 trillion market
- Cameras or lasers
- Open source autonomous cars
- McKinsey: rethinking electronics software and architecture in automotive
- Another OS war is already underway under the hood of cars
- Program code in the car
- There are more lines of code in a modern car than …