LiDAR vs. 3D ToF Sensors — How Apple Is Making AR Better for Smartphones

Apple has added a new sensor to the rear camera of its fourth-generation iPad Pro, and it's pretty exciting. It's called the LiDAR Scanner, a scanning "light detection and ranging" sensor, and you may already be acquainted with the technology if you follow driverless car news. So will we also get it on the upcoming iPhone 12 Pro?

First off, LiDAR is a sensor that uses laser light, much like radar uses radio waves, to survey the environment and accurately measure distance. And if that sounds familiar, it's because Android phones over the last few years have been including 3D time-of-flight (ToF) sensors in their cameras, a type of scannerless LiDAR.

Both LiDAR scanners and scannerless LiDARs like 3D ToF sensors use light and the time-of-flight technique to measure distance. However, the scanning type tends to be more accurate since it uses many small laser pulses instead of one large flash pulse.

Android phones with a 3D ToF component have seen considerable improvements in augmented reality (AR) and bokeh-style portrait mode photos. Samsung has made an enormous push to include these sensors in its smartphones; the Galaxy Note 10+, Galaxy S20+, and Galaxy S20 Ultra all have one.

The iPhone 12 Pro and 12 Pro Max will likely include a LiDAR Scanner as well, so why is Apple opting for the more expensive component? The technologies are very similar, but how they capture 3D information is different. TL;DR: Apple's method is better.

LiDAR Scanning vs. Scannerless 3D ToF

A LiDAR scanner and a 3D time-of-flight sensor are, at their core, the same thing. Both are laser-based sensors that use time of flight to determine the distance of faraway objects. Using infrared (IR) lasers, they measure how long it takes for emitted light to bounce off objects in the scene and return to the sensor.
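The math behind that measurement is simple: distance is the speed of light multiplied by half the round-trip time (half, because the pulse travels out and back). Here's a back-of-the-envelope sketch in Swift, with a made-up timing value purely for illustration:

```swift
import Foundation

// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// Converts a measured round-trip time (in seconds) into a distance (in meters).
/// The pulse travels to the object and back, so the trip is halved.
func distance(fromRoundTripTime t: Double) -> Double {
    return speedOfLight * t / 2.0
}

// A reflection arriving ~33 nanoseconds after the pulse corresponds to an
// object roughly 5 meters away.
print(distance(fromRoundTripTime: 33e-9)) // ≈ 4.95 m
```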

With sophisticated algorithms, these sensors can help create a 3D point cloud of the surrounding area. Your smartphone can then relay the information to first- and third-party apps. That data can help better determine the depth of field in photos, for one example. Another one? It helps create a more accurate, immersive AR experience, which places and manages artificial objects in whatever real-world scenery the camera lens is pointing at.
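To picture how a point cloud comes together: each depth reading, combined with the camera's intrinsics, can be unprojected from a 2D pixel into a 3D position. Below is a simplified pinhole-camera sketch in Swift; the intrinsic values are invented for the example, not pulled from any actual device:

```swift
import simd

/// Unprojects a pixel with a known depth into a 3D point in camera space
/// using a standard pinhole model:
///   X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d
func unproject(u: Float, v: Float, depth: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
    let x = (u - cx) * depth / fx
    let y = (v - cy) * depth / fy
    return SIMD3<Float>(x, y, depth)
}

// A pixel slightly right of center in a hypothetical depth image, measured
// at 2 meters, lands almost directly in front of the camera.
let point = unproject(u: 980, v: 720, depth: 2.0,
                      fx: 1_600, fy: 1_600, cx: 960, cy: 720)
print(point) // SIMD3<Float>(0.025, 0.0, 2.0)
```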

The differences are in the way both sensors capture 3D information.

Android phones use flash-based scannerless LiDAR systems. These systems, such as Samsung's DepthVision, illuminate the area with a single pulse of IR light (which is why you don't see it). All the reflections bouncing off objects in the scene return to the sensor, where they are analyzed and a 3D map is created. Sensors using this method are cheaper to manufacture, which is why they have been showing up in more and more smartphones.
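Conceptually, every pixel in a flash-based sensor times its reflection from that same single pulse, so the whole depth frame is captured in one exposure. A toy Swift sketch of that idea, using simulated arrival times rather than any real sensor pipeline:

```swift
import Foundation

// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// One simulated flash exposure: a single IR pulse lights the whole scene,
/// and every pixel's round-trip time is converted to depth in one pass.
func depthFrame(fromRoundTripTimes times: [[Double]]) -> [[Double]] {
    return times.map { row in row.map { speedOfLight * $0 / 2.0 } }
}

// A tiny 2x2 "frame" of arrival times, all captured from the same pulse.
let frame = depthFrame(fromRoundTripTimes: [[10e-9, 12e-9],
                                            [20e-9, 33e-9]])
print(frame) // ≈ [[1.5, 1.8], [3.0, 4.95]] meters
```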

Samsung Galaxy S20 Ultra and Galaxy Note 10+. Image by Jon Knight/Gadget Hacks

The new 2020 iPad Pro (and hopefully the upcoming iPhone 12 Pro models) uses a scanning LiDAR system. Instead of one big flash, it sends a smaller pulse at a single spot and uses the time of flight to determine that spot's distance. It then sweeps the light across the rest of the environment at "nano-second speeds" to capture the rest of the surroundings.
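In other words, the depth map is built up one directed sample at a time, with each reading tied to a known beam direction. Here's a toy Swift simulation of that point-by-point process; the fake measurement closure stands in for hardware that doesn't exist in this sketch:

```swift
import Foundation

// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// One depth sample tied to a known beam direction.
struct ScanSample {
    let azimuth: Double   // radians, left-right
    let elevation: Double // radians, up-down
    let depth: Double     // meters
}

/// Simulated scan: each direction gets its own pulse and its own timing,
/// so every depth reading is attributed to exactly one beam direction.
func scan(directions: [(azimuth: Double, elevation: Double)],
          measureRoundTrip: (Double, Double) -> Double) -> [ScanSample] {
    return directions.map { dir in
        let t = measureRoundTrip(dir.azimuth, dir.elevation) // one pulse per point
        return ScanSample(azimuth: dir.azimuth,
                          elevation: dir.elevation,
                          depth: speedOfLight * t / 2.0)
    }
}

// Fake "hardware" that always sees a wall about 3 meters away.
let samples = scan(directions: [(0, 0), (0.1, 0), (0.2, 0)],
                   measureRoundTrip: { _, _ in 20e-9 })
print(samples.map { $0.depth }) // ≈ [3.0, 3.0, 3.0]
```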

Since rotating LiDARs are bulky and vulnerable to external vibrations, Apple uses a solid-state LiDAR, a type without moving parts. There are currently two methods that Apple may use: MEMS (microelectromechanical systems) or OPA (optical phased array).

MEMS-based scanners use micro-mirrors to control the direction and focus of the emission. OPA-based scanners use an array of optical emitters that send out bursts of light in a specific pattern to create a directional beam. Either way, the system can adjust the size and focus of the emission to cover every point of the environment.
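From software's point of view, both approaches amount to stepping a narrow beam through a grid of directions. The sketch below generates a simple raster pattern of beam angles in Swift; the step counts and field of view are arbitrary illustration values, not anything Apple has published:

```swift
import Foundation

/// Generates a simple raster pattern of beam directions (in degrees) that a
/// steerable emitter could sweep through, row by row.
func rasterPattern(horizontalSteps: Int, verticalSteps: Int,
                   fieldOfView: Double) -> [(azimuth: Double, elevation: Double)] {
    var pattern: [(azimuth: Double, elevation: Double)] = []
    for row in 0..<verticalSteps {
        for col in 0..<horizontalSteps {
            // Spread the steps evenly across the field of view, centered on zero.
            let azimuth = (Double(col) / Double(horizontalSteps - 1) - 0.5) * fieldOfView
            let elevation = (Double(row) / Double(verticalSteps - 1) - 0.5) * fieldOfView
            pattern.append((azimuth: azimuth, elevation: elevation))
        }
    }
    return pattern
}

// A coarse 4x3 sweep across a 60-degree field of view: 12 beam directions.
print(rasterPattern(horizontalSteps: 4, verticalSteps: 3, fieldOfView: 60).count) // 12
```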

We don't know which method Apple uses, since it hasn't released that information, but for the sake of comparison with 3D ToF sensors, the advantages are nearly the same either way.

Advantages to Apple's Method

When light is emitted, it isn't just the intended reflections that arrive back. The sensors also receive indirect reflections, off-angle reflections, and multiple bounces caused by the environment.

While algorithms help improve the signal-to-noise ratio of the reflected light, these stray reflections become much harder to untangle when a single flash of light receives all of the returns at the same time. The result with a single flash laser pulse is that distances can be mismeasured, often registering deeper than they actually are, which breaks AR immersion and disrupts the accuracy of bokeh effects in photos and videos.
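A simple way to picture that cleanup step: discard depth samples whose return signal doesn't stand out clearly from the noise. This is just a conceptual Swift sketch with invented thresholds, not how any particular sensor firmware works:

```swift
import Foundation

/// A raw depth reading along with how strongly its reflection came back.
struct DepthSample {
    let depth: Double          // meters
    let signalStrength: Double // arbitrary units
    let noiseFloor: Double     // arbitrary units
}

/// Keeps only the samples whose signal stands out clearly from the noise,
/// i.e. whose signal-to-noise ratio clears a minimum threshold.
func filterByConfidence(_ samples: [DepthSample], minSNR: Double) -> [DepthSample] {
    return samples.filter { $0.signalStrength / $0.noiseFloor >= minSNR }
}

let raw = [
    DepthSample(depth: 2.1, signalStrength: 9.0, noiseFloor: 1.0), // clean direct reflection
    DepthSample(depth: 5.7, signalStrength: 1.2, noiseFloor: 1.0), // likely a multi-bounce ghost
]
print(filterByConfidence(raw, minSNR: 3.0).map { $0.depth }) // [2.1]
```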

That's why a scanning LiDAR system, such as Apple's LiDAR Scanner, is an improvement over 3D ToF sensors. The LiDAR Scanner maps the area point by point, minimizing the noise it receives and therefore producing a more accurate reading and a cleaner 3D point cloud.
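On the software side, ARKit is what hands that depth data to apps. Here's a minimal sketch of opting in, assuming an iOS app target recent enough to include ARKit's scene reconstruction and scene depth APIs (device checks and rendering are omitted):

```swift
import ARKit

final class DepthSessionController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Mesh reconstruction is only offered on LiDAR-equipped devices.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        // Per-pixel depth from the LiDAR Scanner, plus a confidence map.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        // depth.depthMap is a CVPixelBuffer of per-pixel distances in meters;
        // depth.confidenceMap rates how trustworthy each reading is.
        _ = depth.depthMap
    }
}
```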

Image via Apple

3D ToF sensors used in Android smartphones are also vulnerable to light reflected from specular surfaces, smooth surfaces that create mirror-like reflections.

Another thing to consider with LiDAR is how far it can measure. Apple's initial release of the technology is limited to five meters, the same as most 3D ToF sensors. However, point-based LiDARs can reach even greater distances than scannerless LiDARs, so in the future, we could see Apple beating its competition in range.
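In practice, that range limit means any reading beyond roughly five meters should be treated as unreliable. A trivial Swift sketch of discarding out-of-range readings, with the cutoff taken from the figure quoted above:

```swift
import Foundation

/// Drops readings beyond the sensor's rated range by marking them invalid (nil).
func dropOutOfRange(_ depths: [Double], maxRange: Double = 5.0) -> [Double?] {
    return depths.map { depth -> Double? in
        depth <= maxRange ? depth : nil
    }
}

print(dropOutOfRange([1.2, 4.9, 7.3])) // [Optional(1.2), Optional(4.9), nil]
```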

Cover image via MKBHD/YouTube
