Time-of-flight (ToF) is a ranging technique whereby a sensing element sends a laser beam to the target, and a part of the reflected light returns to the receiver.

The measured time indicates the distance to the object. A 3D ToF camera provides high-resolution 3D imaging at low cost for various applications, including computer graphics, human-machine interaction (HMI), and computer vision. Powerful 3D vision can solve several problems associated with 2D vision, as it can effortlessly separate the foreground from the background. In this article, we will learn about ADI’s 3D depth-sensing ToF technology, which offers comprehensive support spanning CMOS imagers, laser drivers, software- and hardware-based depth computation, and complete systems.

Time of flight technology: overview

3D ToF uses high-power optical pulses, with durations of a few nanoseconds, to capture depth information from a scene of interest, typically over short distances. It is a type of scanner-less LIDAR (light detection and ranging) that estimates distances to targets. Range sensing (non-contact distance measurement) based on laser light relies on three primary methods: triangulation, time of flight, and modulation.

Figure 1: Time of flight measurement

A ToF camera measures distance by actively illuminating an object with a modulated light source, such as a laser, and capturing the reflected light with a sensor sensitive to the laser's wavelength (Figure 1). The sensor measures the time lag between the emission of light and when the camera receives the reflected light. The time delay is proportional to twice the distance between the camera and the object (round-trip). You can estimate the depth by:
d = (c × ΔT) / 2

Where c represents the speed of light, ΔT represents the time delay (ToF), and d is the measured depth. A ToF camera calculates the time difference between the emitted and returned signals. You can use two methods to measure the time delay: continuous wave (CW) and pulsed. The CW technique employs a periodically modulated signal for active lighting, and homodyne demodulation of the received signal determines the phase shift of the reflected light. In the pulsed technique, an illumination source generates N short light pulses, which are reflected back to a sensor equipped with an electronic shutter that captures the light over a series of brief temporal windows.
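To make the two schemes concrete, here is a minimal Python sketch (not ADI’s implementation): the pulsed method applies d = (c × ΔT) / 2 directly, while the CW method recovers the phase shift from four samples of the demodulated signal taken at 0°, 90°, 180°, and 270° of the modulation period, the textbook homodyne scheme. The sample values and the 20 MHz modulation frequency below are illustrative.

```python
# Minimal sketch of the two ToF depth calculations described above.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_pulse(delta_t_s: float) -> float:
    """Pulsed ToF: d = c * dT / 2, where dT is the round-trip delay."""
    return C * delta_t_s / 2.0

def depth_from_cw(a0: float, a1: float, a2: float, a3: float,
                  f_mod_hz: float) -> float:
    """CW ToF: recover the phase shift from four samples taken at
    0, 90, 180 and 270 degrees of the modulation period, then
    d = c * phi / (4 * pi * f_mod)."""
    phi = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phi / (4 * math.pi * f_mod_hz)

# A 6.67 ns round trip corresponds to roughly 1 m of depth.
print(depth_from_pulse(6.67e-9))                 # ~1.0 m
print(depth_from_cw(0.8, 1.0, 0.2, 0.0, 20e6))   # toy samples, 20 MHz modulation
```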

Depth sensing technologies

Including depth information in a 2D image allows the extraction of useful information. Depth information lets you track individuals’ facial and body features, enabling higher-quality and more reliable face recognition for security authentication. Higher resolution and depth accuracy lead to better classification performance.

The ToF camera (Figure 2) comprises several elements, such as a light source, a laser driver, and a ToF sensor. The light source emits light in the near-infrared domain. The source can be an edge-emitting laser diode or a vertical-cavity surface-emitting laser (VCSEL).

The laser driver modulates the intensity of the light emitted by the light source. The ToF sensor, with its pixel array, collects the returning light from the scene and outputs a value for each pixel. The lens focuses the returning light on the sensor array. Lastly, a processing algorithm converts raw output frames from the sensor into depth images or point clouds.
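That last step can be illustrated with a short, hedged sketch: back-projecting a depth image into a point cloud using a pinhole camera model. The intrinsics (fx, fy, cx, cy) below are placeholders, not values for any specific ToF module.

```python
# Back-project an HxW depth image (meters) into an Nx3 XYZ point cloud
# using a pinhole camera model with assumed intrinsics.
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop invalid (zero-depth) pixels

# Example with a synthetic 480x640 frame and made-up VGA intrinsics.
depth = np.full((480, 640), 1.5)     # flat wall 1.5 m away
cloud = depth_to_point_cloud(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(cloud.shape)                   # (307200, 3)
```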

Figure 2: Components of ToF, including depth processor

The ToF camera collects light generated by active illumination. The camera’s overall performance depends upon the uniformity and efficiency of the light collection on the pixel array. The lens must have high transmission, strong collecting power, and low stray light. Table 1 shows the different system-level components of 3D time of flight cameras and their features.

Table 1: System-level components of ToF cameras

Analog Devices ToF system: ToF signal processing device

ADI’s ToF technology is a pulse-based ToF CCD system (Figure 3). It pairs a high-performance ToF CCD with the ADDI9036, an analog front end and complete ToF signal processing device that integrates a 12-bit ADC, a depth processor (which turns the raw image data from the CCD into depth/pixel data), and a high-precision clock generator that produces the timing for both the CCD and the laser. The precision timing core of the timing generator allows adjustment of the clock and LD (laser diode) outputs with approximately 174 ps resolution at a clock frequency of 45 MHz.
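As a back-of-envelope check on that figure: one period of the 45 MHz clock is about 22.2 ns, and 174 ps is that period divided into 128 steps. The 128-step subdivision is an inference from the quoted numbers, not a statement from the text.

```python
# Sanity check of the quoted timing resolution (the 128-step
# subdivision is inferred from the numbers above, not from a datasheet).
clock_hz = 45e6
period_s = 1 / clock_hz      # ~22.22 ns
print(period_s / 128)        # ~1.736e-10 s, i.e. roughly 174 ps
```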

block-diagram-of-the-adi-tof-system
Figure 3: Block diagram of the ADI ToF system

The ADI ToF system uses a ToF CCD that is sensitive to 940 nm light. Atmospheric water absorption creates a dip in the solar spectrum around 940 nm, so operating at this wavelength lets the camera capture more usable data in outdoor environments or areas with intense ambient light, where ambient light would otherwise significantly reduce the SNR of the reflected signal. ADI’s ToF system further differentiates itself from other solutions with a 640 × 480 resolution ToF sensor, roughly 4× higher than most other ToF solutions available in the market.

A pseudo-randomization algorithm, combined with special image processing integrated into the depth processor, enables interference cancellation. This algorithm allows multiple ToF systems to operate in the same environment.
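The idea can be illustrated with a toy simulation (this is not ADI’s actual algorithm): if each camera fires and integrates in pseudo-randomly chosen time slots, another camera’s pulses land in its integration window only occasionally and incoherently, appearing as diffuse noise rather than a systematic depth error.

```python
# Toy model of pseudo-random interference suppression between two ToF
# cameras sharing a scene; purely illustrative.
import random

SLOTS = 16            # time slots per measurement cycle
CYCLES = 100_000

own_hits, alien_hits = 0, 0
for _ in range(CYCLES):
    shutter = random.randrange(SLOTS)   # camera A opens one random slot
    pulse_a = shutter                   # A's own pulse is synchronized
    pulse_b = random.randrange(SLOTS)   # camera B hops independently
    own_hits += (pulse_a == shutter)
    alien_hits += (pulse_b == shutter)

print(own_hits / CYCLES)    # 1.0   -> A always integrates its own return
print(alien_hits / CYCLES)  # ~1/16 -> B's pulses appear as diffuse noise
```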

Development platform

The AD-FXTOF1-EBZ is a depth-sensing solution suitable for developing 3D computer vision systems. It uses a VGA CCD (Figure 4) that captures a 640 × 480 depth map of a scene at 30 frames per second, up to 4× higher resolution than most other ToF systems. You can connect the AD-FXTOF1-EBZ development kit to many processor boards for computer vision application development and system evaluation; paired with an NVIDIA or Raspberry Pi family processor board, it can be used for 3D software and algorithm development. The kit offers VGA resolution for detecting objects at a finer level of granularity than other 3D ToF solutions, multiple range detection modes for increased accuracy, and the ability to detect depth in strong ambient light conditions.

Figure 4: AD-FXTOF1-EBZ 3D ToF development platform

The provided SDK includes OpenCV, Python®, MATLAB®, Open3D, and ROS wrappers that developers can use to simplify application development.
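As a taste of what application code looks like, here is an illustrative Python snippet that colorizes a 16-bit depth frame with OpenCV for a quick visual check. The get_depth_frame() helper is hypothetical, a stand-in for whatever the SDK’s Python wrapper returns; the OpenCV calls themselves are standard.

```python
# Visualize a 16-bit depth frame with OpenCV (illustrative only).
import cv2
import numpy as np

def get_depth_frame() -> np.ndarray:
    """Hypothetical grab; here we fake a 640x480 16-bit depth image."""
    return (np.random.rand(480, 640) * 4000).astype(np.uint16)

depth = get_depth_frame()
# Scale to 8 bits and apply a color map for a quick visual check.
vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
vis = cv2.applyColorMap(vis, cv2.COLORMAP_JET)
cv2.imwrite("depth_preview.png", vis)
```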

Emerging applications of ToF

Logistics, quality inspection, navigation, robotics, facial recognition, security, surveillance, safety, healthcare, and driver monitoring are all application use cases that can leverage 3D depth-sensing ToF technology. Combining high-resolution depth data with powerful classification algorithms and AI will uncover new applications. An essential application of depth sensing will be in industrial, manufacturing, and construction processes. The ability to accurately dimension and classify objects in real time through a production process is not trivial. Accurate depth sensing can determine the space utilization of warehouse bays.

Products that come off a production line must be dimensioned quickly for transfer. High-resolution depth sensing allows the determination of edges and lines of target objects in real time, along with fast volume calculations. The use of smart sensors, especially depth sensors, is becoming increasingly ubiquitous in manufacturing, as well as in transportation and logistics. From industrial machine vision for quality inspection, to volumetric detection for asset management, to navigation for autonomous manufacturing, the manufacturing industry is adopting these sensing technologies and moving toward the highest-resolution systems designed for harsh industrial environments.
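A simplified sketch of such a volume calculation, under strong assumptions (a camera looking straight down at a box on a flat floor, pinhole intrinsics, and made-up numbers):

```python
# Simplified box dimensioning from a top-down depth map: height is the
# floor-to-top depth difference; footprint area comes from counting top
# pixels scaled by the pixel footprint at that depth. All numbers are
# illustrative; a real system would segment the box and fit planes.
import numpy as np

FX = FY = 580.0                     # assumed focal lengths in pixels

depth = np.full((480, 640), 2.0)    # floor 2.0 m below the camera
depth[180:300, 260:380] = 1.7       # a 0.3 m tall box seen from above

floor_z = np.median(depth)
top_mask = depth < floor_z - 0.05   # anything 5+ cm above the floor
top_z = depth[top_mask].mean()

height = floor_z - top_z                           # 0.3 m
pixel_area = (top_z / FX) * (top_z / FY)           # m^2 per pixel at top_z
footprint = top_mask.sum() * pixel_area            # m^2
print(height, footprint, footprint * height)       # dims and volume in m^3
```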

Industrial application: automated guided vehicle (AGV) using ADI's time of flight (ToF)

The ToF module allows the robot to pick and place objects on a table, navigate through obstacles, and place the objects in another location. ToF cameras help AGVs capture depth imaging data and perceive their operating environment, so they can execute critical tasks with accuracy, speed, and convenience. The cameras help robots perform localization, navigation, object detection, mapping, and odometry.

Figure 5 shows the subdivision of a robotic system (AGV) in industrial environments into different classes: actuators, consisting of the platform and the arm; sensors; computers; and interfaces to the human worker. A torque-controlled manipulator is required for peg-in-hole operations and for overcoming perception uncertainties. The mobile platform enables safe navigation, with the ToF camera serving as a key sensory device for perceiving the operating environment. The performance discrepancies of different cameras influence the optimal camera mounting angle.

Figure 5: Block diagram of ToF interfaced with processor

The robot communicates through the Transmission Control Protocol (TCP), which carries the pick coordinate information from machine vision. The ToF camera provides 3D information about obstacles. After ground testing and calibration, the cameras are mounted and integrated within a mobile robot. The camera data, once converted into Cartesian coordinates, are added to a workspace grid map. The workspace is a two-dimensional (2D) world map divided into a grid of cells, over which a graph search algorithm finds a collision-free path. This path is the sequence of cells the AGV follows to reach the target.
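A minimal version of such a grid search is sketched below, using plain breadth-first search on a hand-made occupancy grid; a production AGV planner would typically use A* or D* with costs derived from the camera data, the robot footprint, and path smoothing.

```python
# Collision-free path search on a 2D occupancy grid (plain BFS).
from collections import deque

grid = [  # 0 = free cell, 1 = obstacle; a hand-made toy map
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def bfs_path(start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct the cell sequence
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                          # no collision-free path exists

print(bfs_path((0, 0), (4, 4)))          # sequence of cells for the AGV
```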

Stereo cameras provide industrial-grade quality for vision-guided robotics applications in factory automation and logistics. These cameras help with 3D vision technology adoption for robotic applications ranging from bin picking to navigation.

Automotive application

ToF technologies are ubiquitous in exterior and interior automotive sensing applications. In exterior sensing, the increasing adoption of autonomous driving makes it apparent that ToF cameras, RADAR, thermal cameras, LiDAR, and stereo cameras complement 2D cameras. This phalanx of sensors will cover all the “blind spots” that may occur in an automotive environment. Exterior ToF use cases include advanced parking assist and ADAS exterior cocooning.

Figure 6: In-cabin sensor locations

Present-day comfort requirements and safety needs drive interior sensing use cases of ToF. The latest technologies target both the driver and the passenger, their cognitive and biomechanical states, and in-cabin monitoring. ToF comfort functions include hand-position-based interaction (HMI) for sunroof, air-conditioning, and radio operation; detection of objects left in the vehicle; personalization via body, face, and head monitoring; and parcel recognition and classification. In-car security functions concentrate on anti-spoof face detection for driver identification.
