
2D and 3D Lidar

Light Detection and Ranging (Lidar) serves as the primary sensory system for autonomous mobile robots. Whether using planar 2D scanning for safety and mapping or volumetric 3D sensing for complex object recognition, Lidar enables AGVs to navigate dynamic environments with centimeter-level precision.


Core Concepts

Time of Flight (ToF)

The fundamental principle: a laser pulse is emitted and the time until its reflection returns is measured. Because the speed of light is known, this round-trip time yields the distance to the surface, forming the basis of spatial mapping.

2D Planar Scanning

A single laser beam rotates on a horizontal plane. This creates a "slice" of the world, ideal for floor-plan mapping and defining strict safety stop zones for AGVs.

3D Volumetric Cloud

Utilizes multiple laser channels stacked vertically to create a dense point cloud. This allows robots to detect overhanging obstacles and slopes, and to identify specific objects.

SLAM Navigation

Simultaneous Localization and Mapping allows the robot to build a map of an unknown environment while keeping track of its current location within it using Lidar data.

Reflectivity & Remission

Lidar sensors measure the intensity of the returned light. This data helps distinguish between retro-reflective tape (navigation markers) and standard walls or dark obstacles.

Functional Safety (PL-d)

Safety-rated 2D Lidars, certified to standards such as ISO 13849 PL-d, are critical for workplace safety compliance (including OSHA requirements). They feature redundant internal monitoring to ensure the robot slows when a person enters the warning field and stops reliably when the protective field is breached.

How It Works: From Laser to Logic

The core of a Lidar system is the emitter-receiver pair. A laser diode emits a short pulse of light, often in the infrared spectrum (905 nm or 1550 nm). In mechanical Lidar, a rotating mirror deflects this beam across the environment, typically scanning 360 degrees horizontally.

When the light hits an object, it scatters, and a portion returns to the sensor. The system calculates distance from the round-trip time t and the speed of light c ($$ d = \frac{c \times t}{2} $$). This happens hundreds of thousands of times per second.
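A minimal sketch of that time-of-flight calculation; the round-trip time used here is an illustrative value, not a figure from any particular sensor:

```python
# Time of flight: d = (c * t) / 2, where t is the measured round-trip time.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Convert a round-trip pulse time (seconds) into a one-way distance (meters)."""
    return C * round_trip_s / 2.0

# A pulse that returns after ~66.7 nanoseconds corresponds to a target about 10 m away.
print(tof_distance(66.7e-9))  # ~10.0
```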

Solid-state Lidar, a newer technology, eliminates moving parts by using optical phased arrays or MEMS mirrors to steer the beam. This increases durability for industrial AMRs exposed to high vibration, though often with a reduced Field of View (FoV) compared to spinning mechanical units.

The resulting "Point Cloud" is fed into the robot's navigation stack. For 2D Lidar, the scan is rasterized into an occupancy grid; for 3D Lidar, it is converted into a volumetric voxel map used for complex path planning.
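As a rough sketch of that first step in the 2D case, planar hits can be rasterized into an occupancy grid. The 5 cm cell size and 20 m grid below are assumed example values, not defaults of any specific navigation stack:

```python
import numpy as np

def scan_to_occupancy(points_xy: np.ndarray, resolution: float = 0.05,
                      size_m: float = 20.0) -> np.ndarray:
    """Mark grid cells hit by 2D Lidar returns.

    points_xy : (N, 2) array of x, y hits in the robot frame (meters).
    resolution: cell edge length in meters (5 cm here, an assumed value).
    size_m    : width/height of the square grid, centered on the robot.
    """
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)  # 0 = free/unknown, 1 = occupied
    # Shift so the robot sits at the grid center, then convert meters to cell indices.
    idx = np.floor((points_xy + size_m / 2.0) / resolution).astype(int)
    inside = (idx >= 0).all(axis=1) & (idx < cells).all(axis=1)
    grid[idx[inside, 1], idx[inside, 0]] = 1
    return grid

hits = np.array([[1.0, 0.5], [-3.2, 4.1]])   # placeholder returns, in meters
grid = scan_to_occupancy(hits)
```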

[Diagram: Technical overview of Lidar operation]

Real-World Applications

Warehouse Intralogistics

2D Safety Lidars are standard on autonomous forklifts. They create dynamic "protective fields" that lengthen as the vehicle speeds up, ensuring the AGV can stop safely before hitting a pedestrian or pallet.
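The field-switching logic is often described as a lookup from commanded speed to a pre-certified field set; a simplified sketch, with purely illustrative speed bands and field lengths:

```python
# Illustrative protective-field table (field length in meters per speed band).
# These bands and lengths are assumed examples, not values from any standard.
FIELD_SETS = [
    (0.5, 0.8),   # up to 0.5 m/s -> 0.8 m field
    (1.0, 1.5),   # up to 1.0 m/s -> 1.5 m field
    (2.0, 3.0),   # up to 2.0 m/s -> 3.0 m field
]

def select_field(speed_mps: float) -> float:
    """Return the protective-field length for the current commanded speed."""
    for max_speed, field_len in FIELD_SETS:
        if speed_mps <= max_speed:
            return field_len
    raise ValueError("speed exceeds the highest configured field set")

print(select_field(1.6))  # 3.0 m field selected for a fast travel segment
```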

Outdoor AMR Delivery

Last-mile delivery robots utilize 3D Lidar to navigate sidewalks. The vertical data helps them detect curbs, drop-offs, and distinguish between a flat paper bag on the ground and a concrete block.

Cleanroom & Hospital

In sterile environments, high-precision Lidar allows robots to navigate narrow corridors without physical guide tape. 3D perception is crucial here to detect hanging wires or open cabinet drawers that 2D scanners might miss.

Heavy Manufacturing

Large-scale parts transport robots use multi-Lidar fusion (combining data from 2-4 sensors) to achieve a complete 360-degree safety cocoon around massive payloads that obstruct the robot's primary view.
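A minimal sketch of that fusion step: each sensor's planar scan is transformed into the robot frame using its mounting pose, then the clouds are stacked. The mounting poses and scan points below are hypothetical:

```python
import numpy as np

def to_robot_frame(points_xy: np.ndarray, x: float, y: float, yaw: float) -> np.ndarray:
    """Transform 2D scan points from a sensor frame into the robot (base) frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T + np.array([x, y])

# Hypothetical mounting poses: two corner sensors covering opposite half-planes.
front_scan = np.array([[1.0, 0.0], [2.0, 0.5]])   # placeholder returns, sensor frame
rear_scan = np.array([[1.5, -0.2]])
merged = np.vstack([
    to_robot_frame(front_scan, x=0.6, y=0.4, yaw=np.deg2rad(45)),
    to_robot_frame(rear_scan, x=-0.6, y=-0.4, yaw=np.deg2rad(225)),
])  # one 360-degree cloud around the payload
```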

Frequently Asked Questions

What is the primary difference between 2D and 3D Lidar for navigation?

2D Lidar scans a single horizontal plane, returning X and Y coordinates. It is excellent for indoor mapping and safety stopping but cannot see obstacles below or above that scan line (like a table top). 3D Lidar adds the Z-axis, creating a volumetric cloud that allows the robot to understand the shape of objects and navigate complex, unstructured environments.

Can I use a navigation Lidar for safety functions?

Generally, no. Safety Lidars are certified to specific standards (ISO 13849 PL-d, Category 3) and have redundant internal architecture to fail-safe. Standard navigation Lidars provide raw data for mapping but do not have the certified reliability required to protect human life in an industrial setting.

How does Lidar handle glass and mirrors?

Transparent surfaces (glass) and specular surfaces (mirrors) are the Achilles' heel of Lidar. Light may pass through glass or bounce off a mirror, creating "ghost" obstacles or missing the wall entirely. This is typically mitigated by sensor fusion (adding ultrasonic sensors) or software filters that identify and mask these areas in the map.
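One simple form of such a software filter is a static mask of known glass regions in map coordinates, inside which returns are discarded before mapping; the zone coordinates below are purely illustrative:

```python
# Hypothetical "glass zone" mask: rectangles (x_min, y_min, x_max, y_max)
# in map coordinates where Lidar returns are known to be unreliable.
GLASS_ZONES = [(4.0, 0.0, 4.2, 6.0)]  # e.g., a glass wall along a corridor

def in_glass_zone(x: float, y: float) -> bool:
    """True if a return falls inside a masked region and should be ignored."""
    return any(xmin <= x <= xmax and ymin <= y <= ymax
               for xmin, ymin, xmax, ymax in GLASS_ZONES)

print(in_glass_zone(4.1, 3.0))  # True: drop this point before it enters the map
```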

What is the advantage of Solid-State Lidar over Mechanical Lidar?

Mechanical Lidars spin physically, which introduces wear on motors and bearings, making them susceptible to failure in high-vibration AGV environments. Solid-state Lidars have no moving parts, offering significantly higher durability and longevity, though often at the cost of a narrower Field of View (less than 360°).

How does environmental dust or fog affect Lidar performance?

Particulates can reflect laser light, causing false obstacle detection. High-quality industrial Lidars use "multi-echo" technology, where they evaluate multiple returns from a single pulse. This allows the sensor to ignore the weak early reflection from dust and register the stronger reflection from the actual wall behind it.
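A toy sketch of the multi-echo idea, keeping the farthest return per beam that still clears a minimal intensity threshold; the echo format and threshold are assumptions for illustration:

```python
# Each beam may report several echoes as (distance_m, intensity) pairs.
# Dust tends to produce a weak, close echo; the wall behind it a stronger, later one.
def filter_last_strong_echo(echoes, min_intensity=0.2):
    """Return the farthest echo whose intensity clears a minimal threshold."""
    valid = [e for e in echoes if e[1] >= min_intensity]
    return max(valid, key=lambda e: e[0]) if valid else None

beam = [(1.2, 0.05), (8.4, 0.9)]   # faint dust echo at 1.2 m, wall at 8.4 m
print(filter_last_strong_echo(beam))  # (8.4, 0.9)
```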

What is the typical range required for warehouse AGVs?

For indoor warehousing, a range of 20 to 30 meters is usually sufficient for localization (SLAM). However, for high-speed forklifts, the safety field detection range depends on the stopping distance. A robot moving at 2 m/s needs a long enough look-ahead distance to decelerate safely, often requiring sensors with at least 10-15 meters of reliable safety-rated range.
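The underlying arithmetic is simple kinematics. With an assumed 1 m/s² deceleration and 0.3 s reaction time (illustrative values only, not figures from any standard):

```python
def stopping_distance(speed_mps: float, decel_mps2: float, reaction_s: float) -> float:
    """Reaction travel plus braking distance (pure kinematics, no safety margins)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

# A 2 m/s forklift needs roughly 0.6 m + 2.0 m = 2.6 m just to stop; warning
# fields, payload effects, and certification margins drive the required
# safety-rated sensing range well beyond that.
print(round(stopping_distance(2.0, 1.0, 0.3), 2))  # 2.6
```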

Why is angular resolution important?

Angular resolution determines the density of points at a distance. If an AGV needs to detect a thin table leg at 10 meters, a coarse resolution (e.g., 1.0°) might miss the object entirely between beams. A finer resolution (e.g., 0.1° or 0.25°) ensures small obstacles are detected reliably at range.
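A quick check of those numbers using the small-angle approximation (beam gap ≈ range × resolution in radians):

```python
import math

def beam_spacing(range_m: float, resolution_deg: float) -> float:
    """Approximate lateral distance between adjacent beams at a given range."""
    return range_m * math.radians(resolution_deg)

# At 10 m, a 1.0-degree resolution leaves ~17.5 cm between beams (a thin table
# leg can slip through undetected), while 0.25 degrees leaves ~4.4 cm.
print(round(beam_spacing(10.0, 1.0), 3))   # 0.175
print(round(beam_spacing(10.0, 0.25), 3))  # 0.044
```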

How do multiple robots prevent Lidar crosstalk/interference?

When multiple robots operate in the same area, one robot's laser can blind another. Manufacturers mitigate this using code-division technology (encoding pulses), varying frequencies slightly, or using software filters that reject pulses that don't match the expected timing of the emitter's own sequence.

Does 3D Lidar require more processing power than 2D?

Yes, significantly more. A 2D scan might contain a few hundred points per rotation, while a 3D scan can generate hundreds of thousands of points per second. Processing this data (segmentation, ground removal, object classification) requires powerful onboard computers (GPUs or FPGAs), which impacts the robot's battery life and bill-of-materials (BOM) cost.
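To give a feel for that workload, one of the first steps in a typical 3D pipeline is voxel downsampling to tame the point count; a minimal sketch, with an assumed 10 cm voxel size and a random placeholder cloud:

```python
import numpy as np

def voxel_downsample(points_xyz: np.ndarray, voxel_m: float = 0.1) -> np.ndarray:
    """Keep one representative point per occupied voxel (10 cm cubes here)."""
    keys = np.floor(points_xyz / voxel_m).astype(np.int64)
    # np.unique on the voxel indices picks the first point seen in each cube.
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points_xyz[np.sort(first_idx)]

cloud = np.random.rand(200_000, 3) * 20.0   # placeholder 3D returns, meters
print(len(voxel_downsample(cloud)))         # far fewer points left to process
```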

What is extrinsic vs. intrinsic calibration?

Intrinsic calibration refers to the internal accuracy of the sensor itself (done by the manufacturer). Extrinsic calibration is the process of defining exactly where the Lidar is mounted relative to the robot's center (base_link). If the extrinsic calibration is off by even a degree or centimeter, the map generated will be distorted and navigation will fail.
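A back-of-the-envelope illustration of why small extrinsic errors matter: a one-degree yaw offset shifts a point observed 10 m away by roughly 17 cm, enough to smear walls in the map (illustrative numbers only):

```python
import math

range_m = 10.0          # distance at which the point is observed
yaw_error_deg = 1.0     # assumed extrinsic mounting error
lateral_shift = range_m * math.sin(math.radians(yaw_error_deg))
print(round(lateral_shift, 3))  # ~0.175 m of lateral error at 10 m
```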

Can cameras replace Lidar completely?

Visual SLAM (vSLAM) using stereo cameras is a competitor, but Lidar still holds advantages in lighting independence (working in total darkness) and direct, high-precision distance measurement without the heavy computational load of calculating depth from stereo disparity. Most advanced robots use a fusion of both.

What is the cost difference between 2D and 3D implementations?

2D Lidars are a mature commodity; basic navigation units can cost under $100, with safety-rated units in the $1,000 range. 3D Lidars, while dropping in price, typically range from $2,000 to $10,000+ depending on channel count and range, making them a premium choice reserved for robots that absolutely require 3D perception.
