Simultaneous Localization & Mapping
Like humans, robots and machines need maps to discover and navigate their surroundings. This becomes even more important underground, indoors, and anywhere else a GPS (Global Positioning System) signal is unavailable.
The power of SLAM
Simultaneous Localization and Mapping (SLAM) is a process whereby a robot or machine – such as an unmanned aerial or ground vehicle (UAV/UGV) – builds a map of its environment while simultaneously computing its own location within that map. SLAM is highly effective for mapping unknown environments and gives engineers valuable data for path planning and obstacle avoidance beyond visual line of sight (BVLOS).
SLAM relies on complex algorithms to accurately construct or refine a map of an unknown environment, all while tracking the vehicle’s location in real time. Commonly, scanning sensors collect data about the environment several times per second, and each fresh scan is compared with and aligned to the previously captured data. This alignment of features builds up a highly accurate point cloud for data analysts.
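To make that loop concrete, here is a minimal, illustrative sketch in Python (using only NumPy; all names and values are assumptions for demonstration, not any product’s API). It shows one cycle of scan matching: a fresh scan is aligned to the map built so far with a brute-force ICP (Iterative Closest Point) step, the alignment yields an estimate of the vehicle’s motion, and the aligned points are merged into the growing point cloud.

```python
# Minimal 2D scan-matching sketch of the SLAM cycle described above.
# Hypothetical, illustrative code only: real systems use optimized
# ICP variants, k-d trees, loop closure and pose-graph optimization.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp_align(scan, map_pts, iters=20):
    """Iteratively align a fresh scan to the map built so far (brute-force ICP)."""
    R_total, t_total = np.eye(2), np.zeros(2)
    pts = scan.copy()
    for _ in range(iters):
        # Nearest map point for every scan point (O(n*m); fine for a sketch).
        d2 = ((pts[:, None, :] - map_pts[None, :, :]) ** 2).sum(axis=2)
        matches = map_pts[d2.argmin(axis=1)]
        R, t = best_rigid_transform(pts, matches)
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return pts, R_total, t_total

# One SLAM cycle: align the new scan, recover the motion, grow the map.
map_pts = np.random.rand(200, 2) * 10               # stand-in for the map so far
angle = 0.05                                        # small unknown motion between scans
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
scan = map_pts[:120] @ R_true.T + np.array([0.2, -0.1])   # simulated fresh scan
aligned, R_est, t_est = icp_align(scan, map_pts)
map_pts = np.vstack([map_pts, aligned])             # merge aligned points into the map
print("estimated rotation (rad):", np.arctan2(R_est[1, 0], R_est[0, 0]))  # ~ -0.05
```

The align-estimate-merge cycle shown here is the essence of a SLAM front end; production systems simply do each step faster and more robustly.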
Origins of SLAM
SLAM technology originated in the 1980s, alongside the rapid growth of machines and robotics. It had become apparent that robots and machines needed help getting around, navigating work environments and avoiding collisions.
SLAM in action
Consider SLAM in a robotic lawn mower. By counting its wheel revolutions, the mower estimates how far and in which direction it has moved; this is the localization element of SLAM. Simultaneously, using its sensors, the mower builds a map of obstacles in the surrounding environment, helping it avoid mowing the same area of lawn twice; this is the mapping element.
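The localization half of that example can be sketched in a few lines. Below is a hypothetical differential-drive odometry update (the wheel radius, wheel base and function names are assumed for illustration, not taken from any real mower): wheel revolutions are converted to distance travelled, and the mower’s pose estimate is updated accordingly.

```python
# Hypothetical wheel-odometry sketch for a differential-drive mower.
# Counting wheel revolutions gives the "localization" half of SLAM.
import math

WHEEL_RADIUS = 0.05   # metres (assumed value)
WHEEL_BASE = 0.30     # metres between the two drive wheels (assumed)

def odometry_update(x, y, heading, revs_left, revs_right):
    """Update pose (x, y, heading) from wheel revolutions since the last update."""
    dist_left = 2 * math.pi * WHEEL_RADIUS * revs_left
    dist_right = 2 * math.pi * WHEEL_RADIUS * revs_right
    dist = (dist_left + dist_right) / 2              # forward travel of the centre
    dtheta = (dist_right - dist_left) / WHEEL_BASE   # change in heading
    x += dist * math.cos(heading + dtheta / 2)       # midpoint-heading approximation
    y += dist * math.sin(heading + dtheta / 2)
    return x, y, heading + dtheta

# Example: both wheels turn twice -> the mower drives straight ahead ~0.63 m.
print(odometry_update(0.0, 0.0, 0.0, revs_left=2.0, revs_right=2.0))
```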
What makes SLAM so powerful is its ability to map an environment and localize within it even where GPS is denied, such as underground or indoors. SLAM also grows alongside technology: faster internet connections, greater computer processing power and low-cost sensors are fast becoming the industry norm.
Sensors for SLAM
There are two main front-end processing methods for SLAM: LiDAR SLAM and Visual SLAM. Let’s explore each:
- Light Detection and Ranging (LiDAR) SLAM primarily uses a laser sensor for precise distance measurement. LiDAR sensors work on the Time of Flight (ToF) principle, accurately measuring the time it takes for an emitted beam of light to reflect off surfaces in the surrounding environment and return to the sensor (a short ToF sketch follows this list). Being more precise than most cameras and other sensor types, LiDAR sensors are highly effective for applications involving high-speed vehicles such as drones and autonomous cars. The beauty of LiDAR technology lies in its ability to produce high-precision point cloud measurements, essential for SLAM map construction. Matching successive point clouds reveals the path the vehicle has followed (localization) while helping analysts track and visualize the exact distance it has travelled within the environment (mapping).
- Visual SLAM (vSLAM) uses a single camera to collect data points and create a map. Although cameras are usually the cheaper option and can provide large amounts of information, more than one camera (or a more costly 3D camera) is required to perceive depth in the environment (a stereo-depth sketch also follows this list). This adds not only to the cost of the job but also to the on-board weight of the SLAM-performing vehicle. Another challenge is that because vSLAM operates in real time with a single camera, providing a full 360° view is difficult. One solution is a system called CoSLAM, which uses multiple cameras for visual SLAM but requires extra processing power. In addition, reflective surfaces and changes in lighting can confuse a vSLAM vehicle and hamper its maneuverability.
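The ToF principle mentioned in the LiDAR bullet reduces to a single formula: because the laser pulse travels to the target and back, range = (speed of light × round-trip time) / 2. A minimal sketch with illustrative values:

```python
# Time-of-Flight (ToF) distance: the laser pulse travels out and back,
# so range = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Range in metres from a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# Example: a 666.7 ns round trip corresponds to roughly 100 m.
print(f"{tof_distance(666.7e-9):.2f} m")   # ~99.94 m
```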
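Likewise, the depth limitation noted for single-camera vSLAM is why a second camera helps: for a rectified stereo pair, depth = focal length × baseline / disparity. A short sketch with assumed, illustrative values:

```python
# Hypothetical stereo-depth sketch: why a second camera resolves depth.
# For a rectified stereo pair, depth = focal_length * baseline / disparity.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres from the pixel disparity between left and right images."""
    return focal_px * baseline_m / disparity_px

# Example with assumed values: 700 px focal length, 12 cm baseline.
print(f"{stereo_depth(700, 0.12, 20):.2f} m")   # a 20 px disparity -> 4.20 m
```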
Fusion of technologies
A rising industry trend is to combine the advantages of both LiDAR SLAM and Visual SLAM. Rather than selecting one and missing out on the other, companies are starting to integrate LiDAR sensors with 360° panoramic cameras. This fusion of technologies allows engineers to collect accurate point cloud data (from the LiDAR sensor) as well as detailed visualizations (from the camera) of the target site. In practice, LiDAR’s point cloud data can be brought to life using the visual capability of the camera.
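As a hypothetical illustration of that last point, the sketch below colorizes LiDAR points from a 360° panoramic (equirectangular) image: each 3D point is converted to an azimuth/elevation direction, mapped to a pixel, and given that pixel’s color. The shared-origin camera model and all names are assumptions for this sketch; a real pipeline would apply calibrated extrinsics between the LiDAR and the camera.

```python
# Hypothetical fusion sketch: colorize LiDAR points from a 360° panoramic
# (equirectangular) image. Assumes the camera and LiDAR share an origin and
# orientation; a real pipeline would apply a calibrated extrinsic transform.
import numpy as np

def colorize_points(points_xyz, pano_rgb):
    """Assign each 3D point the color of the panorama pixel it projects to."""
    h, w, _ = pano_rgb.shape
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    azimuth = np.arctan2(y, x)                    # -pi .. pi around the sensor
    elevation = np.arctan2(z, np.hypot(x, y))     # -pi/2 .. pi/2
    u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (h - 1)).astype(int)
    return pano_rgb[v, u]                         # (N, 3) RGB per point

# Example with random stand-in data: 1000 points, a 512x1024 panorama.
pts = np.random.randn(1000, 3)
pano = np.random.randint(0, 256, size=(512, 1024, 3), dtype=np.uint8)
colored = np.hstack([pts, colorize_points(pts, pano)])  # x, y, z, r, g, b
print(colored.shape)   # (1000, 6)
```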
SLAM Matters: Applications & Benefits
- Warehouse automation: Today’s industrial robots are built to optimize work systems and processes on the factory floor. SLAM enables these robots to map their immediate environment, react in real time to unexpected situations, navigate safely and avoid moving or stationary obstacles. Direct benefits of SLAM for warehouse automation include reduced inventory-counting time, increased production and cost-cutting.
- Mining: Innovative mining operators are leapfrogging the competition by using SLAM technology to scan hard-to-reach terrain and produce high-quality visual data of the environment. Common applications of SLAM in mining are measuring volumes, generating valuable insight into rock and mineral formations, and discovering and mapping new tunnels or drifts. Direct benefits of SLAM for mining include improved worker safety, maximized ore and mineral extraction, and cost-cutting.
- Autonomous vehicles: The increased emphasis on autonomous vehicles has put the spotlight on how sensor technology can be part of the solution. Since GPS depends on satellite visibility and location (signals degrade among tall buildings and in urban canyons, and work best under open sky), it cannot be the only answer for the autonomous vehicle trend. SLAM technology can complement GPS to accurately map an environment and provide reliable obstacle avoidance in real time.
Essential Read: LiDAR – Making Strides in the Laser Revolution
LightWare engineers solve SLAM challenges
LightWare’s expert team of engineers has over forty years’ experience in the design and manufacture of LiDAR and laser rangefinders. Our management and staff are dedicated to bringing quality products to market at an affordable price.
LightWare produces microLiDAR™ sensors that use beams of laser light to measure distance, height, volume, and relative positioning. This provides essential sense and perception capability for applications in the world of Autonomy of Things (AoT) and the Internet of Things (IoT).
Well-known LiDAR applications include robotics, small unmanned aircraft (UAVs) and ground craft (UGVs), self-driving vehicles, SLAM, distance measurement, level measurement and obstacle detection.
Partner with us
Contact us on info@lightwarelidar.com and we’ll partner to achieve your LiDAR objectives.
microLiDAR™ end-of-range sale alert:
Get your SF22/C microLiDAR™ sensor today for only $149! Only 10 remain and stock is clearing fast.