An article published in the November 2015 edition of Artificial Intelligence Review defines visual simultaneous localization and mapping, more commonly referred to as visual SLAM (VSLAM), as a means of establishing the position of an autonomous mobile agent (an object, system, robot, or vehicle) using images of the environment. Simply put, VSLAM is one of the ways many robots perceive, or "see," their operating environment today, and it is proving to be a cost-effective and capable alternative to other positioning technologies, such as those relying on LiDAR. Visual SLAM systems are increasingly used in automated robotic vehicles and may incorporate additional methods to determine position and to navigate both known and unfamiliar environments successfully.

What Is Visual SLAM?

Visual simultaneous localization and mapping systems are designed to serve as either the sole or a supplementary means of robotic navigation, allowing a robot's operating software to perceive its surroundings and determine its position. When combined with GPS, they also help robots navigate areas where GPS signals may be unavailable.

In general, visual SLAM does not refer to a specific algorithm or software platform. Rather, it refers to the process of using visual data to map the environment and to determine the orientation and position of a sensor relative to that map. This information is then sent to the control system, sometimes via a middleware layer called Robot Operating System (ROS), discussed below, where the decisions to move or change position are made.
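To make the process concrete, here is a minimal sketch of the pose-estimation step of monocular visual odometry using OpenCV. The camera intrinsics and image filenames are illustrative assumptions, and a full VSLAM system would add mapping, loop closure, and scale recovery on top of this step:

```python
# Minimal monocular visual-odometry sketch using OpenCV (pip install opencv-python).
# Illustrates the "localization" half of visual SLAM only; intrinsics and filenames
# below are assumptions for the example, not values from any specific system.
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # assumed camera intrinsic matrix
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(2000)                               # feature detector/descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(prev_img, curr_img):
    """Estimate camera rotation R and unit-scale translation t between two frames."""
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches; the essential matrix E encodes camera motion
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t

# Usage: estimate motion between two consecutive grayscale frames (hypothetical files).
prev = cv2.imread("frame000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame001.png", cv2.IMREAD_GRAYSCALE)
R, t = relative_pose(prev, curr)
print("Rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```

Chaining these frame-to-frame estimates over a video sequence yields the sensor's trajectory relative to the map it is building.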

While visual SLAM offers significant navigational advantages, it also presents real challenges for companies that need to navigate robots through uncharted remote or underground areas. Maintaining navigation capability even in these areas is critical to the quality of any VSLAM implementation, because autonomous systems depend on simultaneous localization and mapping to function properly in GPS-denied landscapes.

Visual SLAM vs. LiDAR SLAM

Light detection and ranging, better known as LiDAR, is a form of laser scanning that has been used in traffic control applications and to provide positioning information for robot navigation systems. One of the first uses of LiDAR was remote sensing: creating digital terrain elevation data from surveying aircraft and spacecraft. LiDAR emits laser pulses and measures the reflected light to map distances, similar in principle to how traditional RADAR systems work. Some SLAM solutions, such as Google's Cartographer, use LiDAR.

VSLAM, by contrast, works passively: it depends on available light and relies on the ability to re-recognize places from images, much like human situated cognition. For now, VSLAM offers a more cost-effective solution than LiDAR. Relatively cheap sensors, such as inertial measurement units and wheel odometers, are used in conjunction with inexpensive cameras, enabling simultaneous localization and mapping systems to generate accurate state estimates that guide navigation systems without the large expense of LiDAR sensors. These camera modules are typically simple RGB (red, green, blue) cameras of the kind found in camera phones. Cost and ease of use have made VSLAM systems far more popular than comparable LiDAR systems, especially in recent years.
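As a rough illustration of how a cheap odometry source and a camera-derived estimate can be blended into a single state estimate, here is a minimal complementary-filter sketch. The blend weight and measurement values are illustrative assumptions, not a description of any particular vendor's fusion algorithm:

```python
# Minimal 1-D complementary-filter sketch: wheel odometry gives a smooth heading
# estimate that drifts over time, while the visual estimate is noisier but does
# not drift. The alpha weight and sample values below are illustrative only.
def fuse_heading(prev_heading, wheel_delta, visual_heading, alpha=0.98):
    predicted = prev_heading + wheel_delta            # dead-reckoned prediction
    return alpha * predicted + (1.0 - alpha) * visual_heading  # visual correction

heading = 0.0
for wheel_delta, visual in [(0.05, 0.04), (0.05, 0.10), (0.05, 0.16)]:
    heading = fuse_heading(heading, wheel_delta, visual)
    print(f"fused heading: {heading:.3f} rad")
```

Production systems typically use a Kalman filter rather than fixed weights, but the idea is the same: cheap sensors cover each other's weaknesses.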

In larger commercial, industrial, and safety-critical robotics applications, LiDAR remains a viable solution, as the price of the robotic vehicle can absorb the higher component cost. With recent innovations such as solid-state LiDAR, combined with a rapidly dropping cost curve, the two technologies may become more complementary than competitive. LiDAR-based SLAM also has its own challenges, such as greater susceptibility to "scene aliasing," where the robot mistakes one environment for another. As LiDAR becomes cheaper, we see it more as a complementary sensor that makes SLAM more robust, and as an additional safety sensor in the domain of collision avoidance.

How Companies Are Implementing Robotic Navigation

Large consumer-focused companies like Tesla, Husqvarna, and iRobot have been instrumental in creating tools and systems for self-driving cars, home cleaning, and landscaping. Meanwhile, industries such as large-scale landscaping, precision agriculture, and warehouse automation are eagerly awaiting this technology so their mobile machines can navigate autonomously.

  • In his 2017 dissertation, Stefan Ericson showed how the SLAM approach could improve the robustness and reliability of autonomous field robots.
  • ASI Mining has applied SLAM to their autonomous mining robots in Ukraine.
  • Tesla is currently producing vehicles with self-driving capabilities intended to make roadways safer now and in the future. The company is developing a computer vision solution similar to VSLAM that allows vehicles to navigate unfamiliar environments in a practical and scalable manner.
  • Husqvarna manufactures lawn mowing systems that offer added convenience and perimeter control for managing landscaping requirements in commercial and large-scale residential environments.
  • The Inertial Sense LUNA Platform makes it easy for companies to incorporate autonomous navigation into their existing systems or into products currently in development.

Inertial Sense can help you create a workable system that addresses all elements required to make your autonomous navigation project a success.

What Is Robot Operating System?

Robot Operating System, commonly known as ROS, is not a true operating system in the traditional sense. Instead, it is a collection of open-source tools and collaborative programming libraries that can be integrated into autonomous navigation systems to help implement these features and functionalities. Inertial Sense is working with the tools included in ROS to create true plug-and-play functionality and interoperability that will revolutionize the field of autonomous navigation and robotics in real-world situations.
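For a sense of what ROS integration looks like in practice, here is a minimal sketch of a ROS 1 node in Python that publishes a pose estimate for a navigation stack to consume. The topic name and pose values are illustrative assumptions, not part of any specific Inertial Sense product:

```python
#!/usr/bin/env python
# Minimal ROS 1 node sketch: publishes a placeholder pose estimate on a topic.
# Topic name ("/vslam/pose") and pose values are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node("vslam_pose_publisher")           # register with the ROS master
    pub = rospy.Publisher("/vslam/pose", PoseStamped, queue_size=10)
    rate = rospy.Rate(10)                             # publish at 10 Hz
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"                   # pose expressed in the map frame
        msg.pose.position.x = 1.0                     # placeholder position estimate
        msg.pose.orientation.w = 1.0                  # identity orientation
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```

Because ROS decouples publishers from subscribers, a localization node like this one can be swapped or upgraded without changing the control code that consumes its poses, which is what enables the plug-and-play interoperability described above.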

Putting It All Together

Inertial Sense offers an integrated solution for your robotic navigation requirements that incorporates ROS, visual SLAM, and practical techniques for navigating areas without depending on GPS information. We create inertial navigation systems that provide the functionality you need to achieve your goals and deliver practical results for managing tasks that take place in dirty or dangerous environments.

To learn more about how Inertial Sense can help you create the best solutions for your devices and robotics installations, give us a call today at 801-855-6632. We combine precision sensors, robotics, and the right software to build the ideal inertial navigation system for your company's current and future requirements. Our team is ready to help you develop the right solution for your robotics projects and autonomous navigation needs.