Programming robots is not a new science. But having the right programming criteria can make a difference in your autonomous mobile robot’s performance. There are three major components you should be familiar with for autonomous robot navigation. Watch the video below as Tom unpacks the three general components of how autonomous robots navigate.

3 Major Components for How Autonomous Robots Navigate:

  1. The core of the navigation system starts with “where am I?” Having the ability to pinpoint where your robot is in space is the first general step.
  2. The next step is mapping the environment. This is where things like cameras, visual SLAM (vSLAM), and lidar come into play. Having a visual environment or map for the system to operate from is key.
  3. The third part is determining the path within that environment. This can include obstacles to avoid, pivotal turning points, and what is required to get back home. Together, these elements allow the robot to complete its mission. Remember, without a map to guide the robot, it has no direction.
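The three components above can be sketched in code. This is a minimal, hypothetical illustration, not an actual robot SDK: localization is assumed to have already produced a pose, the map is a toy occupancy grid standing in for vSLAM/lidar output, and path planning is a simple breadth-first search around obstacles.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid: list of strings, '.' = free cell, '#' = obstacle
    start, goal: (row, col) tuples
    Returns the shortest list of cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk backwards through came_from to recover the route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route around the obstacles

# 1. "Where am I?" -- assume localization has placed the robot at (0, 0).
pose = (0, 0)

# 2. "What does my environment look like?" -- a map as vSLAM/lidar might build it.
occupancy_map = [
    "..#..",
    "..#..",
    ".....",
    "..#..",
]

# 3. "What is my path?" -- plan a route around the obstacle wall back home.
home = (0, 4)
route = plan_path(occupancy_map, pose, home)
```

Real systems replace each piece with far more sophisticated machinery (probabilistic localization, continuously updated maps, smooth trajectory planners), but the division of labor is the same.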

We have videos on how this works in conjunction with our luna platform, both as a conceptual animation and as a visual deployment. Be sure to click on the URL below to find out more. We also have a real robotic device on our website that you can view for an up-close look.


Learn More:

What Does The Autonomous Robotic Landscape Look Like?

Should I Get an Inertial Sensor Development Kit or a Module?

Precise Inertial Navigation Through Sensor Fusion


Video Transcription:

Steps to Navigating The Autonomous Robot Brain

Ok, well there are really three components to how an autonomous robot navigates. At the core of it is, where am I? You can’t figure out where to go if you don’t know where you are. So having the ability to get pinpoint localization on where you are in space is the first step.

The next step is, what does my environment look like? That’s where things like cameras, visual SLAM, and lidar come into play, in terms of building that map in the robot’s mind of what its environment looks like.

The third part is then determining what is my path within that environment: what obstacles do I need to avoid, where do I need to turn, what do I need to do to get back home? All of those things roll into a complete mission for the robot.

Just To Recap

So just to summarize, it’s where am I, what does my environment look like, where do I need to go, and then how do I complete the mission and get back home.

And we’ve got an entire video on how that works with our luna platform, both as a conceptual animation and as a visual deployment of a real robotic device, on our website. So click on the URL if you want to find out more about that.