What is visual odometry, and how does it work? Funnily enough, it uses more than just vision! Watch the video below as Chase explains how visual odometry works and how it relates to visual SLAM.

Visual odometry uses a camera feed to determine how your autonomous vehicle or device moves through space. From frame to frame, it tracks key points in the image, which lets it tell whether your device or vehicle moved forward or backward, or left or right. It adds this information to any other type of odometry being used, such as wheel odometry, which uses how quickly your wheels are turning, and at what relative rates, to tell whether you are moving forward or turning.
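The frame-to-frame idea above can be sketched in a few lines. This is a deliberately simplified illustration, not a production pipeline: it estimates only a 2D image-plane shift from already-matched key points (the point coordinates below are made up), whereas a real visual odometry system recovers full 3D rotation and translation.

```python
import numpy as np

def estimate_camera_shift(pts_prev, pts_curr):
    """Toy visual-odometry step: estimate the 2D image-plane shift
    between two frames from matched key points.
    Taking the median displacement keeps a few bad matches
    (outliers) from throwing off the estimate."""
    return np.median(pts_curr - pts_prev, axis=0)

# Hypothetical matched key points: the camera panned right,
# so every feature appears to move left in the image.
prev_pts = np.array([[100.0, 50.0], [200.0, 80.0], [150.0, 120.0]])
curr_pts = prev_pts + np.array([-4.0, 0.0])  # all points shifted 4 px left

dx, dy = estimate_camera_shift(prev_pts, curr_pts)
print(dx, dy)  # -4.0 0.0 -> the scene moved left, so the camera moved right
```

In practice, detecting and matching those key points is its own problem (solved with feature detectors and descriptor matching), and the per-frame motion estimates are fused with other odometry sources, as described above.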

More and more off-the-shelf products are appearing on the market, but it is still important to have someone developing vSLAM, because the field remains vastly under-researched and the last thing you want is a problem integrating it with your system.

If you have someone on your team who can work with visual SLAM or if you have a product that can work with it, it can save countless hours of research, development, and prototyping.

If you want to learn more about how visual odometry works or anything else, click here and we’ll get in touch with you!


Learn More:

Can Visual SLAM Be Used Without GPS?

Flavors of SLAM: vSLAM vs. LIDAR

What is vSLAM Used For?


Video Transcript

What is visual odometry and how does it work?

Basically, visual odometry uses a camera feed to figure out how you’re moving through space. From camera frame to camera frame, visual odometry looks at key points in the frame and is able to tell if you move forward or backward, or left and right. It adds that information to whatever other type of odometry that you have, such as wheel odometry, which uses how quickly your wheels are turning and at what rates to tell if you’re moving forward or turning.
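The wheel odometry mentioned here can be sketched concretely. Below is a minimal differential-drive update, a common textbook model (not something specific to the video): it integrates the two wheel speeds into a new pose. The function name and parameters are illustrative.

```python
import math

def diff_drive_update(x, y, theta, v_left, v_right, wheel_base, dt):
    """Differential-drive wheel odometry: integrate wheel speeds (m/s)
    over a timestep dt into a new pose (x, y, heading theta).
    wheel_base is the distance between the two wheels (m)."""
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_base  # turn rate (rad/s)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Both wheels at 1 m/s for 1 s -> the robot drives 1 m straight ahead.
x, y, th = diff_drive_update(0.0, 0.0, 0.0, 1.0, 1.0, 0.5, 1.0)
print(x, y, th)  # 1.0 0.0 0.0
```

Because wheels can slip, these estimates drift over time, which is exactly why fusing them with visual odometry helps.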

Visual odometry is similar: just as how your wheels spin can tell you how you're moving, visual odometry can tell you how you're moving just by what you see.

Do you need someone to be developing vSLAM for you in-house, or are there other off-the-shelf options? The answer is, there are some products and some open source packages that you can use. A few that I know of: Google's Cartographer is a really good one that integrates lidar for SLAM. There's also one called Hector SLAM, and then for vSLAM, one that I'm familiar with is the VINS-Mono algorithm. I'm not really sure what the licensing is on those, or really how capable they are.

Really, as time goes on, more and more of those off-the-shelf products are appearing. But it’s important to have someone who’s developing vSLAM because it’s still something that’s largely under-researched and sometimes can be complicated to integrate with your system.

If you have someone on your team who can work with visual SLAM or if you have a product that can do it, it can save countless hours of research and development and prototyping, because it’s pretty complicated.

If you want to learn more about visual SLAM or anything else we’ve talked about, please click the link below and we’ll get in touch with you.