ROS, Propeller and Kinect!

I have been very busy over the past few months building up a robot to make use of the Robot Operating System (ROS).

Willow Garage has its own ready-to-go robot called the TurtleBot, which is a very good system and has amazing abilities right out of the box.

However, I’ve been planning my own robot version for a while, and for me ROS is more of an add-on, albeit a huge one, than a starting point. While the TurtleBot has a lot of features that I may never be able to come up with myself, it is missing a couple of key ones:

  1. It has very limited ground clearance.
  2. The payload is quite limited.

What I’ve been looking at is the Arlo Robot from Parallax:

This is by no means a “pull it out of the box and go” platform, but it is well within my skills. No designing circuits, no laser cutting, no hacking together of bits of metal and plywood. It can be built by anyone who can put together a radio controlled car kit, and their site is basically designed around teaching kids how to program their controller board, so the learning curve is well assisted.

Fortunately the Arlo Robot is also similar in shape and layout to the TurtleBot. Once I had written code for the Propeller-based Activity Board to talk to ROS, I could actually run a lot of the ROS TurtleBot code with very little modification!

Here is a list of my progress so far:

  1. I have written code to communicate between the Propeller-based Activity Board and ROS, sending odometry information and accepting Twist messages from ROS.
  2. I have adapted the 3D view, robot model and teleop code from the ROS TurtleBot to run on my robot.
  3. I have built a basic URDF model of the Arlo robot.
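To give a feel for what the Propeller-to-ROS bridge has to compute, here is a rough sketch of the two halves: turning encoder ticks into a pose for odometry, and turning a ROS Twist command into left/right wheel speeds. The tick counts, wheel diameter, and track width below are illustrative numbers, not my actual calibration values, and a real node would wrap these functions with rospy publishers and subscribers.

```python
import math

# Illustrative differential-drive geometry -- NOT the real
# ActivityBot/ArloBot calibration values.
TICKS_PER_REV = 144.0
WHEEL_DIAMETER = 0.066   # meters
TRACK_WIDTH = 0.105      # meters, distance between the two wheels

METERS_PER_TICK = math.pi * WHEEL_DIAMETER / TICKS_PER_REV

def update_odometry(x, y, theta, left_ticks, right_ticks):
    """Advance the (x, y, theta) pose from incremental encoder ticks."""
    d_left = left_ticks * METERS_PER_TICK
    d_right = right_ticks * METERS_PER_TICK
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TRACK_WIDTH
    # Integrate along the arc, using the midpoint heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

def twist_to_wheel_speeds(linear_x, angular_z):
    """Convert a ROS Twist (m/s forward, rad/s yaw) into wheel speeds."""
    left = linear_x - angular_z * TRACK_WIDTH / 2.0
    right = linear_x + angular_z * TRACK_WIDTH / 2.0
    return left, right
```

The pose from `update_odometry` is what gets packaged into a nav_msgs/Odometry message, and `twist_to_wheel_speeds` is the inverse direction: the subscriber callback for `/cmd_vel` feeding the motor controller.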

I am doing all of this on a little ActivityBot, because it uses the exact same controller as the ArloBot. Right now I am just waiting on parts from Parallax to get my ArloBot together so that I can start running ROS with it.

I have not made SLAM or “gmapping” work well yet because the odometry from the ActivityBot is very poor: its little rubber-clad wheels slip easily and unevenly on every floor surface. I expect the ArloBot to have much less wheel slip. I will also be experimenting with adding a gyro to the Propeller Activity Board to help verify rotation angles.

Since a photo is worth a thousand words, here is a picture of where I am at the moment:

Robot Progress July 17, 2014

Here you can see my rudimentary ArloBot URDF with a depth registered 3D image from the ASUS Xtion Live and the simulated red laser scan line. Also notice the small line of white dots in front of the robot. That is a simulated LaserScan built from the PING sensor on the front of the ActivityBot. I plan to place PING (Ultrasonic) and InfraRed sensors around the ArloBot and use simulated LaserScans from them to help the robot navigate around low or close obstacles that the 3D Camera misses.
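The “small line of white dots” trick is simple in principle: take the single distance the PING sensor reports and spread it across a narrow arc of scan points, shaped like the `ranges` array of a sensor_msgs/LaserScan. This sketch shows that conversion with nominal sonar numbers (a ~40° cone, 3 m max range); they are placeholders, not measurements from my sensor, and a real node would stamp the result and publish it on a scan topic.

```python
import math

# Nominal figures for one forward-facing PING))) sonar -- placeholders,
# not measured values from this robot.
ANGLE_MIN = -0.35    # radians, left edge of the sonar cone
ANGLE_MAX = 0.35     # radians, right edge of the cone
NUM_READINGS = 15    # how many fake scan points fill the cone
RANGE_MAX = 3.0      # meters, nominal PING))) maximum range

def ping_to_laserscan(ping_distance_m):
    """Spread one ultrasonic range across a small arc of scan points.

    Returns a list shaped like sensor_msgs/LaserScan.ranges. Since the
    sonar cannot tell *where* in its cone the echo came from, every
    point in the arc gets the same range.
    """
    if ping_distance_m is None or ping_distance_m > RANGE_MAX:
        rng = float('inf')   # no echo: report nothing seen in the cone
    else:
        rng = ping_distance_m
    return [rng] * NUM_READINGS
```

The ROS navigation stack then treats those points like any other laser obstacle, which is exactly what lets a $3 sonar warn about the low or close obstacles the 3D camera misses.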

I have a lot of documentation on my process, and my code is all online, so if anyone is interested in more about this please let me know and I will post more background as well as ongoing work.

Published by


He's just this guy, you know? Chris aka ChrisL8 aka the Hoopy Frood aka the Ekpyrotic Frood is a full time Unix Admin who likes to play with everything technology on the side. I mostly build robots here, but I also dabble in web development.

4 thoughts on “ROS, Propeller and Kinect!”

  1. Hey, I’m really interested in your documentation and the process, as I was hoping to use the Arlo robot kit with a Kinect eventually, and I would love to see how you got the Propeller Activity Board working with ROS.

    1. Hi Ryan! Thank you for the comment. I have added some more posts and links to my Github repository, so please check those out. I also encourage you to check out the Forum, where there are a lot of great people posting about their projects and helping each other out.

    1. IFFI,

      What I did was create an interface to allow the use of Robot Operating System (ROS) with the ActivityBot.
      All of my work is here:

      However it requires attaching a computer to the ActivityBot, so it is not very mobile. It was just for testing before I built an ArloBot.

      Here is what I found about SLAM in my research:

      1. There are a lot more papers about the theory than actual implementations.
      2. The implementations that do exist:
         a. are very CPU intensive, and
         b. require high-resolution, high-frequency sensor data.

      The reason I got interested in ROS was because they have packaged up SLAM implementations that run on a PC.

      However, these ROS implementations of SLAM are far too CPU intensive for a Propeller board, or even a Raspberry Pi. They struggle on a low end PC.
      Also, they need either an expensive sensor ($1,500 and up) or some “fake laser scan” data from a Kinect or ASUS Xtion, both of which are obtainable, but a little bulky to mount on the ActivityBot.

      So the short answer is that all of the currently available implementations of SLAM that I have found will not work with a Propeller chip. They need a PC with a modern CPU.

      The long answer is that the field is wide open and people are writing new things every day. However, my understanding of SLAM is that it is a sort of statistical analysis algorithm. Statistical analysis with large data sets is notoriously heavy on CPU and RAM.
      Anything is possible but it depends on your personal skills and what you like to do.
      Personally I enjoy using various components together such that each does what it is best at, or at least what I am best at using it for. 🙂
