Friday, March 9, 2007

Monte Carlo Localization!

In robotics, localization is the task of determining one's location within a known environment, given available sensory data. In theory, if a robot knows its starting position and how many times its wheels have rotated, it should be able to calculate its resulting position on a map. Unfortunately, real-world factors like wheel slippage introduce a significant amount of uncertainty into this odometric data, so a robot needs to gather and reason about other sensory information in order to keep better track of its own position.

With HamsterBot, we implemented an algorithm called Monte Carlo Localization, or MCL for short, which lets the robot figure out where it is on a map.

You can read the original paper on MCL at this page, or a short description of it at Wikipedia. The general idea is as follows:

1. First, guess where the robot is, representing each guess as a "particle."
2. When the robot moves, move each particle through the map in roughly the same way the odometric data indicates, but with small random variations for each particle.
3. Compare the current sensor data from the robot to simulated sensor data for each particle, and assign a probability to the particle based on how close the two readings are. (See the post on Motion and Sensing Models for more information on how positions and probabilities were assigned.)
4. Resample the particles based on their assigned probabilities, so that high-probability particles tend to be multiplied and low-probability particles tend to die off.
5. Repeat from step 2.

In this way, we can get what we hope is a good idea of where the robot is within our map.
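
To make the loop concrete, here is a minimal sketch of one MCL update in Python. The motion_fn and likelihood_fn arguments are placeholders for the motion and sensing models described in the Motion and Sensing Models post; everything else is our own illustrative naming, not the exact code we run.

    import random

    def mcl_step(particles, odometry, sensor_reading, motion_fn, likelihood_fn):
        """One iteration of Monte Carlo Localization.

        particles      -- list of pose guesses, e.g. (x, y, theta) tuples
        odometry       -- the motion reported by the robot since last step
        sensor_reading -- current sensor data from the real robot
        motion_fn      -- moves one particle by the odometry, plus noise
        likelihood_fn  -- scores one particle against the sensor reading
        """
        # 1. Motion update: move every particle roughly as the odometry
        #    says, with a small random perturbation per particle.
        moved = [motion_fn(p, odometry) for p in particles]

        # 2. Sensor update: weight each particle by how well its simulated
        #    sensor reading matches the robot's actual reading.
        weights = [likelihood_fn(p, sensor_reading) for p in moved]

        # 3. Resample in proportion to the weights, so likely particles
        #    multiply and unlikely ones die off. (random.choices needs
        #    Python 3.6+, and raises if every weight is zero -- that is
        #    the "lost robot" case handled by the recovery trick below.)
        return random.choices(moved, weights=weights, k=len(moved))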

Currently, HamsterBot uses just its bump sensor. In the videos of our demo, you can see the robot bumping into walls; on the screen, all the yellow particles not touching a wall disappear, because we know they aren't accurate representations of the robot's position, given the bump we just experienced.

Our MCL algorithm in particular has some cool features to keep the robot on track. If none of our particles correspond to the sensor data from the robot, we create an entirely new set of particles and spread them across the map, hoping one of them will be close to the actual robot. In the future, we also plan to make sure these new particles match the current sensor data from the robot...if the robot is bumping something, there's no point in creating particles far away from any walls!
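
A rough sketch of that recovery step, assuming a simple rectangular map and treating uniformly tiny weights as "no particle matches" (both assumptions are ours, for illustration only):

    import math
    import random

    def reseed_if_lost(particles, weights, map_w, map_h, eps=1e-6):
        """If no particle explains the sensor data, scatter a completely
        fresh set uniformly over the map, hoping one lands near the robot."""
        if max(weights) < eps:
            return [(random.uniform(0, map_w),        # x
                     random.uniform(0, map_h),        # y
                     random.uniform(0, 2 * math.pi))  # heading
                    for _ in particles]
        return particles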

This particular algorithm is something that will likely have many upgrades in the near future, but for now, it's the brains of our Hamsterbot!

Check out Video #0 to see the real HamsterBot and a good example of MCL in action. Below is a movie of a simulated Roomba running MCL. The red dot indicates the actual robot position, the blue dot is the odometric data, and the yellow dots are our MCL particles.

MCL Motion and Sensing Models

The first step in MCL involves updating particle positions with respect to our motion model:
Given the current and previous poses reported by the odometric data, we extract the distance and angle at which the robot should have moved. We then scale the distance by 1 ± p for some percent error p, and randomize the angles within a constant tolerance, c, of their reported values. By trial and error, we found that values of 50% and 0.5 degrees, for p and c respectively, work quite well for the simulated Roomba with a simulated noise factor of 2.5%.
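
In code, the update for a single particle might look roughly like this. The tuple representation and helper names are our own, with the noise parameters matching the values quoted above:

    import math
    import random

    P = 0.50                # distance error: scale by 1 +/- 50%
    C = math.radians(0.5)   # angle tolerance: +/- 0.5 degrees

    def motion_update(pose, prev_odom, curr_odom):
        """Move one particle by the odometric delta, with per-particle noise.

        All three arguments are (x, y, theta) tuples, theta in radians.
        """
        dx = curr_odom[0] - prev_odom[0]
        dy = curr_odom[1] - prev_odom[1]
        # Distance traveled, and direction of travel relative to the
        # robot's previous odometric heading.
        dist = math.hypot(dx, dy)
        rel_heading = math.atan2(dy, dx) - prev_odom[2]
        dtheta = curr_odom[2] - prev_odom[2]

        # Scale the distance by 1 +/- p; jitter each angle within +/- c.
        dist *= 1.0 + random.uniform(-P, P)
        rel_heading += random.uniform(-C, C)
        dtheta += random.uniform(-C, C)

        x, y, theta = pose
        return (x + dist * math.cos(theta + rel_heading),
                y + dist * math.sin(theta + rel_heading),
                theta + dtheta)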

The next step involves comparing the robot's sensory data with what each particle would sense from its pose in the given map. We used the following cases for assigning probabilities (a code sketch of this weighting scheme follows the list):

1. Robot bumps left and right
---> If the particle bumps left and right, it gets our maximum probability value: 0.9
---> If the particle bumps only on one side, we find the minimum angle, a, that it would need to rotate in order to register both left and right bumps, and assign a probability of (1-a/pi)*0.9.
---> If the particle is not touching any walls, we take a range reading, d, from its pose, and assign a probability of (1/(d+1))*0.9

2. Robot bumps on either left or right, but not both
---> If the particle bumps on the same side and not both, it gets probability 0.9
---> If the particle bumps on both sides, it gets 0.75*0.9
---> If the particle bumps on the opposite side, it gets 0.5*0.9
---> If the particle is not touching any walls, we take a range reading, d, from its position, at a 45 degree angle to the left or right of its heading (as appropriate), and assign a probability of (1/(d+1))*0.9

3. Robot is not bumping
---> If the particle is not bumping, it gets 0.9
---> If it is bumping, it gets 0.9 / N, where N is the number of particles.
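
Here is roughly how those three cases translate into code. The sim helpers (bumps_left, bumps_right, range_reading, min_angle_to_double_bump) are hypothetical stand-ins for our map-simulation queries, and the sign convention for the 45-degree reading is an assumption:

    import math

    P_MAX = 0.9  # maximum probability assigned to any particle

    def particle_probability(p, robot_left, robot_right, n_particles, sim):
        """Weight one particle given the robot's bump sensor state."""
        left, right = sim.bumps_left(p), sim.bumps_right(p)

        if robot_left and robot_right:        # case 1: bumping both sides
            if left and right:
                return P_MAX
            if left or right:
                a = sim.min_angle_to_double_bump(p)
                return (1 - a / math.pi) * P_MAX
            d = sim.range_reading(p, 0.0)     # range straight ahead
            return P_MAX / (d + 1)

        if robot_left or robot_right:         # case 2: one side, not both
            if left == robot_left and right == robot_right:
                return P_MAX                  # same side only
            if left and right:
                return 0.75 * P_MAX
            if left or right:
                return 0.5 * P_MAX            # opposite side
            angle = math.radians(45) if robot_left else math.radians(-45)
            d = sim.range_reading(p, angle)   # 45 degrees toward the bump side
            return P_MAX / (d + 1)

        # case 3: robot is not bumping at all
        return P_MAX if not (left or right) else P_MAX / n_particles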

Tuesday, March 6, 2007

HamsterBot Videos

See HamsterBot 1.0 being steered by mouse and hamsterball controls, and running MCL!
Video #0
Video #1
Video #2

HamsterBot

HamsterBot 1.0 was successfully demoed today. This post documents the details of our implementation...

Physical Design
Our final robot consists of a hamsterball, a Roomba, a styrofoam ring, and a little duct tape. The ring is wrapped in teflon tape and mounted on two "feet" (empty teflon tape containers), which are then duct-taped on either side of the Roomba's cargo bay. A motion sensor is placed in the cargo bay (stabilized by some foam bedding), and the hamsterball is set inside the ring so that its bottom surface rests on the sensor. The ring and sensor are sufficiently smooth to allow the ball to rotate freely in place in any direction.

Mouse Motion Sensing
Inspired by the video of iRobot's hamsterbot, we chose a wireless optical mouse as our motion sensor. The pyRoomba software used for various class assignments served as the basis for our program. The graphics code included therein uses the Tkinter module and includes functions for determining the cursor position (as long as it is over the canvas). I extended this code to keep data members for the cursor position and to update them whenever mouse motion is detected.
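
Stripped of the pyRoomba specifics, the extension amounts to binding a handler to Tkinter's mouse-motion event. A minimal standalone version (all names here are ours, not pyRoomba's):

    import Tkinter  # the module is named "tkinter" in Python 3

    class MouseTracker:
        """Tracks the cursor position whenever it moves over the canvas."""
        def __init__(self, canvas):
            self.x = 0
            self.y = 0
            canvas.bind("<Motion>", self.on_motion)

        def on_motion(self, event):
            # Update the data members every time motion is detected.
            self.x, self.y = event.x, event.y

    root = Tkinter.Tk()
    canvas = Tkinter.Canvas(root, width=400, height=400)
    canvas.pack()
    tracker = MouseTracker(canvas)
    root.mainloop()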

Our Program
I then added a "Mouse Mode" to the pyRoomba main program, in which the cursor position is checked at regular intervals. If the position has changed, the robot's linear and angular velocities are set proportional to the vertical and horizontal displacements, respectively, between the starting and ending points of the movement. Thus, moving the cursor upward via the mouse causes forward motion, while moving it to the left causes the robot to turn toward the left, and so on.
This motion model is basically what one would expect when steering the Roomba with a mouse used the normal way (on the table, with the sensor facing down). Within Mouse Mode, we can also toggle "Hamster Mode" on and off. Since the mouse sits upside down, with the hamsterball "rolling" in place over the sensor, this switch simply flips the sign of the angular velocity: if it looks like the mouse is moving left under hamsterball control, the ball must actually be rolling toward the right, so we turn toward the right.
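
A sketch of that mapping, with made-up gain constants standing in for whatever proportionality we actually tuned:

    # Hypothetical gains mapping cursor displacement to Roomba velocities.
    LINEAR_GAIN = 2.0    # mm/s per pixel of vertical displacement
    ANGULAR_GAIN = 0.01  # rad/s per pixel of horizontal displacement

    def velocities_from_cursor(dx, dy, hamster_mode=False):
        """Map a cursor displacement (dx, dy) in pixels to velocities.

        Screen y grows downward, so upward motion (negative dy) drives
        the robot forward; positive angular velocity means a left turn.
        """
        linear = -dy * LINEAR_GAIN
        angular = -dx * ANGULAR_GAIN
        if hamster_mode:
            # The mouse is upside down under the ball, so an apparent
            # leftward cursor motion means the ball is really rolling
            # right: flip the sign of the turn.
            angular = -angular
        return linear, angular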

The ctypes Python module lets us call a system function that sets the position of the cursor to any point on the screen, which ensured that we would not "run out of screen" as described in the previous post, since we can simply place the cursor back at the center of the screen after each motion sample.
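
On Windows, the underlying call is the Win32 SetCursorPos function, reached through ctypes roughly like so (this assumes a Windows machine; other platforms would need a different call):

    import ctypes

    def recenter_cursor(screen_w, screen_h):
        """Warp the cursor back to the center of the screen (Windows only),
        so repeated mouse motions never run off the edge of the screen."""
        ctypes.windll.user32.SetCursorPos(screen_w // 2, screen_h // 2)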

Future Work
There are still a couple of issues that could be improved upon in this system. The first is that the hardware could be nicer. The styrofoam ring is not very durable and doesn't always keep the ball centered over the mouse sensor. It also allows the ball (and thus, the cursor) to wobble back and forth slightly due to changes in velocity, a kind of motion that it would be nice to ignore, if not eliminate.
Secondly, the robot's motions resulting from mouse movements are somewhat jerky. Part of this is due to the ball wobbling in the ring as described above, but a human hand can create the same effect. If the mouse or ball is moved quickly forward and back, the robot's linear velocity will change direction so abruptly that it will pop a wheelie! This looks cool to an observer, but would probably not be very fun for a hamster. Thus, our motions could definitely use some smoothing. Forcing each sampled motion to run for a minimum time, or causing changes in velocity to occur more gradually, are two possible approaches to this end.
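
For the second approach, one cheap option would be an exponential moving average on the commanded velocity. A sketch of the idea, not something we have implemented:

    class VelocitySmoother:
        """Blend each new velocity command with the previous one, so sudden
        reversals ramp in gradually instead of popping wheelies."""
        def __init__(self, alpha=0.3):
            self.alpha = alpha   # 0 < alpha <= 1; smaller means smoother
            self.value = 0.0

        def update(self, target):
            # Move only a fraction of the way toward the new command.
            self.value += self.alpha * (target - self.value)
            return self.value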