Wednesday, March 4, 2009

Maze identification

This is starting to get really overfitted, but since there are only 6 maps in the current state of the competition, we can use the information previously gathered from the vision and distance sensors to work out which map we are on and make our initially blank mapping data a lot more colourful! There is no limit on the number of starting positions, but most maps only have 4.



In case there are no known maps (later competitions), I am currently investigating whether going to the centre of the most unexplored area is more efficient than going to the nearest unexplored square.
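To make "nearest unexplored square" concrete, here is a minimal sketch in C of picking that target with a breadth-first search over the grid map. The map size, cell values and function name are placeholders for illustration, not the actual structures in my controller.

#include <string.h>

#define MAP_W 20              /* placeholder map size */
#define MAP_H 20
enum { UNKNOWN, FREE, WALL }; /* what we know about each square so far */

/* Find the unexplored square closest (in steps) to the robot at (sx, sy).
   Returns 1 and writes the target to (*tx, *ty), or 0 if none is reachable. */
int nearest_unexplored(const int map[MAP_H][MAP_W],
                       int sx, int sy, int *tx, int *ty) {
    int queue[MAP_W * MAP_H][2];
    int visited[MAP_H][MAP_W];
    int head = 0, tail = 0;
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

    memset(visited, 0, sizeof(visited));
    queue[tail][0] = sx; queue[tail][1] = sy; tail++;
    visited[sy][sx] = 1;

    while (head < tail) {
        int x = queue[head][0], y = queue[head][1]; head++;
        if (map[y][x] == UNKNOWN) {     /* first unknown square found = nearest */
            *tx = x; *ty = y;
            return 1;
        }
        for (int i = 0; i < 4; i++) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= MAP_W || ny >= MAP_H) continue;
            if (visited[ny][nx] || map[ny][nx] == WALL) continue;
            visited[ny][nx] = 1;
            queue[tail][0] = nx; queue[tail][1] = ny; tail++;
        }
    }
    return 0;                           /* everything reachable is explored */
}

The "centre of the most unexplored area" alternative would replace the early return with a pass that scores regions by how many UNKNOWN squares they contain and heads for the densest one.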

Maps:

Wall Vision

The distance-sensing-by-vision idea turned out to be awesome. We can get varying amounts of information, as outlined in the figure below. I measured distances up to 5 squares away from the wall.
Process:
1) First take a vertical scanline from the centre; this determines how far away the wall directly ahead is and how much other info we can extract (otherwise that wall will block everything else out).
2) There are two wall segments adjacent to the one directly ahead that we can always get some info about: if they appear taller than the main wall then there are side walls coming up closer, if they are the same height then they are at the same distance, and if they are shorter then there is an opening.
3) At the furthest distances there are up to 5 bits of information instead of 3; each is a little small, but you can still read some data from them and apply the same reasoning as in step 2 to determine more positions.
i.e. the blue arrows on the pic below demonstrate how steps 2/3 can show whether there is a gap, a side wall, or a wall at the same level as the one in front. A rough sketch of the scanline idea follows below.
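Rough sketch of the scanline idea in C. The darkness threshold and the column positions are guesses for illustration; the real mapping from band height to distance would come from the measurements mentioned above.

#include <stdio.h>

#define IMG_W 52   /* ePuck camera resolution from the earlier post */
#define IMG_H 39

/* Count how many pixels in column x are dark enough to be wall.
   The taller the dark band, the closer the wall in that direction. */
int wall_height(const unsigned char grey[IMG_H][IMG_W], int x) {
    int dark = 0;
    for (int y = 0; y < IMG_H; y++)
        if (grey[y][x] < 100)      /* guessed darkness threshold */
            dark++;
    return dark;
}

/* Step 1: the centre column gives the distance to the wall directly ahead.
   Step 2: columns towards the edges are compared against it - taller means a
   side wall coming up sooner, equal means the same distance, shorter means
   an opening. */
void classify_view(const unsigned char grey[IMG_H][IMG_W]) {
    int ahead = wall_height(grey, IMG_W / 2);
    int left  = wall_height(grey, 5);           /* column choices are guesses */
    int right = wall_height(grey, IMG_W - 6);
    printf("ahead=%d left=%d right=%d\n", ahead, left, right);
}

The three heights would then be looked up in the measured height-to-distance table (up to 5 squares).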


Wednesday, February 18, 2009

Dodgy distance sensors + dodgy encoders = bad bad map

The distance sensors are even worse than we had already discovered. I finally got the encoders to work correctly so that the robot sits almost in the middle of the square at every move, and I have been frustrated trying to find out why its mapping still has occasional bugs. Then I discovered that the distance sensors aren't very reliable; in fact, it is the coupling of the wheel encoders and the distance sensors together that causes the problem. The robot doesn't stop close enough to the centre to *always* get an accurate reading, and the distance sensors fluctuate too greatly. So I'm exploring slightly different approaches.
Solutions:
One method another contestant has been using is to walk diagonally, which I had thought looked rather funny until now, but I finally understand why. Their robot walks straight into the wall until it bumps, so it knows the wall is there, and it guesses the structure of the path from that. This means they know where every 2nd wall on each side is, but they don't know the gaps for sure.


So now I'm trying to get a vision-based wall sensor to work instead. I'm splitting the bottom half of the image into 4 sections and counting the number of dark pixels (non-grey, high black count) in each. For example, the following picture shows a wall 2 squares ahead and a wall 1 square ahead. A rough sketch of the counting step is below.
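A rough sketch of that counting step in C; the darkness threshold and the counts that would mean "wall 1 square ahead" versus "wall 2 squares ahead" are placeholders to be calibrated against real frames.

#define IMG_W 52
#define IMG_H 39

/* Split the bottom half of the greyscale image into 4 vertical strips and
   count the dark ("wall") pixels in each; a high count in a strip suggests
   a wall close by in that part of the view. */
void count_dark_strips(const unsigned char grey[IMG_H][IMG_W], int counts[4]) {
    for (int s = 0; s < 4; s++)
        counts[s] = 0;
    for (int y = IMG_H / 2; y < IMG_H; y++)         /* bottom half only */
        for (int x = 0; x < IMG_W; x++)
            if (grey[y][x] < 80)                    /* guessed darkness threshold */
                counts[x / (IMG_W / 4)]++;          /* strips 0..3, 13 px wide */
}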

Monday, February 9, 2009

Hunting for blobs

So, whilst my mapping program is still not bug-free, I've been doing a bit of the CV/feeder-detection stuff that was scheduled for this week in my proposal.

Basically the camera output from the ePuck robots is tiny (52x39), but this also makes it perfect for doing really simple CV on without taking up too much processing time. At the moment I'm using the idea the sample code actually used: it basically just scanned every pixel for bright-red colours (greater than 248 out of 256), and if there was a significant amount (they used 2 pixels) it would recognise that as a feeder. This was probably the most intelligent part of the sample code, and since it was purely a random walker with fluid motion, it just tried to turn so that the biggest blob of red was in its centre.



I have currently tweaked it to suit my approach better, since my robot only walks in a square fashion. At first I thought I could just match a feeder's image to about 5 pixels, but then I realised the reason they used only '2' is that the feeder wouldn't register as more than 2 pixels if it was *really* far away. That, and besides the landmarks everything else is white, so there is little interference. There is a red landmark type, but it is considerably darker than the feeder. Nevertheless, at long range a feeder can look almost as dark due to pixelation... A rough sketch of the detection step is below.
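A rough sketch of the detection step in C. The red threshold of 248 and the minimum of 2 pixels come from the sample-code idea above; the green/blue limits (meant to reject the darker red landmark) and the pixel layout are my own assumptions.

#define IMG_W 52
#define IMG_H 39

typedef struct { unsigned char r, g, b; } Pixel;

/* Scan every pixel for bright red; if at least 2 qualify, call it a feeder
   and report the average column so the robot knows which way to turn. */
int find_feeder(const Pixel img[IMG_H][IMG_W], int *centre_col) {
    int hits = 0, sum_x = 0;
    for (int y = 0; y < IMG_H; y++) {
        for (int x = 0; x < IMG_W; x++) {
            if (img[y][x].r > 248 &&        /* very bright red: feeder, not the  */
                img[y][x].g < 100 &&        /* darker red landmark (green/blue   */
                img[y][x].b < 100) {        /* limits are guesses)               */
                hits++;
                sum_x += x;
            }
        }
    }
    if (hits < 2)                           /* the sample code's minimum blob size */
        return 0;
    *centre_col = sum_x / hits;
    return 1;
}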

Monday, February 2, 2009

Motor Control for Dummies

This is the second of the posts I mentioned before; please forgive the pictures, I drew them myself =(

The Rat's Life ePuck robots, even though they are simulated, are controlled just like real robots in real life, so we have to control each wheel individually rather than give commands like moveForwardOneBlock. So, a very simple intro to motor control (I think it should be self-explanatory; forward & reverse mean power given to the wheels):

(more post after the picture)


Of course the robot is a *little* more complicated than that, in that you can adjust the speed and you have wheel encoders measuring the position of the wheels. The picture below contains the sensor data; the speed/power given to each wheel is shown by the two red numbers in the middle (ignore all the numbers on the side, they are the IR-based distance sensors). The sample code seems to imply that the maximum speed is 300, but I have been unable to find the exact value in the reference material, and it is unclear elsewhere as I've seen references to a speed of 1000. In practice the battery level, which is supposed to be linked to the motor speed, doesn't seem to decrease any faster at 1000, but the robot *does* travel faster. This is a bug I think, or at least it is one on the client side (there was talk of the competition environment locking certain variables..). A small sketch of the per-wheel control idea is below.
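To spell out the per-wheel idea, here is a tiny sketch. set_wheel_speeds() is only a stand-in for the actual simulator call, and 300 is just the maximum speed the sample code seems to imply.

#include <stdio.h>

/* Placeholder: in the real controller this would be the simulator's call for
   setting the power/speed of the left and right wheels. */
void set_wheel_speeds(double left, double right) {
    printf("left=%.0f right=%.0f\n", left, right);
}

/* Equal power on both wheels drives straight; opposite signs spin on the spot. */
void drive_forward(double speed)  { set_wheel_speeds( speed,  speed); }
void drive_backward(double speed) { set_wheel_speeds(-speed, -speed); }
void spin_left(double speed)      { set_wheel_speeds(-speed,  speed); }
void spin_right(double speed)     { set_wheel_speeds( speed, -speed); }

int main(void) {
    drive_forward(300.0);   /* 300 = max speed implied by the sample code */
    spin_left(150.0);       /* turn on the spot at half power */
    return 0;
}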

(wheel encoders and slippage are after the pic, keep reading)


The green number beneath the red ones is the wheel encoder data. It measures how far forward or back the wheel has gone since it started (or since you last told it to reset). This would be very useful if we were to use the dead reckoning method, especially since the manual claimed that the wheels don't slip. However, in practice I have found that applying a power of 300 until the encoder reaches 1000 on each wheel leaves the robot in a different place when nothing was blocking it compared to when something blocked it temporarily. This implies that there is some slippage, which makes relying on the encoders harder if you try to use dead reckoning.
(Note: this can be mitigated with obstacle detection, but still..) A sketch of the drive-until-encoder-target loop is below.
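Here is the drive-until-encoder-target loop as a self-contained sketch, with fake encoders standing in for the simulator. The 1000-tick target is the figure from the test above; the comment at the end is exactly the slippage problem.

#include <stdio.h>

/* Fake encoders/motors standing in for the simulator. */
static double left_enc = 0.0, right_enc = 0.0;
static double left_speed = 0.0, right_speed = 0.0;

static void set_wheel_speeds(double l, double r) { left_speed = l; right_speed = r; }

static void step(void) {               /* one simulation tick */
    left_enc  += left_speed  * 0.1;    /* fake: encoder ticks proportional to power */
    right_enc += right_speed * 0.1;
}

/* "Apply a power of 300 until the encoder reaches 1000 on each wheel." */
void move_one_square(void) {
    const double target = 1000.0;      /* ticks per square, from the test above */
    left_enc = right_enc = 0.0;        /* reset, as the real encoders allow */
    set_wheel_speeds(300.0, 300.0);
    while (left_enc < target || right_enc < target)
        step();
    set_wheel_speeds(0.0, 0.0);
    /* If a wheel slips against an obstacle, the encoder still counts up while
       the robot stays put, so it ends somewhere other than the square centre. */
}

int main(void) {
    move_one_square();
    printf("encoders: %.0f / %.0f\n", left_enc, right_enc);
    return 0;
}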

Also, I forgot to mention in the last post that I had chosen to continue with the grid/square-movement method as opposed to fluid motion with dead reckoning. This discovery about slippage further supports that choice.


Mapping: still has problems; hopefully it will be fixed later in the week and I will have a nice video to show you!

Update: Apparently a book on 'Braitenberg vehicles' is a must-read with reference to this stuff, although the closest I could find is this; the quicker explanation is at the wiki: http://en.wikipedia.org/wiki/Braitenberg_vehicles

To dead or not to dead?

Sorry for the lack of posts over the last week or two, folks; I've just been working silently for the most part. To make up for it I shall be releasing 3 posts today (well, the 3rd is pending whether my mapping algorithm will output successfully.. still fixing that part).

The first post is from the discussion last week: to travel in square blocks, or to travel naturally without grids and use dead reckoning. To illustrate what I mean better, the picture below shows travelling in blocks (top) and grid-less (bottom). NOTE: more post after the jump, so keep reading!



Using the method at the top, it is much easier to map programming-wise; you can just throw it all into an array and voila, you have a map (well, it's a bit harder than that, but that's for another post). For the bottom method, I would have to use a technique called dead reckoning. It has actually been around for a very long time, at least as old as the sixteenth century, as sailors and the like would use it to tell their position. It was later used in air navigation too; not so much these days, but the inertial navigation systems in most aircraft rely on it as a base. Although it isn't their primary method of navigation, it is used as support, particularly in harsh conditions where GPS and the like aren't usable. It is also used in computer games to predict the position of models while waiting for the server to send updates (smoothing out visible lag).
The origin of the name is debated, but it either stems from 'deduced reckoning' or is related to navigating without stars/landmarks ('live'), and is hence 'dead'.
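For the grid-less option, here is a minimal sketch of the dead reckoning pose update from the two wheel encoders. The wheel radius and axle length are made-up numbers rather than the ePuck's real dimensions, and it assumes the encoders report wheel rotation in radians.

#include <math.h>
#include <stdio.h>

#define WHEEL_RADIUS 0.02   /* metres - placeholder, not the ePuck's value */
#define AXLE_LENGTH  0.05   /* metres - placeholder */

typedef struct { double x, y, theta; } Pose;

/* dl, dr: change in the left/right encoder readings since the last update
   (wheel rotation in radians).  Integrates them into the pose estimate. */
void dead_reckon(Pose *p, double dl, double dr) {
    double left   = dl * WHEEL_RADIUS;            /* distance each wheel rolled */
    double right  = dr * WHEEL_RADIUS;
    double dist   = (left + right) / 2.0;         /* forward motion of the centre */
    double dtheta = (right - left) / AXLE_LENGTH; /* change in heading */
    p->theta += dtheta;
    p->x += dist * cos(p->theta);
    p->y += dist * sin(p->theta);
}

int main(void) {
    Pose p = {0.0, 0.0, 0.0};
    dead_reckon(&p, 1.0, 1.2);   /* right wheel a little ahead = gentle left turn */
    printf("x=%.3f y=%.3f theta=%.3f\n", p.x, p.y, p.theta);
    return 0;
}

The catch, as the slippage note in the motor control post shows, is that the encoder deltas are only trustworthy when the wheels don't slip.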

More Info:
http://en.wikipedia.org/wiki/Dead_reckoning

Wednesday, January 21, 2009

Pinky and the Brain

Introducing my two test rats:



which.. then killed the maze window (somehow).