Monday, February 9, 2009

Hunting for blobs

So, whilst my mapping program is still not bug-free, I've been doing a bit of the CV/feeder-detection work that was scheduled for this week in my proposal.

Basically the camera output from the ePuck robots is tiny (52x39), but this also makes it perfect for doing really simple CV without taking up too much processing time. At the moment I'm using the idea the sample code actually used: it just scanned every pixel for bright-red colours (red channel greater than 248 out of 256), and if there was a significant amount of them (they used 2) it recognised that as a feeder. This was probably the most intelligent part of the sample code, and since it was purely a random walker with fluid motion, it just tried to turn so that the biggest blob of red was in its center.
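The scan-and-centre idea above can be sketched roughly like this. This is my own reconstruction, not the actual sample code: the frame layout (a flat row-major list of RGB tuples), the function name `find_feeder`, and the return convention are all assumptions; only the 52x39 size, the >248 red threshold, and the 2-pixel minimum come from the description.

```python
# Sketch of the sample code's red-blob scan, assuming the camera frame
# arrives as a flat row-major list of (r, g, b) tuples. Names and the
# frame format are assumptions; thresholds are from the sample code.

WIDTH, HEIGHT = 52, 39   # ePuck camera resolution
RED_THRESHOLD = 248      # "bright red": red channel greater than 248
MIN_PIXELS = 2           # a distant feeder may only cover ~2 pixels

def find_feeder(frame):
    """Return (found, centre_x): centre_x is the mean column of all
    bright-red pixels, or None if fewer than MIN_PIXELS were seen."""
    columns = [i % WIDTH
               for i, (r, g, b) in enumerate(frame)
               if r > RED_THRESHOLD]
    if len(columns) < MIN_PIXELS:
        return False, None
    return True, sum(columns) / len(columns)
```

Turning towards the blob then just means comparing `centre_x` against `WIDTH / 2` and steering left or right accordingly.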



I have currently tweaked it to suit mine better, since my robot walks only in a square fashion. At first I thought I could just match a feeder's image to about 5 pixels, but then I realised the reason they used only 2 is that the feeder wouldn't register as more than 2 pixels if it was *really* far away. Besides, apart from the landmarks everything else is white, so there is little interference. There is a red landmark type, but it is considerably darker than the feeder. Nevertheless, at long range a feeder can look almost as dark due to pixelation...
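One way to picture the feeder-vs-landmark ambiguity is as a two-level threshold on the red channel. This is purely illustrative: the 150 cut-off for the darker landmark red is invented for the sketch (only the 248 feeder threshold is from the sample code), and as noted above, a distant pixelated feeder can fall into the landmark band anyway.

```python
# Illustrative two-level red threshold. FEEDER_RED comes from the
# sample code; LANDMARK_RED is a made-up cut-off for the darker
# landmark colour. A far-away feeder darkened by pixelation can land
# in the middle band, which is exactly the ambiguity described above.

FEEDER_RED = 248
LANDMARK_RED = 150   # hypothetical value

def classify_red_pixel(r, g, b):
    if r > FEEDER_RED:
        return "feeder"
    if r > LANDMARK_RED:
        return "landmark-or-distant-feeder"   # ambiguous at long range
    return "background"
```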
