NXT-Spy – Let’s put all the image processing processes together

To be able to write my pseudo-AI that will control the robot automatically based on what it sees, I needed to put together all the filters I have written during the last two weeks. So I decided to first design an architecture suitable for the project.

To read this diagram, you have to start at the bottom and work your way up. First, the YUV image is decoded to RGB by the “convert YUV” process. The image is then sent to the UI and to a demultiplexer that simply dispatches the image to the processes that need it.

The demultiplexer reads the current state of the program (seeking blue ball, seeking red flag, idle, etc.) and sends the image to the processes that need it. For example, if the state is “find a ball”, the demultiplexer will send the image to the red ball processor and to the blue ball processor. This avoids analysing images that we won’t need.
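To make the idea concrete, here is a minimal sketch of what such a demultiplexer process could look like, assuming the JCSP library (org.jcsp) is what provides the processes and channels; the state constants and channel names are illustrative, not the real ones from my code.

    import org.jcsp.lang.AltingChannelInput;
    import org.jcsp.lang.CSProcess;
    import org.jcsp.lang.ChannelInput;
    import org.jcsp.lang.ChannelOutput;

    // Illustrative demultiplexer: forwards each frame only to the processors
    // that the current state requires, dropping frames nobody needs.
    public class ImageDemultiplexer implements CSProcess {
        public static final int SEEK_BALL = 0, SEEK_FLAG = 1, IDLE = 2;

        private final ChannelInput imageIn;          // frames from the YUV-to-RGB converter
        private final AltingChannelInput stateIn;    // state updates from the controller
        private final ChannelOutput toRedBall, toBlueBall, toFlags;

        public ImageDemultiplexer(ChannelInput imageIn, AltingChannelInput stateIn,
                                  ChannelOutput toRedBall, ChannelOutput toBlueBall,
                                  ChannelOutput toFlags) {
            this.imageIn = imageIn;
            this.stateIn = stateIn;
            this.toRedBall = toRedBall;
            this.toBlueBall = toBlueBall;
            this.toFlags = toFlags;
        }

        public void run() {
            int state = IDLE;
            while (true) {
                Object frame = imageIn.read();
                while (stateIn.pending()) {          // pick up a state change if one is waiting
                    state = ((Integer) stateIn.read()).intValue();
                }
                if (state == SEEK_BALL) {
                    toRedBall.write(frame);
                    toBlueBall.write(frame);
                } else if (state == SEEK_FLAG) {
                    toFlags.write(frame);
                }                                    // IDLE: the frame is simply dropped
            }
        }
    }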

The ball image processors have a “configure” input that allows the user to set the HSI intervals. This is not needed for the flag processors because they work with black and white images that don’t require any human calibration.

After the image is processed, it is sent to the UI (so that the user has feedback while calibrating the ball processors). The processors also output the coordinates of the object they have identified as the one they are looking for (this will allow the robot to keep the object it is tracking centred so that it won’t lose it). The coordinate buffers are only there to prevent the processors from getting stuck if the coordinates are not read quickly.
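My design uses dedicated buffer processes for this; as a sketch of the same idea, a one-place overwriting buffered channel (again assuming JCSP) gives the “never block the processor” behaviour directly:

    import org.jcsp.lang.Channel;
    import org.jcsp.lang.One2OneChannel;
    import org.jcsp.util.OverWriteOldestBuffer;

    // One possible way to hand coordinates to the controller without ever blocking
    // the image processor: a one-place overwriting channel. If the controller has
    // not read the previous coordinates yet, a new write simply replaces them.
    public class CoordinateChannels {
        public static One2OneChannel createCoordinateChannel() {
            return Channel.one2one(new OverWriteOldestBuffer(1));
        }
    }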

And finally, at the top, there is the controller, which will have access to all the information it needs to run a state-based AI (the coordinates of the object it is looking for, and direct access to the robot controls).
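The controller is not written yet, but a possible shape for it could be the skeleton below (again a JCSP-style sketch; the state value, the message types and the centring arithmetic are assumptions made up for the illustration):

    import org.jcsp.lang.CSProcess;
    import org.jcsp.lang.ChannelInput;
    import org.jcsp.lang.ChannelOutput;

    // Illustrative skeleton only: read the latest coordinates of the tracked object
    // and steer so that it stays centred in the frame.
    public class ControllerSketch implements CSProcess {
        private static final int SEEK_BALL = 0;          // assumed state encoding

        private final ChannelInput ballCoordinates;      // from the ball processors, via the buffers
        private final ChannelOutput stateOut;            // current state, read by the demultiplexer
        private final ChannelOutput motorCommands;       // direct access to the robot controls

        public ControllerSketch(ChannelInput ballCoordinates,
                                ChannelOutput stateOut, ChannelOutput motorCommands) {
            this.ballCoordinates = ballCoordinates;
            this.stateOut = stateOut;
            this.motorCommands = motorCommands;
        }

        public void run() {
            stateOut.write(Integer.valueOf(SEEK_BALL));
            while (true) {
                int[] xy = (int[]) ballCoordinates.read();
                int error = xy[0] - 160;                 // assuming a 320-pixel-wide frame
                // one byte per motor: turn towards the object to keep it centred
                motorCommands.write(new byte[] { (byte) (50 + error / 8),
                                                 (byte) (50 - error / 8) });
            }
        }
    }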

After doing this analysis, I coded most of it, and it works so far. It is not completely tested, as neither the controller nor the configuration UI is coded yet. For the moment I have written a total of 10 classes, 15 processes and 29 channels.

So as you guessed, the next step is to code the controller’s algorithm that will drive the robot where it needs to go.

Supervisor’s comment:

OK so the architecture is designed and coding is progressing well
Comment: It’s really easy because I am using processes to undertake the actions!

NXT-Spy – Flag system

For the robot to recognize the drop zones (the places where it will drop the balls), I thought about putting little flags on them with a black and white pattern (to avoid any interference with the colored ball recognition).

My first idea was to work with 2D barcodes, using a widely used open-source Java library called ZXing [source]. It can decode these two 2D barcode formats:

  • QR Code
  • Data Matrix

So I tried to encode the letter B and the letter R (for blue and red) in each of these formats.

QR code:

Data matrix:

These are the simplest 2D codes I could generate, and I am not really happy with their size and complexity. Even the simplest of them (the Data Matrix) produces a 10×10 grid of modules.
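For reference, decoding one of these symbols from a camera frame with ZXing should look roughly like the sketch below (it assumes ZXing’s standard reader classes and that the frame arrives as a BufferedImage, as in the rest of my pipeline):

    import java.awt.image.BufferedImage;

    import com.google.zxing.BinaryBitmap;
    import com.google.zxing.MultiFormatReader;
    import com.google.zxing.NotFoundException;
    import com.google.zxing.Result;
    import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
    import com.google.zxing.common.HybridBinarizer;

    // Try to find and decode a QR Code / Data Matrix in one frame.
    // Returns the decoded text ("B" or "R" for our flags) or null if nothing was found.
    public class FlagDecoder {
        public static String decode(BufferedImage frame) {
            try {
                BinaryBitmap bitmap = new BinaryBitmap(
                        new HybridBinarizer(new BufferedImageLuminanceSource(frame)));
                Result result = new MultiFormatReader().decode(bitmap);
                return result.getText();
            } catch (NotFoundException e) {
                return null;
            }
        }
    }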

So I am still trying to find a simpler 2D decodable symbol.

Supervisor’s comment:

Demonstrated that the QR matrix is probably the best solution because there are Java classes that detect it!
Arnaud was worried about the size of the flag relative to the space the balls are working in.  I suggested a space of no more than 1.5m square, which will be feasible.  This will require quite large flags, and perhaps a vertical phone lower down on the robot.
We also discussed his process architecture for ball recognition and robot control.  It has an implicit deadlock due to its network BUT he can argue that it will not deadlock by presenting the appropriate case and the order in which inputs to the processes happen.
As usual he is making really good progress; very well done
The task for next week is to see how well the system can detect QR codes, at what distance and size and orientation of flag.

NXT-Spy – Let’s analyse the images!

Now that we have a fully functioning remote-controllable robot with a live image from its embedded camera, we can start thinking about the next step of the project: allowing the robot to automatically recognize colored balls, go and grab them, and finally bring them to a zone corresponding to their color.

Fortunately, I have found a color recognition algorithm written in Java on the website uk-dave.com [source]. After analyzing this code, I noticed that one of the classes, called “ImageProcessor”, takes a BufferedImage as input and outputs an image with the isolated colored ball and its coordinates.

It was very easy to integrate into my system: I only had to convert it into a process, add two outputs to it, and remove the unused functions and attributes.
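As a sketch of what that conversion looks like (assuming JCSP for the process and channels, and assuming ImageProcessor exposes something like a process() call and a coordinates getter; the exact method names of uk-dave’s class may differ):

    import java.awt.image.BufferedImage;

    import org.jcsp.lang.CSProcess;
    import org.jcsp.lang.ChannelInput;
    import org.jcsp.lang.ChannelOutput;

    // Wrap the (sequential) ImageProcessor class as a process with one image input
    // and the two outputs mentioned above: the filtered image and the coordinates.
    public class BallProcessorProcess implements CSProcess {
        private final ChannelInput imageIn;
        private final ChannelOutput filteredImageOut;   // to the UI, for calibration feedback
        private final ChannelOutput coordinatesOut;     // to the coordinate buffer

        private final ImageProcessor processor = new ImageProcessor();

        public BallProcessorProcess(ChannelInput imageIn,
                                    ChannelOutput filteredImageOut,
                                    ChannelOutput coordinatesOut) {
            this.imageIn = imageIn;
            this.filteredImageOut = filteredImageOut;
            this.coordinatesOut = coordinatesOut;
        }

        public void run() {
            while (true) {
                BufferedImage frame = (BufferedImage) imageIn.read();
                BufferedImage filtered = processor.process(frame);   // assumed method name
                filteredImageOut.write(filtered);
                coordinatesOut.write(processor.getCoordinates());    // assumed method name
            }
        }
    }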

Here is how uk-dave’s system works:

  1. First, the user sets intervals of hue, saturation and intensity corresponding to the color he wants to match.
  2. The function then converts the input image to the HSI (hue, saturation, intensity) color space.
  3. It then applies the hue, saturation and intensity intervals set by the user, keeping only the pixels from the original image that fall within those intervals (a sketch of this step is shown after the list).
  4. Finally, thanks to an algorithm called “blob coloring”, it locates the biggest cluster of connected pixels in the filtered image.
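Here is a minimal sketch of the thresholding step (step 3). It uses Java’s built-in RGB-to-HSB conversion as an approximation of HSI; the interval bounds would come from the user’s calibration:

    import java.awt.Color;
    import java.awt.image.BufferedImage;

    // Keep only the pixels whose hue, saturation and intensity fall within the
    // user-set intervals; everything else is blanked out before blob coloring.
    public class HsiFilterSketch {
        public static BufferedImage filter(BufferedImage in,
                                           float hMinDeg, float hMaxDeg,   // hue bounds in degrees
                                           float sMin, float sMax,
                                           float iMin, float iMax) {
            BufferedImage out = new BufferedImage(in.getWidth(), in.getHeight(),
                                                  BufferedImage.TYPE_INT_RGB);
            float[] hsb = new float[3];
            for (int y = 0; y < in.getHeight(); y++) {
                for (int x = 0; x < in.getWidth(); x++) {
                    int rgb = in.getRGB(x, y);
                    Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, hsb);
                    float hueDeg = hsb[0] * 360f;
                    boolean keep = hueDeg >= hMinDeg && hueDeg <= hMaxDeg
                                && hsb[1] >= sMin && hsb[1] <= sMax
                                && hsb[2] >= iMin && hsb[2] <= iMax;
                    out.setRGB(x, y, keep ? rgb : 0x000000);
                }
            }
            return out;
        }
    }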

After a lot of tweaking on the three different channels (hue, saturation and intensity), I finally managed to isolate the red ball:

So it is quite efficient, but I have found one problem: every time the luminosity of the room changes, we have to adjust the HSI intervals a little. One thing I have noticed is that the hue interval is almost independent of the luminosity of the room. After some research, I found that this is because the hue represents the color as an angle (0° is red, 120° is green, 240° is blue and 360° wraps back to red). With the images from the phone’s camera, I have found that the hue interval for the red ball is 7° to 20°, and for the blue ball 183° to 236°.

Because the S and I channels depend on the luminosity, it could be interesting to use the Lego light sensors (the RGB or light sensor) to set these values automatically based on the room’s luminosity.

The next step is to write the algorithm that will make the robot locate and seek the balls.

Supervisor’s comment:

Well done; just what I expected to see given last week’s discussions.
The fact that you have also determined its limitations is very good.

NXT-Spy – First implementation of the robot control with image transfer in parallel

Here we are: after more than four months of work on this part of the project, I have finally managed to put everything together :)

Here is the UML diagram showing how the different processes are interacting between each other :

For this implementation, I simply combined the work done on the camera transfer with the work done on the robot control, and thanks to the parallel architecture it was quite easy to achieve.

Here is a little video to illustrate it. Note: the frame rate is not very good because the video was recorded a little too far from the router.

[yframe url=’http://www.youtube.com/watch?v=tR_KIEMUDY4′]

Supervisor’s comment:

Arnaud sent me the bulk of the chapter that describes what he has been doing to explore the basic technology.
This was a joy to read; well structured and well organised but above all an informative and enjoyable read.
I made some suggestions concerning the expansion of some parts and a better structure.
He has found a web site, uk-dave.com, that contains a graphics processing class designed for a Lego robot to detect coloured balls.  It processes data in exactly the form that his pipeline generates.  This means that he can spend time on moving the balls to selected corners without having to build the graphics engine. Excellent!!

NXT-Spy – The first fully parallel based robot control is working

The results of these last few months’ work are starting to come together to create the final project! Since last weekend, I can remote-control my Lego robot over my Wi-Fi network using only parallel processes.

Here is a little video showing that the whole thing works quite smoothly.

[yframe url=’http://www.youtube.com/watch?v=T6JYIA7YZ_M’]

For this part of the project, the only piece I hadn’t worked on yet was the Bluetooth channel abstraction.

On the Android phone, I have coded a very simple channel and Bluetooth abstraction: we call a class called BluetoothLinkClient that establishes a Bluetooth connection, and we can then create a BluetoothChannelOutput that uses BluetoothLinkClient to write bytes on the previously created socket. On the robot, I did exactly the same thing: a BluetoothLinkServer and a BluetoothChannelInput. Finally, on the PC, I wrote a simple UI with OnKey events that write to the Wi-Fi socket every time a key is pressed.
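As a sketch of what the output end of this abstraction might look like (the class names are the real ones from my code, but the assumption that BluetoothLinkClient simply exposes the socket’s OutputStream, and the method bodies, are illustrative):

    import java.io.IOException;
    import java.io.OutputStream;

    // Channel-style wrapper around the output stream of an already-open Bluetooth socket.
    // BluetoothChannelInput on the robot side would mirror this with an InputStream.
    public class BluetoothChannelOutput {
        private final OutputStream out;

        public BluetoothChannelOutput(OutputStream out) {
            this.out = out;
        }

        // One byte per motor, as described below: a single write sends a full command.
        public void write(byte leftMotor, byte rightMotor) throws IOException {
            out.write(leftMotor);
            out.write(rightMotor);
            out.flush();
        }
    }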

To control the two motors separately from the PC, two bytes are sent to the phone and then transmitted to the robot, one for each motor. It could be interesting in the future to add a third byte for the speed because, for the moment, with only one byte per motor we can only send speeds from 0 to 255, or from -128 to +127 if we also want to be able to go backward.
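A tiny example of how a signed speed survives the trip through a single byte (the cast back to byte on the receiving side is what restores the sign):

    // Encode a speed of -60 into one byte and recover it on the other side.
    public class SpeedByteDemo {
        public static void main(String[] args) {
            int speed = -60;                 // must fit in -128..+127
            byte wire = (byte) speed;        // the byte actually written on the stream

            int raw = wire & 0xFF;           // what InputStream.read() returns: 196
            int recovered = (byte) raw;      // casting back to byte restores the sign
            System.out.println(recovered);   // prints -60
        }
    }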

Supervisor’s comment:

The demo was excellent.  The camera is a little offset, which makes it harder to move the robot towards a distant object as the robot gets closer to the object.  This is needed for the next part of the challenge, where the robot identifies different coloured balls and moves them to a predetermined area.
The frame rate is more than adequate, even though resolution is not high but is still adequate for the challenge.
What is required now is to obtain a graphics pipeline for Java similar to RoboRealm as per the paper from Denis Nicole, so that we can do the interesting bit; namely identifying different coloured balls, moving towards a selected ball, catching it and moving it towards a place where the same coloured balls are stored.  That is we are building a ball sorting machine using a Lego robot.  The robot does not need to be any more complex in terms of additional sensors etc because we can just move the ball by having a fixed ‘catcher’ on the front of the robot.
Inevitably Arnaud has not done any more writing other than to undertake the modifications I suggested last time. For next week he needs to write up what he has achieved so far.  This will be the bulk of a chapter entitled ‘Initial Technological Exploration’