NXT-Spy – Stop button, architecture diagram revised and first ball following algorithm

A lot of quality work this week:
– First, because I was fed up with having to close the application on all three devices every time I wanted to restart the experiment, I added a “stop” button that broadcasts a “stop” message to the two remote devices (phone and robot), automatically stopping the application running on each of them.
– Then, I removed the “start the server” button on the phone because it was no longer very useful: the server can now start automatically when the phone’s application starts.
– Finally, I revised the architecture of the program running on the PC:
* First of all, I merged all the flag processing processes into a single process: there is no point analyzing the same image twice with the same algorithm and the same parameters. Instead, I added three outputs to the new flag processing process (one for each flag: red, blue and home); there is a small sketch of this just after the list.
* I also removed the multiplexer. It was there to avoid processing images we didn’t need, but it is just too complicated to manage, and the three image processing processes are not that heavy to run. Removing it also simplified the diagram a lot by eliminating the potential deadlock that we could see in the first architecture.
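
For illustration, here is a minimal JCSP-style sketch of the merged flag processing process. FlagProcessor and detectFlags are illustrative names, not the real classes, and I am assuming the frame arrives as a pixel array:

    import org.jcsp.lang.CSProcess;
    import org.jcsp.lang.ChannelInput;
    import org.jcsp.lang.ChannelOutput;

    // Sketch only: the frame is read once, analysed once, and the result
    // is fanned out to the three flag outputs.
    public class FlagProcessor implements CSProcess {

        private final ChannelInput imageIn;
        private final ChannelOutput redOut, blueOut, homeOut;

        public FlagProcessor(ChannelInput imageIn, ChannelOutput redOut,
                             ChannelOutput blueOut, ChannelOutput homeOut) {
            this.imageIn = imageIn;
            this.redOut = redOut;
            this.blueOut = blueOut;
            this.homeOut = homeOut;
        }

        public void run() {
            while (true) {
                int[] pixels = (int[]) imageIn.read(); // one read per frame
                Object[] flags = detectFlags(pixels);  // one analysis...
                redOut.write(flags[0]);                // ...three outputs
                blueOut.write(flags[1]);
                homeOut.write(flags[2]);
            }
        }

        // Hypothetical stub standing in for the real analysis; it would
        // return the positions of the red, blue and home flags (null
        // where a flag is not in view).
        private Object[] detectFlags(int[] pixels) {
            return new Object[3];
        }
    }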

During this week, I also started writing the state-based “artificial intelligence” that will drive the robot. I have only written the ball following algorithm so far; it is still not perfect, but I have already got some interesting results…
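
The heart of the ball following step is simply keeping the ball’s x coordinate centred in the frame. A minimal sketch, assuming a 160-pixel-wide image and hypothetical turnLeft/turnRight/forward robot commands:

    class BallFollower {

        static final int CENTRE = 80;   // half of an assumed 160-pixel frame
        static final int DEADBAND = 15; // tolerance before we bother turning

        // One step of the follower: steer so the ball drifts to the centre.
        void follow(int ballX) {
            if (ballX < CENTRE - DEADBAND) {
                turnLeft();             // ball on the left: rotate towards it
            } else if (ballX > CENTRE + DEADBAND) {
                turnRight();            // ball on the right
            } else {
                forward();              // ball roughly centred: close in on it
            }
        }

        // Hypothetical robot commands; the real ones are sent to the robot.
        void turnLeft()  { }
        void turnRight() { }
        void forward()   { }
    }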

Supervisor’s comment:

This is a much improved architecture that follows client-server principles more closely. It makes good sense.
Pleased to see that you are already starting to seek and find a ball.
During the week we discussed how he might “capture” a ball so that it can be moved around more easily.  This will have to wait until Christmas when he returns home and has his full Lego kit available to play with.
More coding is the name of the game for next week.

NXT-Spy – First diagram for the robot control algorithm

Here is the first diagram I have drawn to describe how the AI that will drive the robot will work.

While drawing this diagram, I also thought about a solution to a problem highlighted by Kevin, my second marker:
how to stop the robot from chasing the balls it has already sorted instead of the balls that still need to be sorted.
The solution I have found is to create a third flag, which we will call the “home” flag, placed at the robot’s starting position. Thanks to this flag, the robot will be able to go back to its initial position every time it has sorted a ball.
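
In state machine terms, the home flag just adds one more state. A hypothetical sketch of the transition it enables (the state names are mine, not the diagram’s):

    class SortingStates {

        enum State { FIND_BALL, FOLLOW_BALL, FIND_FLAG, FOLLOW_FLAG, GO_HOME }

        // After a ball has been dropped at its flag, the robot goes home
        // first, so every new search starts from the start position and
        // never wanders back towards balls that are already sorted.
        State next(State s, boolean ballDropped, boolean atHome) {
            if (s == State.FOLLOW_FLAG && ballDropped) return State.GO_HOME;
            if (s == State.GO_HOME && atHome) return State.FIND_BALL;
            return s; // the remaining transitions are omitted in this sketch
        }
    }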

Supervisor’s comment:

The plan is very sensible and is realistic with respect to what has been done and what needs to be done.
The control algorithm also seems to capture what needs to be done.  I like the idea of the Home base and the way in which it can resolve problems with previously sorted balls.
I think a continuous 5 degree turn in one direction might be better replaced by a wiggle, and this could provide a comparison of techniques which could be measured in the evaluation.
A user interface to remember the hue, saturation and intensity intervals for the different objects (balls and flags) has been constructed and is really cool. It has taken a lot of work, because such interfaces always do! The fact that it is a JCSP-awt application is even nicer!! It might be worth a little aside in the report on this aspect. He also wrote an active Swing slider control!!!

NXT-Spy – Let’s put all the image processing processes together

To be able to write the pseudo-AI that will control the robot automatically based on what it sees, I needed to put together all the filters I have written over the last two weeks. So I decided to first design an architecture suitable for the project.

To read this diagram, you have to start from the bottom and work up to the top. First, the YUV image is decoded to RGB by the convert YUV process. The image is then sent to the UI and to a demultiplexer that dispatches it to the processes that need it.
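
As a quick aside on that first step, the per-pixel decode looks roughly like this (a sketch assuming the common full-range YUV formulas; the exact variant the phone sends may differ):

    class ConvertYuv {

        // One pixel of the YUV -> RGB decode, packed as 0xRRGGBB.
        static int yuvToRgb(int y, int u, int v) {
            int r = clamp((int) (y + 1.402 * (v - 128)));
            int g = clamp((int) (y - 0.344 * (u - 128) - 0.714 * (v - 128)));
            int b = clamp((int) (y + 1.772 * (u - 128)));
            return (r << 16) | (g << 8) | b;
        }

        static int clamp(int c) { return c < 0 ? 0 : (c > 255 ? 255 : c); }
    }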

The demultiplexer reads the current state of the program (seeking blue ball, seeking red flag, idle, etc.) and sends the image only to the processes that need it. For example, if the state is “find a ball”, the demultiplexer sends the image to the red ball processor and to the blue ball processor. This avoids analyzing images that we won’t need.
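
Here is a sketch of what such a demultiplexer could look like as a JCSP process. I am assuming, for illustration only, that the current state is shared through an AtomicReference; the real wiring may differ:

    import java.util.concurrent.atomic.AtomicReference;
    import org.jcsp.lang.CSProcess;
    import org.jcsp.lang.ChannelInput;
    import org.jcsp.lang.ChannelOutput;

    public class ImageDemux implements CSProcess {

        enum State { FIND_BALL, FIND_RED_FLAG, FIND_BLUE_FLAG, IDLE }

        private final AtomicReference<State> state;
        private final ChannelInput imageIn;
        private final ChannelOutput redBall, blueBall, redFlag, blueFlag;

        public ImageDemux(AtomicReference<State> state, ChannelInput imageIn,
                          ChannelOutput redBall, ChannelOutput blueBall,
                          ChannelOutput redFlag, ChannelOutput blueFlag) {
            this.state = state;
            this.imageIn = imageIn;
            this.redBall = redBall;
            this.blueBall = blueBall;
            this.redFlag = redFlag;
            this.blueFlag = blueFlag;
        }

        public void run() {
            while (true) {
                Object frame = imageIn.read();
                switch (state.get()) {
                    case FIND_BALL:           // both ball processors need it
                        redBall.write(frame);
                        blueBall.write(frame);
                        break;
                    case FIND_RED_FLAG:
                        redFlag.write(frame);
                        break;
                    case FIND_BLUE_FLAG:
                        blueFlag.write(frame);
                        break;
                    default:                  // idle: just drop the frame
                        break;
                }
            }
        }
    }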

The ball image processors have a “configure” input that allows the user to set the HSI intervals. It is not needed for the flag processors because they work with black and white images that don’t need any human calibration.
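
The interval test itself is simple. A sketch, where the six bounds are whatever last arrived on the “configure” channel (the names are illustrative):

    class HsiInterval {

        // Bounds updated whenever a new configuration arrives on the
        // "configure" channel.
        float hMin, hMax, sMin, sMax, iMin, iMax;

        // True if a pixel's hue/saturation/intensity falls in the intervals.
        boolean contains(float h, float s, float i) {
            return h >= hMin && h <= hMax
                && s >= sMin && s <= sMax
                && i >= iMin && i <= iMax;
        }
    }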

After the image is processed, it is sent to the UI (so the user has feedback when calibrating the ball processors). Each processor also outputs the coordinates of the object it has identified as the one it is looking for (this will allow the robot to keep that object centred so that it won’t lose it). The coordinate buffers are only there to stop the pipeline getting stuck if the coordinates are not read quickly.
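
One way to get that behaviour in JCSP is an overwriting buffered channel, so that a slow reader never blocks the image processor. A sketch (the real code may use a dedicated buffer process instead):

    import org.jcsp.lang.Channel;
    import org.jcsp.lang.One2OneChannel;
    import org.jcsp.util.OverWriteOldestBuffer;

    class CoordinateBuffers {

        // The writer overwrites the oldest unread coordinate instead of
        // blocking, so a busy controller can never stall the processors.
        static One2OneChannel coordinateChannel() {
            return Channel.one2one(new OverWriteOldestBuffer(1));
        }
    }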

And finally, at the top, there is the controller, which has access to all the information it needs to run a state-based AI (the coordinates of the objects it needs and direct access to the robot controls).
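
The controller can wait on several coordinate channels at once using a JCSP Alternative. A minimal sketch, with assumed channel names and a hypothetical steerTowards helper:

    import org.jcsp.lang.Alternative;
    import org.jcsp.lang.AltingChannelInput;
    import org.jcsp.lang.Guard;

    class ControllerStep {

        // Wait for whichever processor reports first, then steer.
        void step(AltingChannelInput redBallCoords,
                  AltingChannelInput blueBallCoords) {
            Alternative alt = new Alternative(
                    new Guard[] { redBallCoords, blueBallCoords });
            switch (alt.select()) { // blocks until one channel has data
                case 0: steerTowards((int[]) redBallCoords.read());  break;
                case 1: steerTowards((int[]) blueBallCoords.read()); break;
            }
        }

        // Hypothetical: turn/drive so the given (x, y) stays centred.
        void steerTowards(int[] xy) { }
    }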

After I did this analysis, I coded most of it, and it works so far. It is not completely tested, as neither the controller nor the configuration UI is coded yet. For the moment, it totals 10 classes, 15 processes and 29 channels.

So as you guessed, the next step is to code the controller’s algorithm that will drive the robot where it needs to go.

Supervisor’s comment:

OK, so the architecture is designed and coding is progressing well.
Comment: It’s really easy because I am using processes to undertake the actions!