NXT-Spy – The robot is working… almost!

All of the controller has now been tested. The verdict is that it behaves correctly, but it is very slow to operate. In order to fix the object-following algorithm, I created a graph in Excel to get a better idea of how the robot was behaving based on the position of the ball on the screen.
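The post doesn't show the mapping that came out of the graph, but the kind of rule such a plot suggests is a simple proportional one: turn harder the further the ball sits from the centre of the frame. A minimal Java sketch of that idea (the class, names and scaling are my own illustration, not the project's actual code):

```java
// Hypothetical proportional steering: maps the ball's horizontal position
// on screen to a turn command. All names and constants are illustrative.
class Steering {
    // ballX: x-coordinate of the detected ball's centre (pixels)
    // frameWidth: width of the camera image (pixels)
    // returns a turn rate in [-1, 1]: negative = turn left, positive = right
    static double turnRate(int ballX, int frameWidth) {
        double centre = frameWidth / 2.0;
        return (ballX - centre) / centre;
    }
}
```

Plotting the robot's measured behaviour against this line is one way to see where the real controller over- or under-steers.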

Supervisor’s comment:

I read and commented on the draft report and made a number of structural suggestions, including the need for an introductory chapter and also a conclusion chapter. We have the meat of the dissertation but not the bookends!

NXT-Spy – A new year = A new robot

This new robot has an amazing working arm, and it has a new phone holder that allows us to put the phone much lower than before and to have a centered camera (there was an issue with the camera being on one side of the robot).

Supervisor’s comment:

The newly engineered robot is able to pick up and deposit a ball extremely well.  I am most impressed.  The interesting bit in fact is the release of the ball and this appears to have been engineered very well.
The environment in which the robot will operate has to be carefully designed to fit with the operational characteristics of the robot.
The next task is to finish the control algorithm!
Write a chapter on the engineering of the robot and its environment

NXT-Spy – First diagram for the robot control algorithm

Here is the first diagram I have made to describe how the AI that will drive the robot will work.

By making this diagram, I have also thought of a solution to a problem highlighted by Kevin, my second marker:
how to stop the robot from chasing the balls it has already sorted instead of the balls that still need to be sorted.
The solution I have found is to create a third flag, which we will call the “home” flag, placed at the robot’s starting position. Thanks to this flag, the robot will be able to go back to its initial position every time it has sorted a ball.
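As a sketch, the home flag simply adds one more state to the sorting loop: seek a ball, carry it to its flag, then drive back to the home flag before seeking again, so already-sorted balls are left behind. The state names below are my own illustration, not taken from the project:

```java
// Illustrative state machine for the sorting loop with a "home" flag.
class SorterStates {
    enum State { SEEK_BALL, CARRY_TO_FLAG, RETURN_HOME }

    // next state once the current step has completed
    static State next(State s) {
        switch (s) {
            case SEEK_BALL:     return State.CARRY_TO_FLAG; // ball grabbed
            case CARRY_TO_FLAG: return State.RETURN_HOME;   // ball released at flag
            default:            return State.SEEK_BALL;     // back at home base
        }
    }
}
```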

Supervisor’s comment:

The plan is very sensible and is realistic with respect to what has been done and what needs to be done.
The control algorithm also seems to capture what needs to be done.  I like the idea of the Home base and the way in which it can resolve problems with previously sorted balls.
I think a continuous 5-degree turn in one direction might be better replaced by a wiggle, and this could provide a comparison of techniques which could be measured in the evaluation.
A user interface to remember the hue, saturation and intensity intervals for the different objects, balls and flags has been constructed and is really cool. It has taken a lot of work, because such interfaces always do! The fact that it is a JCSP-awt application is even nicer!! It might be worth a little aside in the report on this aspect. He also wrote an active Swing slider control!!!

NXT-Spy – Let’s put all the image processing processes together

To be able to write my pseudo-AI that will control the robot automatically based on what it sees, I needed to put together all the filters I have written during the last two weeks. So I decided to first design an architecture suitable for the project.

To read this diagram, you have to start from the bottom and work up to the top. First, we decode the YUV image to RGB thanks to the convert YUV process. The image is then sent to the UI and to a demultiplexer that dispatches the image to the processes that need it.
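The post doesn't show the conversion itself, but a standard integer YCbCr-to-RGB formula for a single pixel looks like the sketch below; the project's convert YUV process presumably does something equivalent over the whole frame:

```java
// Standard integer YUV (YCbCr, video range) to RGB conversion for one pixel.
// A per-frame converter would apply this to every pixel of the camera image.
class YuvToRgb {
    // y, u, v in [0, 255]; returns {r, g, b} each clamped to [0, 255]
    static int[] convert(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return new int[] { r, g, b };
    }
    static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }
}
```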

The demultiplexer reads the current state of the program (seeking blue ball, seeking red flag, idle, etc.) and sends the image only to the processes that need it. For example, if the state is “find a ball”, then the demultiplexer will send the image to the red ball processor and to the blue ball processor. This avoids analyzing images that we won’t need.
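The routing decision itself can be sketched as a simple lookup from state to target processors. The state strings and processor names here are illustrative, not the project's actual identifiers, and the real demultiplexer would do this over JCSP channels rather than return a list:

```java
import java.util.*;

// Sketch of the demultiplexer's routing rule: given the current program
// state, decide which image processors should receive the frame.
class Demux {
    static List<String> targets(String state) {
        switch (state) {
            case "find a ball":   return Arrays.asList("redBall", "blueBall");
            case "find red flag":  return Arrays.asList("redFlag");
            case "find blue flag": return Arrays.asList("blueFlag");
            default:               return Collections.emptyList(); // idle: drop the frame
        }
    }
}
```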

The ball image processors have a “configure” input that allows the user to configure the HSI intervals. It is not needed for the flag processors, because they work with black and white images that don’t need any human calibration.
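At pixel level, calibrating HSI intervals amounts to an interval test: a pixel belongs to the target ball if its hue, saturation and intensity all fall inside the user-chosen bounds. A minimal sketch of that test (parameter names are mine; the project's processors presumably apply something like this to every pixel):

```java
// Per-pixel HSI interval test, as configured through the UI sliders.
class HsiFilter {
    // h, s, i: the pixel's hue, saturation and intensity
    // the *Min/*Max pairs are the user-configured intervals
    static boolean matches(double h, double s, double i,
                           double hMin, double hMax,
                           double sMin, double sMax,
                           double iMin, double iMax) {
        return h >= hMin && h <= hMax
            && s >= sMin && s <= sMax
            && i >= iMin && i <= iMax;
    }
}
```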

After the image is processed, it is sent to the UI (so that the user has feedback when calibrating the ball processors). Each processor also outputs the coordinates of the object it has identified as the one it is looking for (this will allow the robot to keep the object it is tracking centered, so that it won’t lose it). The coordinate buffers are only there to avoid anything getting stuck if the coordinates are not read quickly.
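In JCSP terms these buffers would be buffered channels; the behaviour they provide is that the writer never blocks and a slow reader simply sees the most recent coordinates. A plain-Java sketch of that behaviour, without the JCSP plumbing:

```java
// One-place overwriting buffer for object coordinates: writes never block,
// a new pair simply replaces the old one, and the reader gets the latest.
// (In JCSP this role would be played by a buffered channel.)
class CoordBuffer {
    private int[] latest; // null until the first write

    synchronized void write(int x, int y) { latest = new int[] { x, y }; }

    synchronized int[] read() { return latest; }
}
```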

And finally, on top, there is the controller, which will have access to all the information it needs to run a state-based AI (the coordinates of the objects it needs and direct access to the robot controls).
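One step of such a controller can be sketched as: read the tracked object's coordinates, then pick the motor command that keeps it centered. The command names and tolerance below are my own illustration, not the project's code:

```java
// One controller "tick": given the tracked object's x-coordinate, choose
// a drive command that keeps the object centered in the frame.
class Controller {
    // objX: x-coordinate of the tracked object; frameWidth: image width
    static String tick(int objX, int frameWidth) {
        int centre = frameWidth / 2;
        int tolerance = frameWidth / 10; // dead band around the centre
        if (objX < centre - tolerance) return "LEFT";
        if (objX > centre + tolerance) return "RIGHT";
        return "FORWARD"; // object roughly centered: drive towards it
    }
}
```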

After I did this analysis, I coded most of it, and it works so far. It is not completely tested, as neither the controller nor the configuration UI is coded yet. So far I have written a total of 10 classes, 15 processes and 29 channels.

So as you guessed, the next step is to code the controller’s algorithm that will drive the robot where it needs to go.

Supervisor’s comment:

OK, so the architecture is designed and coding is progressing well.
Comment: It’s really easy because I am using processes to undertake the actions!