NXT-Spy – Fixes to the flag coordinate processing and to the output of the coloured ball processing

In order to get the coordinates of the flag from the flag processing library (zxing), we use a function called getResultPoints() on the result object it outputs. The problem is that this function only outputs three points. Until now, I assumed these were the top left, top right and bottom left coordinates of the flag, but I have just figured out that it is more complicated than that.

In fact, the flag processing only provides us with the coordinates of the “target style” points you can find on any QR code, and in a random order. To get the flag’s position precisely, we first need to find the coordinates of the fourth corner of the flag, and then compute the smallest rectangle in which all the points fit, which corresponds to the flag’s position.
That’s why today I added to the flag processing process a function that calculates and draws the two rectangles involved in the process, as shown in the image below:

Assuming we have a triangle ABC made from the three QR points placed randomly (their order can change if we rotate the flag), the first step is to find the longest side of the triangle (AC in this example) and then its centre (D in this example). We then calculate the point symmetric to B about D (E in this example); this is the fourth point we were looking for, and we can now calculate the coordinates of the red rectangle quite easily thanks to java.awt.Rectangle’s add function (a function that adds points to a Rectangle object, growing it into the smallest rectangle that includes all the points passed to it).
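Here is a minimal sketch of that computation in Java, assuming the three points come in as java.awt.Point objects; the helper name and structure are mine, not the project’s actual code:

```java
import java.awt.Point;
import java.awt.Rectangle;

public class FlagRectangle {

    // Sketch of the fourth-corner computation described above.
    static Rectangle boundingRectangle(Point a, Point b, Point c) {
        // Reorder so that (a, c) is the longest side of the triangle,
        // which leaves b as the "corner" point of the QR code.
        if (a.distanceSq(b) > a.distanceSq(c) && a.distanceSq(b) > b.distanceSq(c)) {
            Point tmp = c; c = b; b = tmp;          // longest side was AB
        } else if (b.distanceSq(c) > a.distanceSq(c)) {
            Point tmp = a; a = b; b = tmp;          // longest side was BC
        }

        // D: centre of the longest side AC.
        double dx = (a.x + c.x) / 2.0;
        double dy = (a.y + c.y) / 2.0;

        // E: point symmetric to B about D, i.e. the missing fourth corner.
        Point e = new Point((int) Math.round(2 * dx - b.x),
                            (int) Math.round(2 * dy - b.y));

        // Smallest rectangle containing all four corners, built with
        // java.awt.Rectangle's add(Point) exactly as described above.
        Rectangle r = new Rectangle(a);
        r.add(b);
        r.add(c);
        r.add(e);
        return r;
    }
}
```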

Now the flag processing is more accurate and works for any inclination of the flag. Another good thing is that it now outputs a java.awt.Rectangle instead of an array of points.
After doing all that on the flag processing, I thought it could be interesting if the coloured ball processing could also output a Rectangle object instead of ukdave’s BoundingBox object. BoundingBox and java.awt.Rectangle being quite similar, I quickly made the modifications in the coloured ball processing code, so now everything is more standard: both image processing stages output their coordinates as a java.awt.Rectangle object.
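For illustration, the conversion itself is essentially a one-liner; I am assuming here that the bounding box exposes min/max coordinates, which may not match ukdave’s actual accessor names:

```java
import java.awt.Rectangle;

// Illustrative only: ukdave's BoundingBox accessors may be named differently.
static Rectangle toRectangle(int minX, int minY, int maxX, int maxY) {
    return new Rectangle(minX, minY, maxX - minX, maxY - minY);
}
```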

NXT-Spy – Pause button working :)

After 4 hours of testing, and of horrible-to-debug network channel deadlocks, I have finally managed to create my pause button, which allows us to restart the experiment by restarting only the PC client.

I did that because I had trouble closing my network channels on the phone, so every time I wanted to try my new control algorithm, I had to restart the robot and restart the phone twice!

This step will be crucial for all the upcoming in-depth development of the control algorithm because the phone and the robot won’t have to be restarted any more. The two devices stay in stand-by mode whenever I press the pause button, allowing me to change my algorithm and re-launch the PC client without having to touch the phone or the robot.
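To give an idea of the stand-by mechanism, here is a minimal JCSP-style sketch, assuming a dedicated control channel carrying alternating pause/resume tokens; the class and channel names are illustrative, not the project’s actual code:

```java
import org.jcsp.lang.*;

// A worker that polls a control channel between work steps: on a pause
// token it blocks until the resume token arrives, so the device simply
// sits on a channel read while the PC client is restarted.
public class PausableProcess implements CSProcess {
    private final AltingChannelInput control; // pause/resume tokens
    private final CSProcess step;             // one unit of real work

    public PausableProcess(AltingChannelInput control, CSProcess step) {
        this.control = control;
        this.step = step;
    }

    public void run() {
        while (true) {
            if (control.pending()) {  // non-blocking check for a token
                control.read();       // consume the pause token
                control.read();       // stand by until the resume token
            }
            step.run();               // do one unit of work
        }
    }
}
```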

Supervisor’s comment:

You should have had a better network diagram to start with!!!
Even so, this is a real achievement: stopping part of a distributed network, changing one part and then resuming it asynchronously is really hard – well done.
The justification is absolutely correct, and the fact that you built pause before attempting the actual control algorithm is so much more impressive, showing great forethought!!
He informed me that he has an algorithm working but it required better isolation of ball colours and signs.  The obvious thing would be a sheet, probably white, which has walls to reduce the interference from other objects in the environment.
Over the holidays he intends to build the arm that ‘captures’ the balls but it is a holiday so he may be unable due to external circumstances!!

NXT-Spy – First diagram for the robot control algorithm

Here is the first diagram I have drawn to describe how the AI that will drive the robot will work.

By drawing this diagram, I have also thought about a solution to a problem highlighted by Kevin, my second marker:
how to stop the robot from chasing the balls it has already sorted instead of the balls that still need to be sorted.
The solution I have found is to add a third flag, which we will call the “home” flag, placed at the robot’s starting position. Thanks to this flag, the robot will be able to go back to its initial position every time it has sorted a ball.
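As a rough illustration, the sorting cycle with the home flag could look like the following state sequence; the state names are mine, not taken from the project code:

```java
// Hypothetical sketch of the sorting cycle including the "home" flag.
public class SortingCycle {
    enum State { SEEK_BALL, DELIVER_BALL, SEEK_HOME, RETURN_HOME }

    static State next(State s) {
        switch (s) {
            case SEEK_BALL:    return State.DELIVER_BALL; // ball found and grabbed
            case DELIVER_BALL: return State.SEEK_HOME;    // ball left at its flag
            case SEEK_HOME:    return State.RETURN_HOME;  // home flag spotted
            default:           return State.SEEK_BALL;    // back at start: repeat
        }
    }
}
```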

Supervisor’s comment:

The plan is very sensible and is realistic with respect to what has been done and what needs to be done.
The control algorithm also seems to capture what needs to be done.  I like the idea of the Home base and the way in which it can resolve problems with previously sorted balls.
I think a continuous 5-degree turn in one direction might be better replaced by a wiggle, and this could provide a comparison of techniques which could be measured in the evaluation.
A user interface to remember the hue, saturation and intensity intervals for the different objects (balls and flags) has been constructed and is really cool.  It has taken a lot of work because such interfaces always do!  The fact that it is a JCSP-awt application is even nicer!!  It might be worth a little aside in the report on this aspect.  He also wrote an active Swing slider control!!!

NXT-Spy – Let’s put all the image processing processes together

To be able to write my pseudo-AI that will control the robot automatically based on what it sees, I needed to put together all the filters I have written during the last two weeks. So I decided first to design an architecture suitable for the project.

To read this diagram, you have to start from the bottom and work towards the top. First, we decode the YUV image to RGB thanks to the convert YUV process. The image is then sent to the UI and to a demultiplexer that dispatches the image to the processes that need it.

The demultiplexer reads the current state of the program (seeking blue ball, seeking red flag, idle, etc.) and sends the image to the processes that need it. For example, if the state is “find a ball”, the demultiplexer will send the image to the red ball processor and to the blue ball processor. This avoids analysing images that we won’t need.
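Here is a sketch of the demultiplexer idea in JCSP terms; the state strings and channel layout are assumptions for illustration, not the project’s exact code:

```java
import org.jcsp.lang.*;

// Forwards each frame only to the processors the current state needs.
public class ImageDemux implements CSProcess {
    private final AltingChannelInput stateIn;   // current program state
    private final ChannelInput imageIn;         // decoded RGB frames
    private final ChannelOutput redBall, blueBall, redFlag, blueFlag;

    public ImageDemux(AltingChannelInput stateIn, ChannelInput imageIn,
                      ChannelOutput redBall, ChannelOutput blueBall,
                      ChannelOutput redFlag, ChannelOutput blueFlag) {
        this.stateIn = stateIn;
        this.imageIn = imageIn;
        this.redBall = redBall;   this.blueBall = blueBall;
        this.redFlag = redFlag;   this.blueFlag = blueFlag;
    }

    public void run() {
        String state = "idle";
        while (true) {
            while (stateIn.pending()) {          // pick up any state change
                state = (String) stateIn.read();
            }
            Object frame = imageIn.read();
            // Only the relevant processors receive the frame, so nothing
            // wastes time analysing images nobody needs.
            if ("find a ball".equals(state)) {
                redBall.write(frame);
                blueBall.write(frame);
            } else if ("seek red flag".equals(state)) {
                redFlag.write(frame);
            } else if ("seek blue flag".equals(state)) {
                blueFlag.write(frame);
            }                                    // "idle": drop the frame
        }
    }
}
```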

The ball image processors have a “configure” input that allows the user to configure the HSI intervals. It is not needed for the flag processors because they work with black and white images that don’t need any human calibration.
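For illustration, the per-pixel test a ball processor might apply once the intervals are configured could look like this; the names and the {hue, saturation, intensity} array layout are assumptions, not the project’s actual configuration format:

```java
// True when a pixel's HSI values fall inside the user-configured intervals.
static boolean insideIntervals(float hue, float sat, float intensity,
                               float[] min, float[] max) {
    return min[0] <= hue && hue <= max[0]
        && min[1] <= sat && sat <= max[1]
        && min[2] <= intensity && intensity <= max[2];
}
```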

After the image is processed, it is sent to the UI (so the user has feedback when calibrating the ball processors). The processors also output the coordinates of the object they have identified as the one they are looking for (this will allow the robot to keep the object it is looking for centred so that it won’t lose it). The coordinate buffers are only there to avoid anything getting stuck if the coordinates are not read quickly.
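One way to get that behaviour in JCSP is an overwriting buffered channel, which never blocks the writer; this is a sketch of the idea, not necessarily how the project builds its buffers:

```java
import org.jcsp.lang.*;
import org.jcsp.util.OverWriteOldestBuffer;

public class CoordinateChannelFactory {
    // If the controller is slow to read, stale coordinates are simply
    // overwritten by fresher ones instead of stalling the image processor.
    public static One2OneChannel createCoordinateChannel() {
        return Channel.one2one(new OverWriteOldestBuffer(1));
    }
}
```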

And finally, on top, there is the controller, which will have access to all the information it needs to run a state-based AI (the coordinates of the objects it needs and direct access to the robot controls).
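Since the controller is not written yet, here is only a rough sketch of what its inner loop could look like for centring a target; the constants and command strings are pure assumptions:

```java
import java.awt.Rectangle;
import org.jcsp.lang.*;

// Hypothetical controller skeleton: steer so the target stays centred.
public class Controller implements CSProcess {
    private static final int FRAME_WIDTH = 320;   // assumed camera width
    private static final int DEAD_ZONE = 20;      // tolerated error, in pixels

    private final ChannelInput targetCoords;      // Rectangle from a processor
    private final ChannelOutput robot;            // drive commands

    public Controller(ChannelInput targetCoords, ChannelOutput robot) {
        this.targetCoords = targetCoords;
        this.robot = robot;
    }

    public void run() {
        while (true) {
            Rectangle target = (Rectangle) targetCoords.read();
            int error = (int) target.getCenterX() - FRAME_WIDTH / 2;
            if (error > DEAD_ZONE)       robot.write("turn right");
            else if (error < -DEAD_ZONE) robot.write("turn left");
            else                         robot.write("forward");
        }
    }
}
```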

After I did this analysis, I coded most of it, and it works so far. It is not completely tested, as neither the controller nor the configuration UI is coded yet. For the moment I have a total of 10 classes, 15 processes and 29 channels.

So as you guessed, the next step is to code the controller’s algorithm that will drive the robot where it needs to go.

Supervisor’s comment:

OK so the architecture is designed and coding is progressing well.
Comment: It’s really easy because I am using processes to undertake the actions!