As I already pointed out, the robot has trouble recognizing the 2D bar-codes while it is moving, so I had the idea of associating a coloured object (a piece of green paper) with every bar-code, and I added the necessary processing to the computer application so it can recognize the green colour. To make a long story short, when the robot wants to find the blue zone, it looks for the “B” bar-code; as soon as it finds it, it follows the green coloured object right next to it.
This way, the robot doesn’t have to recognize a bar-code correctly in every frame, because the bar-code is only used once, for identification. And since, after that identification, the robot follows a coloured object, the whole process is much more efficient (I demonstrated earlier that the colour recognition is very stable and efficient even on blurry images).
The problem is that even this method was not reliable enough: it still happened that the robot failed to recognize the bar-code in the first place, and so it never started following the green object either.
So I decided to change a key element of the problem, the bar-code recognition itself, by removing the bar-codes from the equation completely and replacing them with pieces of paper of different colours.
Indeed, if my colour recognition algorithm can perfectly differentiate red from blue, it should also be able to differentiate green, yellow and violet (three colours whose hues are different enough).
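To make the hue argument concrete, here is a minimal sketch of how pixels could be sorted into those colour families by hue, using only the standard java.awt.Color.RGBtoHSB conversion. The hue bands and the saturation/brightness cut-offs are assumptions for illustration, not the thresholds actually used in my application, which would need tuning against the real camera and lighting.

```java
import java.awt.Color;

public class HueClassifier {
    // Classify a packed 0xRRGGBB pixel into a colour family by hue.
    // The band limits below are hypothetical example values.
    public static String classify(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        float hue = hsb[0] * 360f;   // hue in degrees, 0..360
        float sat = hsb[1], bri = hsb[2];
        if (sat < 0.4f || bri < 0.3f) return "NONE"; // too washed out or too dark
        if (hue < 15 || hue >= 345)  return "RED";
        if (hue >= 45 && hue < 75)   return "YELLOW";
        if (hue >= 90 && hue < 150)  return "GREEN";
        if (hue >= 200 && hue < 250) return "BLUE";
        if (hue >= 260 && hue < 310) return "VIOLET";
        return "NONE";
    }
}
```

The point is that red, yellow, green, blue and violet occupy well-separated hue bands, so a per-pixel test like this stays stable even when motion blur smears the image.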
I spent a couple of hours replacing all the flag recognition with coloured-zone recognition and adapting the UI to configure the zones properly. The new set-up still needs to be tested, but I am very confident: I already tried it outside the test environment and it looks promising. Because I won’t be at uni tomorrow, though, the actual testing will probably wait until Wednesday.
Yet another clever solution to a nagging problem; well done.
We discussed his poster design. I made a few suggestions.
In order to get the coordinates of the flag from the flag processing library (zxing), we use a function called getResultPoints() on the Result object it outputs. The problem is that this function only returns three points. Until now, I assumed these were the top-left, top-right and bottom-left corners of the flag, but I just figured out that it is more complicated than that.
In fact, the flag processing only gives us the coordinates of the three “target style” points found on any QR code, and in an arbitrary order. To get the flag’s position precisely, we first need to find the coordinates of the fourth corner of the flag, and then compute the smallest rectangle in which all the points fit: that rectangle is the flag’s position.
That’s why today I added to the flag processing process a function that calculates and draws the two rectangles involved, as shown in the image below:
Assuming we have a triangle ABC made from the three QR points in arbitrary order (their order can change if the flag is rotated), the first step is to find the longest side of the triangle (AC in this example) and then its centre (D in this example). We then compute the reflection of B through D (E in this example): this is the fourth point we were looking for, and we can now calculate the coordinates of the red rectangle quite easily thanks to the java.awt.Rectangle add function (a function that adds points to a Rectangle object and returns the smallest rectangle that includes all the points passed to it).
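The steps above can be sketched as follows. This is not the exact code from my flag processing process, just a self-contained illustration of the geometry: pick the longest side, reflect the opposite point through its midpoint, and let java.awt.Rectangle.add grow the bounding rectangle.

```java
import java.awt.Point;
import java.awt.Rectangle;

public class FlagBounds {
    // Given the three QR "target" points in any order, reconstruct the
    // fourth corner and return the smallest enclosing rectangle.
    public static Rectangle boundingRectangle(Point p1, Point p2, Point p3) {
        // Step 1: find the longest side of the triangle; call its
        // endpoints A and C, and the remaining point B.
        Point a = p1, c = p2, b = p3;
        double d12 = p1.distance(p2), d13 = p1.distance(p3), d23 = p2.distance(p3);
        if (d13 >= d12 && d13 >= d23)      { a = p1; c = p3; b = p2; }
        else if (d23 >= d12 && d23 >= d13) { a = p2; c = p3; b = p1; }
        // Step 2: D is the midpoint of AC.
        double dx = (a.x + c.x) / 2.0, dy = (a.y + c.y) / 2.0;
        // Step 3: E is the reflection of B through D, i.e. E = 2D - B.
        Point e = new Point((int) Math.round(2 * dx - b.x),
                            (int) Math.round(2 * dy - b.y));
        // Step 4: the smallest rectangle containing A, B, C and E.
        Rectangle r = new Rectangle(a); // zero-size rectangle at A
        r.add(b);
        r.add(c);
        r.add(e);
        return r;
    }
}
```

For example, with the three points at (0,0), (10,0) and (0,10), the longest side runs from (10,0) to (0,10), its midpoint is (5,5), the reflected fourth corner is (10,10), and the resulting rectangle is the 10×10 square at the origin.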
Now the flag processing is more accurate and works for any inclination of the flag. Another good thing is that it now outputs a java.awt.Rectangle instead of an array of points.
After doing all that on the flag processing, I thought it could be interesting if the coloured ball processing also output a Rectangle object instead of ukdave’s BoundingBox object. BoundingBox and java.awt.Rectangle being quite similar, I quickly made the modifications in the coloured ball processing code, so now everything is more standard: both image processing stages output their coordinates as a java.awt.Rectangle object.
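The conversion itself is a one-liner. Since I am not reproducing ukdave’s actual BoundingBox class here, the stand-in below (with assumed corner fields x1, y1, x2, y2) is purely hypothetical; the idea is just that a corner-based box maps directly onto Rectangle’s (x, y, width, height) representation.

```java
import java.awt.Rectangle;

public class BoxAdapter {
    // Hypothetical stand-in for ukdave's BoundingBox; the real class
    // may use different field names.
    static class BoundingBox {
        int x1, y1, x2, y2; // top-left and bottom-right corners
        BoundingBox(int x1, int y1, int x2, int y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }
    }

    // Convert a corner-based box into a java.awt.Rectangle.
    static Rectangle toRectangle(BoundingBox b) {
        return new Rectangle(b.x1, b.y1, b.x2 - b.x1, b.y2 - b.y1);
    }
}
```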
For the robot to recognize the drop zones (the places where it will drop the balls), I thought about putting little flags on them with a black and white pattern (to avoid any interference with the coloured ball recognition).
My first idea was to work with 2D bar codes, using ZXing [source], a widely used open-source Java library. It can decode these two 2D bar code formats:
So I tried to encode the letter B and the letter R (for blue and red) in each of these formats.
QR code :
These are the simplest 2D codes I could generate, and I am not really happy with their size and complexity. Even the simplest of them (the Data Matrix) produces an image with a 10×10 grid of modules.
So I am still trying to find a simpler 2D decodable symbol.
Demonstrated that the QR code is probably the best solution, because there are Java classes that detect it!
Arnaud was worried about the size of the flags relative to the space the balls work in. I suggested a space of no more than 1.5 m square, which will be feasible. This will require quite large flags, and perhaps mounting the phone vertically, lower down on the robot.
We also discussed his process architecture for ball recognition and robot control. It has an implicit deadlock due to its network BUT he can argue that it will not deadlock by presenting the appropriate case and the order in which inputs to the processes happen.
As usual he is making really good progress; very well done.
The task for next week is to see how well the system can detect QR codes, at what distance and size and orientation of flag.