Today, I realised something: at the moment, the Controller is analysing the coordinates provided by the different coordinate-buffer processes, so the controller can end up analysing the same image several times as long as no new image has been processed.
But because there is significant lag in receiving the images, this might not be the appropriate solution, as it is not precise enough.
So a better solution might be to make all the coordinate buffers block as long as no new coordinates have arrived. That way, the controller would only perform an action to control the robot when an image processor outputs a new coordinate.
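A minimal sketch of what such a blocking buffer could look like (the class and method names here are hypothetical, not the project's actual code; only the java.util.concurrent and java.awt types are real):

```java
import java.awt.Rectangle;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: each image-processor thread publishes its latest
// coordinates into a one-slot blocking queue; the controller blocks on
// take() so it only acts when a genuinely new coordinate has arrived.
public class CoordinatesBuffer {
    private final BlockingQueue<Rectangle> queue = new ArrayBlockingQueue<>(1);

    // Called by an image-processor thread when a new frame has been analysed.
    // Clearing first keeps only the most recent coordinates (fine for a
    // single producer; a real implementation would need to handle races).
    public void publish(Rectangle coords) {
        queue.clear();
        queue.offer(coords);
    }

    // Called by the controller: blocks until new coordinates arrive.
    public Rectangle awaitNext() throws InterruptedException {
        return queue.take();
    }
}
```

With this design the controller's main loop simply calls `awaitNext()` and acts once per fresh coordinate, instead of re-reading stale ones.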
I will try this out next Wednesday, when I have access to the robot's environment again.
So this has been a big week! As usual with things that involve robots, you go two steps forward and one back.
The fact that the android/lejosNXT problem has been solved for Bluetooth was very fortunate! That it worked first time was just amazing. Send a note of thanks to the person who posted it!!
I am not surprised that the image processing is causing a problem, given the frame rate that is possible with this version of the phone.
The robot behaviour therefore has to take account of this limitation. That is: wiggle slowly to find the target, processing each image until a target is found; then move more quickly towards it. The robot will have to move more slowly – so what!!
The robot needs some engineering to counterbalance the offset weight of the phone.
Today, I was able to test the robot's controller for the first time in an ideal environment. The experiment cannot really be called a success, as I ran into a very troubling issue: even under the best conditions (for this experiment, I used an ad-hoc Wi-Fi connection between the phone and the PC instead of going through a router), the image rate is still too low to control the robot easily.
To solve this problem with the current configuration, I think the best option is either to predict the position of the ball based on the speed of the robot, or simply to slow the robot down to a minimum.
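As a rough illustration of the first idea, here is a dead-reckoning sketch (all names, units and assumptions are mine, not working project code): while an image is in flight, the robot keeps moving, so the real distance to the ball has shrunk by speed × lag.

```java
// Hypothetical sketch of lag compensation: predict where the ball is now,
// given where it was detected in an image that is lagSeconds old, assuming
// the robot moves in a straight line towards the ball at constant speed.
public class LagCompensator {
    public static double predictDistance(double detectedDistanceMm,
                                         double robotSpeedMmPerS,
                                         double lagSeconds) {
        // The robot kept moving while the image was in transit, so the
        // actual remaining distance is smaller than the detected one.
        return detectedDistanceMm - robotSpeedMmPerS * lagSeconds;
    }
}
```

For example, a ball detected 500 mm away in an image 1.5 s old, with the robot driving at 100 mm/s, would actually be about 350 mm away by the time the controller reacts.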
A second, minor problem is that, as predicted, the robot is too heavy on the phone's side; the obvious solution is to put a counterweight on the other side.
But it is not all bad news: the automatic control of the robotic arm works like a charm; as soon as the ball touches the arm, it lifts it.
While I was testing the robot in its environment, the Bluetooth connection between the phone and the robot stopped working for a reason I still can't figure out. To solve that, I decided to search the internet for anyone who had worked with LeJOS NXJ from an Android phone over Bluetooth, and I found the perfect solution on a forum: NXTCommAndroid [source]. This developer has actually ported the entire comm package from the PC LeJOS API to Android. I implemented it and it worked perfectly on the first try! I also noticed that he posted it on the 9th of January, so less than a month ago.
Even better, this port may allow me in the future to control the robot entirely from the phone, avoiding any code other than the Bluetooth receiver on the robot…
In order to get the coordinates of the flag from the flag-processing library (zxing), we are using a function called getResultPoints() on the result object it returns. The problem is that this function only outputs three points; until now I assumed they were the top-left, top-right and bottom-left coordinates of the flag, but I have just figured out that it is more complicated than that.
In fact, the flag processing only provides us with the coordinates of the "target-style" finder points found on any QR code, in a random order. To get the flag's position precisely, we first need to find the coordinates of the fourth corner of the flag, and then compute the smallest rectangle in which all the points fit, which corresponds to the flag's position.
That's why today I added to the flag-processing process a function that calculates and draws the two rectangles involved, as shown in the image below:
Assuming we have a triangle ABC made from the three QR points, placed randomly (their order can change if the flag is rotated), the first step is to find the longest side of the triangle (AC in this example). We then find its centre (D in this example), and calculate the point symmetric to B through D (E in this example): this is the fourth point we were looking for. We can now calculate the coordinates of the red rectangle quite easily thanks to the java.awt.Rectangle add function (which adds points to a Rectangle object, growing it into the smallest rectangle that includes all the points passed to it).
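The construction above could be sketched roughly like this (the class and method names are hypothetical, not the project's actual code; only java.awt.Point and java.awt.Rectangle are real APIs):

```java
import java.awt.Point;
import java.awt.Rectangle;

public class FlagRectangle {
    // Given the three QR finder-pattern points in any order, compute the
    // smallest rectangle containing all four corners of the flag.
    public static Rectangle flagBounds(Point a, Point b, Point c) {
        // Find the longest side of triangle ABC; the vertex opposite it
        // is the one that must be mirrored to get the fourth corner.
        double ab = a.distance(b), bc = b.distance(c), ca = c.distance(a);
        Point p, q, opposite; // p-q is the longest side
        if (ab >= bc && ab >= ca)      { p = a; q = b; opposite = c; }
        else if (bc >= ab && bc >= ca) { p = b; q = c; opposite = a; }
        else                           { p = c; q = a; opposite = b; }
        // Midpoint D of the longest side.
        double dx = (p.x + q.x) / 2.0, dy = (p.y + q.y) / 2.0;
        // Fourth corner E: the point symmetric to 'opposite' through D.
        Point e = new Point((int) Math.round(2 * dx - opposite.x),
                            (int) Math.round(2 * dy - opposite.y));
        // Grow a rectangle to the smallest one containing all four points.
        Rectangle r = new Rectangle(p);
        r.add(q);
        r.add(opposite);
        r.add(e);
        return r;
    }
}
```

For an axis-aligned flag with finder points at (0,0), (10,0) and (0,10), the longest side runs between (10,0) and (0,10), its midpoint is (5,5), the mirrored fourth corner is (10,10), and the resulting rectangle is the 10×10 square at the origin.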
Now the flag processing is more accurate and works for any inclination of the flag; another good thing is that it now outputs a java.awt.Rectangle instead of an array of points.
After doing all that on the flag processing, I thought it could be interesting if the coloured-ball processing also output a Rectangle object instead of ukdave's BoundingBox object. BoundingBox and java.awt.Rectangle being quite similar, I quickly made the modifications in the coloured-ball processing code, so now everything is more standard: both image-processing coordinate outputs are java.awt.Rectangle objects.
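The conversion itself is trivial. Assuming the bounding box exposes its min/max corner coordinates (an assumption on my part; I have not reproduced ukdave's actual fields here), it could look like:

```java
import java.awt.Rectangle;

// Hypothetical adapter: turn min/max corner coordinates (as a bounding-box
// class might expose them) into the standard java.awt.Rectangle, so both
// image processors share one output type.
public class BoundingBoxAdapter {
    public static Rectangle toRectangle(int minX, int minY, int maxX, int maxY) {
        // Rectangle stores an origin plus a width and height.
        return new Rectangle(minX, minY, maxX - minX, maxY - minY);
    }
}
```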
This week, I have written three pages on the robot engineering and two pages on image recognition for the dissertation, and finished coding the ball-sorting controller. I now have to test it in its environment…
OK, he was on the 48-hour "let's build a game"!!!!!!!!! event, which he seems to have done quite well.
So now it is back to the project!!