While I was testing the robot in its environment, the Bluetooth connection between the phone and the robot stopped working for a reason I still can’t figure out. To solve that, I searched online for anyone who had worked with LeJOS NXJ from an Android phone over Bluetooth, and I found the perfect solution on a forum: NXTCommAndroid [source]. This developer ported the entire comm package from the PC LeJOS API to Android. I implemented it and it worked perfectly on the first try! I also noticed that he posted it on the 9th of January, less than a month ago.
Better still, this port may allow me in the future to control the robot entirely from the phone and keep nothing but the Bluetooth receiver code on the robot…
In order to get the coordinates of the flag from the flag-processing library (zxing), we call a function named getResultPoints() on the result object it outputs. The problem is that this function only returns three points. Until now, I assumed these were the top-left, top-right and bottom-left corners of the flag, but I have just realised that it is more complicated than that.
In fact, the flag processing only gives us the coordinates of the “target style” points found on any QR code, and in a random order. To get the flag’s position precisely, we first need to find the coordinate of the fourth corner of the flag, and then compute the smallest rectangle in which all the points fit: that rectangle is the flag’s position.
That’s why today I added to the flag-processing process a function that calculates and draws the two rectangles involved, as shown in the image below:
Assume we have a triangle ABC formed by the three QR points, placed in a random order (their order can change if we rotate the flag). The first step is to find the longest side of the triangle (AC in this example); we then find its centre (D in this example). Next we compute the point symmetric to B about D (E in this example): this is the fourth point we were looking for, and we can now calculate the coordinates of the red rectangle quite easily thanks to java.awt.Rectangle’s add function (which adds points to a Rectangle object and yields the smallest rectangle that includes all the points passed to it).
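The steps above can be sketched in Java as follows (the class and method names are my own for illustration; only java.awt.Point/Rectangle are real API):

```java
import java.awt.Point;
import java.awt.Rectangle;

public class FlagGeometry {
    // Given the three QR "target" points (in any order), find the missing
    // fourth corner and the smallest rectangle that contains all four.
    public static Rectangle flagBounds(Point a, Point b, Point c) {
        // Step 1: the longest side of triangle ABC joins the two opposite
        // corners of the flag; the remaining point is the adjacent corner.
        long ab = dist2(a, b), bc = dist2(b, c), ca = dist2(c, a);
        Point p1, p2, opposite;
        if (ab >= bc && ab >= ca)      { p1 = a; p2 = b; opposite = c; }
        else if (bc >= ab && bc >= ca) { p1 = b; p2 = c; opposite = a; }
        else                           { p1 = c; p2 = a; opposite = b; }
        // Step 2: reflect the remaining point through the midpoint D of the
        // longest side: E = 2*D - opposite = p1 + p2 - opposite.
        // (Working with p1 + p2 avoids dividing the midpoint by two.)
        Point fourth = new Point(p1.x + p2.x - opposite.x,
                                 p1.y + p2.y - opposite.y);
        // Step 3: grow a rectangle over all four corners with Rectangle.add.
        Rectangle r = new Rectangle(p1);
        r.add(p2);
        r.add(opposite);
        r.add(fourth);
        return r;
    }

    private static long dist2(Point p, Point q) {
        long dx = p.x - q.x, dy = p.y - q.y;
        return dx * dx + dy * dy;       // squared distance; order-preserving
    }
}
```

Because the reflection is computed from the longest side, the result does not depend on the order in which zxing delivers the three points.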
The flag processing is now more accurate and works for any inclination of the flag. Another good thing is that it now outputs a java.awt.Rectangle instead of an array of points.
After doing all that on the flag processing, I thought it would be interesting if the coloured-ball processing could also output a Rectangle object instead of ukdave’s BoundingBox object. BoundingBox and java.awt.Rectangle being quite similar, I quickly made the modifications to the coloured-ball processing code, and now everything is more standard: both image-processing stages output their coordinates as a java.awt.Rectangle object.
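The conversion is trivial for any box described by two opposite corners. A minimal sketch, assuming a BoundingBox shape of that kind (the real field names in ukdave’s class may differ):

```java
import java.awt.Rectangle;

public class BoxConversion {
    // Hypothetical stand-in for ukdave's BoundingBox: two opposite corners.
    public static class BoundingBox {
        public final int x1, y1, x2, y2;
        public BoundingBox(int x1, int y1, int x2, int y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }
    }

    // Normalise to the standard java.awt.Rectangle used by the flag
    // processing, regardless of which corner was stored first.
    public static Rectangle toRectangle(BoundingBox b) {
        int x = Math.min(b.x1, b.x2);
        int y = Math.min(b.y1, b.y2);
        return new Rectangle(x, y, Math.abs(b.x2 - b.x1), Math.abs(b.y2 - b.y1));
    }
}
```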
After 4 hours of testing, and of horrible-to-debug network channel deadlocks, I have finally managed to create my pause button, which lets us restart the experiment by restarting only the PC client.
I did that because I had trouble closing my net channels on the phone, so every time I wanted to try my new control algorithm I had to restart the robot and restart the phone twice!
This step will be crucial for all the upcoming in-depth development of the control algorithm, because the phone and the robot will no longer have to be restarted. The two devices stay in stand-by mode whenever I press the pause button, letting me change my algorithm and relaunch the PC client without touching the phone or the robot.
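The stand-by idea can be sketched as a session handler that drains control messages until a “pause” arrives, after which the device simply waits for the next connection instead of shutting down. This is an illustrative sketch, not the project’s actual channel code; the method name and the literal "pause" message are assumptions:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public class StandbySession {
    // Reads control messages until a "pause" arrives (or the stream ends),
    // returning how many ordinary messages were handled. An outer server
    // loop would call this once per connection, then go back to accept()
    // and wait for the relaunched PC client, so the device never restarts.
    public static int handleSession(Reader source) throws IOException {
        BufferedReader in = new BufferedReader(source);
        int handled = 0;
        String line;
        while ((line = in.readLine()) != null) {
            if (line.equals("pause")) break;   // end session, keep device alive
            handled++;                         // ...dispatch the message here...
        }
        return handled;
    }
}
```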
You should have had a better network diagram to start with!!!
Even so, this is a real achievement: stopping part of a distributed network, changing one part, and then resuming asynchronously is really hard – well done.
The justification is absolutely correct, and the fact that you built pause before attempting the actual control algorithm is so much more impressive, showing great forethought!!
He informed me that he has an algorithm working, but it requires better isolation of the ball colours and signs. The obvious solution would be a sheet, probably white, with walls to reduce the interference from other objects in the environment.
Over the holidays he intends to build the arm that ‘captures’ the balls but it is a holiday so he may be unable due to external circumstances!!
A lot of quality work this week:
– First, because I was fed up with having to close the application on all three devices every time I wanted to restart the experiment, I added a “stop” button that spreads a “stop” message to the two remote devices (phone and robot), automatically stopping the application running on them.
– Then, I removed the “start the server” button on the phone because it was no longer very useful: the server can start automatically when we start the phone’s application.
– I finally revised the architecture of the program running on the PC:
* First of all, I merged all the flag-processing processes into one: there is no point analysing the same image with the same algorithm and the same parameters twice. Instead, I added three outputs to the new flag-processing process (one for each flag: red, blue and home).
* I also removed the multiplexer. It was there to avoid processing images we didn’t need, but it is just too complicated to manage, and the three image-processing processes are not that heavy to run. Removing it also greatly simplified the diagram by eliminating the potential deadlock visible in the first architecture.
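The merged flag process boils down to analysing each frame once and routing the decoded result to the output matching its flag. A minimal sketch of that routing, with listener shapes of my own invention rather than the project’s actual JCSP channels:

```java
import java.awt.Rectangle;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class FlagDispatcher {
    // One output per flag id ("red", "blue", "home"); the Consumer stands in
    // for whatever channel or process consumes the coordinates downstream.
    private final Map<String, Consumer<Rectangle>> outputs = new HashMap<>();

    public void register(String flag, Consumer<Rectangle> out) {
        outputs.put(flag, out);
    }

    // Called once per analysed frame: the image was processed a single time,
    // and only the matching output receives the flag's bounding rectangle.
    public void dispatch(String flag, Rectangle bounds) {
        Consumer<Rectangle> out = outputs.get(flag);
        if (out != null) out.accept(bounds);
    }
}
```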
During this week, I have also started writing the state-based “artificial intelligence” that will drive the robot. I have only written the ball-following algorithm so far; it is still not perfect, but I already have some interesting results…
This is a much improved architecture that follows client server principles more closely. It makes good sense.
Pleased to see that you are already starting to find and seek a ball.
During the week we discussed how he might “capture” a ball so that it can be moved around more easily. This will have to wait until Christmas when he returns home and has his full Lego kit available to play with.
More coding is the name of the game next week.
Here is the first diagram I have made to describe how the AI driving the robot will work.
By doing this diagram, I have also thought about a solution for a problem highlighted by Kevin, my second marker:
How to keep the robot from chasing the balls it has already sorted rather than the balls that still need to be sorted.
The solution I have found is to create a third flag, which we will call the “home” flag, placed at the robot’s starting position. Thanks to this flag, the robot will be able to go back to its initial position every time it has sorted a ball.
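The home-flag idea slots naturally into the state-based design: after delivering a ball, the robot transitions to a “go home” state before searching again. A tiny sketch of that transition logic, with state names that are my assumptions rather than the project’s actual code:

```java
public class RobotStates {
    // Illustrative states for the state-based control described above.
    public enum State { SEARCH_BALL, FOLLOW_BALL, GO_TO_FLAG, GO_HOME }

    // After sorting a ball at a flag, the robot heads for the "home" flag,
    // so when it resumes searching it starts from its initial position and
    // never re-chases balls it has already delivered.
    public static State next(State current, boolean ballSorted) {
        switch (current) {
            case GO_TO_FLAG: return ballSorted ? State.GO_HOME : State.GO_TO_FLAG;
            case GO_HOME:    return State.SEARCH_BALL; // back at start: search again
            default:         return current;           // other transitions elided
        }
    }
}
```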
The plan is very sensible and is realistic with respect to what has been done and what needs to be done.
The control algorithm also seems to capture what needs to be done. I like the idea of the Home base and the way in which it can resolve problems with previously sorted balls.
I think a continuous 5 degree turn in one direction might be better replaced by a wiggle and this could provide a comparison of techniques which could be measured in the evaluation.
A user interface to remember the hue, saturation and intensity intervals for the different objects, balls and flags, has been constructed and is really cool. It has taken a lot of work, because such interfaces always do! The fact that it is a JCSP-awt application is even nicer!! It might be worth a little aside in the report on this aspect. He also wrote an active Swing slider control!!!