NXT-Spy – A stable connection between the phone and the robot (finally!)

While I was testing the robot in its environment, the Bluetooth connection between the phone and the robot stopped working for a reason I still can’t figure out. To solve this, I decided to search online for anyone who had worked with LeJOS NXJ from an Android phone over Bluetooth, and I found the perfect solution on a forum: NXTCommAndroid [source]. Its developer has ported the entire comm package from the PC LeJOS API to Android. I implemented it and it worked perfectly on the first try! I also noticed that he posted it on the 9th of January, less than a month ago.

Better still, this port may allow me in the future to control the robot completely from the phone and avoid having any code other than the Bluetooth receiver on the robot…

NXT-Spy – Pause button working :)

After 4 hours of testing, and of horrible-to-debug network channel deadlocks, I have finally managed to create my pause button, which allows us to restart the experiment by restarting only the PC client.

I did this because I had trouble closing my network channels on the phone, so every time I wanted to try a new control algorithm, I had to restart the robot and restart the phone twice!

This step will be crucial for all the upcoming in-depth development of the control algorithm, because the phone and the robot won’t have to be restarted any more. Both devices will go into stand-by mode whenever I press the pause button, allowing me to change my algorithm and re-launch the PC client without having to touch the phone or the robot.
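The stand-by behaviour could be sketched as a tiny state machine on each remote device. This is only an illustrative sketch under my own naming assumptions (`RemoteDevice`, `State`, the "pause"/"resume" commands are not the project's actual identifiers); the real implementation runs over the network channels described above.

```java
// Hypothetical sketch of the pause protocol: instead of closing its
// channels, each remote device (phone or robot) parks itself in STANDBY
// when the PC client sends "pause", so the PC client can be restarted
// and later resume the experiment without touching the devices.
public class PauseProtocol {
    public enum State { RUNNING, STANDBY }

    public static class RemoteDevice {
        private State state = State.RUNNING;

        // Called whenever a control message arrives from the PC client.
        public void onMessage(String command) {
            if (command.equals("pause")) {
                state = State.STANDBY;   // keep channels open, stop working
            } else if (command.equals("resume")) {
                state = State.RUNNING;   // carry on where we left off
            }
        }

        public State getState() { return state; }
    }
}
```

The key design point is that "pause" does not tear anything down: both devices keep their connections alive, which is what makes a PC-only restart possible.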

Supervisor’s comment:

You should have had a better network diagram to start with!!!
Even so, this is a real achievement, because stopping part of a distributed network, changing one part, and then resuming asynchronously is really hard – well done.
The justification is absolutely correct, and the fact that you built pause before attempting the actual control algorithm makes it so much more impressive, showing great forethought!!
He informed me that he has an algorithm working, but it required better isolation of the ball colours and signs. The obvious thing would be a sheet, probably white, with walls to reduce the interference from other objects in the environment.
Over the holidays he intends to build the arm that ‘captures’ the balls but it is a holiday so he may be unable due to external circumstances!!

NXT-Spy – Stop button, architecture diagram revised and first ball following algorithm

A lot of quality work this week:
– First, because I was fed up with having to close the application on the three devices every time I wanted to restart the experiment, I decided to add a “stop” button that spreads a “stop” message to the two remote devices (phone and robot), automatically stopping the application running on them.
– Then, I removed the “start the server” button on the phone because it was no longer very useful: the server can start automatically when the phone’s application launches.
– I finally revised the architecture of the program running on the PC:
* First of all, I merged the three flag-processing processes into one; there is no point analysing the same image with the same algorithm and the same parameters several times. Instead, I added three outputs to the new flag-processing process (one for each flag: red, blue and home).
* I have also removed the multiplexer: it was there to avoid processing images we didn’t need, but it was just too complicated to manage, and the three image-processing processes are not that heavy to run. Removing it also greatly simplified the diagram by eliminating the potential deadlock visible in the first architecture.
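The merged flag process could be pictured as "analyse once, fan out three times". The sketch below is illustrative: `Output` stands in for the project's actual channel type, and `"flags(...)"` is a placeholder for the real image analysis.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of collapsing three identical flag-processing steps into one:
// the image is analysed a single time, and the one result is written to
// all connected outputs (red flag, blue flag, home flag).
public class FlagProcessor {
    // Stand-in for a one-way channel to a downstream process.
    public interface Output { void write(String processed); }

    private final List<Output> outputs = new ArrayList<>();

    public void connect(Output out) { outputs.add(out); }

    // Analyse the frame once, then fan the single result out.
    public void process(String frame) {
        String processed = "flags(" + frame + ")"; // placeholder for the real analysis
        for (Output out : outputs) {
            out.write(processed);
        }
    }
}
```

This makes the saving explicit: one analysis per frame instead of one per flag, with the fan-out costing almost nothing.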

During this week, I also started writing the state-based “artificial intelligence” that will drive the robot. I have only written the ball-following algorithm so far; it is still not perfect, but I have already got some interesting results…
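The steering part of a ball follower can be sketched as a pure decision function: given the ball's x coordinate in the frame, turn towards it or drive forward. This is a minimal sketch, not the project's actual algorithm; the frame width and dead-band values are illustrative.

```java
// Minimal sketch of the "keep the ball centred" decision in a ball
// follower: if the ball is left of centre turn left, right of centre
// turn right, otherwise drive forward. A dead-band around the centre
// stops the robot from oscillating when the ball is roughly centred.
public class BallFollower {
    public enum Action { TURN_LEFT, FORWARD, TURN_RIGHT }

    public static Action decide(int ballX, int frameWidth) {
        int centre = frameWidth / 2;
        int deadBand = frameWidth / 10; // illustrative tolerance
        if (ballX < centre - deadBand) return Action.TURN_LEFT;
        if (ballX > centre + deadBand) return Action.TURN_RIGHT;
        return Action.FORWARD;
    }
}
```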

Supervisor’s comment:

This is a much improved architecture that follows client-server principles more closely. It makes good sense.
Pleased to see that you are already starting to seek and find a ball.
During the week we discussed how he might “capture” a ball so that it can be moved around more easily.  This will have to wait until Christmas when he returns home and has his full Lego kit available to play with.
More coding is the name of the game to play next week.

NXT-Spy – Let’s put all the image-processing processes together

To be able to write the pseudo-AI that will control the robot automatically based on what it sees, I needed to put together all the filters I have written over the last two weeks. So I decided to first design an architecture suitable for the project.

To read this diagram, you have to start from the bottom and work towards the top. First, the YUV image is decoded to RGB by the convert-YUV process. The image is then sent to the UI and to a demultiplexer that dispatches it to the processes that need it.

The demultiplexer reads the current state of the program (seeking the blue ball, seeking the red flag, idle, etc.) and sends the image only to the processes that need it. For example, if the state is “find a ball”, the demultiplexer sends the image to both the red-ball processor and the blue-ball processor. This avoids analysing images we won’t need.
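The demultiplexer's routing decision could be sketched as a simple mapping from state to target processors. The state and processor names below are illustrative stand-ins, not the project's exact identifiers.

```java
import java.util.List;

// Sketch of the demultiplexer's routing table: given the program state,
// decide which image processors should receive the current frame.
// Frames are simply not forwarded to processors that don't need them.
public class Demultiplexer {
    public enum State { FIND_BALL, SEEK_RED_FLAG, SEEK_BLUE_FLAG, IDLE }

    public static List<String> targetsFor(State state) {
        switch (state) {
            case FIND_BALL:      return List.of("redBallProcessor", "blueBallProcessor");
            case SEEK_RED_FLAG:  return List.of("redFlagProcessor");
            case SEEK_BLUE_FLAG: return List.of("blueFlagProcessor");
            default:             return List.of(); // idle: skip image processing entirely
        }
    }
}
```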

The ball image processors have a “configure” input that allows the user to set the HSI intervals. This is not needed for the flag processors because they work with black-and-white images that require no human calibration.
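The kind of per-pixel test a configurable HSI interval implies might look like the sketch below. This is an assumption about the general technique, not the project's actual code, and all thresholds are illustrative; note that hue wraps around 360°, so an interval for red such as [340°, 20°] spans the wrap point.

```java
// Hedged sketch of an HSI interval check for a ball processor: a pixel
// matches when its hue falls in the configured interval (handling the
// 360-degree wrap-around) and its saturation and intensity are high
// enough to be a reliable colour reading.
public class HsiInterval {
    public static boolean matches(double hue, double sat, double intensity,
                                  double hueLow, double hueHigh,
                                  double satMin, double intMin) {
        boolean hueOk = (hueLow <= hueHigh)
                ? (hue >= hueLow && hue <= hueHigh)
                : (hue >= hueLow || hue <= hueHigh); // interval wraps past 360°
        return hueOk && sat >= satMin && intensity >= intMin;
    }
}
```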

After the image is processed, it is sent to the UI (so the user has feedback while calibrating the ball processors). The processors also output the coordinates of the object they have identified as the one they are looking for (this will allow the robot to keep the target object centred so that it won’t lose it). The coordinate buffers are only there to prevent the pipeline from getting stuck if the coordinates are not read quickly.
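The coordinate buffer idea can be sketched as a one-place overwriting buffer: the writer never blocks, and a slow reader just sees the freshest value. JCSP offers this notion as an overwriting channel buffer; the plain-Java version below is illustrative rather than the project's actual implementation.

```java
// Sketch of a coordinate buffer: a one-place overwriting buffer so the
// image processor (writer) is never stuck waiting for the controller
// (reader). A new coordinate simply replaces the stale one.
public class CoordinateBuffer {
    private int[] latest; // most recent (x, y), or null if nothing pending

    // The writer never waits: the newest coordinate wins.
    public synchronized void write(int x, int y) {
        latest = new int[] { x, y };
    }

    // The reader takes whatever is freshest (null when nothing is pending).
    public synchronized int[] read() {
        int[] value = latest;
        latest = null;
        return value;
    }
}
```

Overwriting is the right policy here because only the most recent position of the object matters to the controller; stale coordinates are worthless.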

And finally, on top, there is the controller, which has access to all the information it needs to run a state-based AI (the coordinates of the object it needs and direct access to the robot controls).

After this analysis, I coded most of it, and it works so far. It is not completely tested, as neither the controller nor the configuration UI is coded yet. For the moment I have a total of 10 classes, 15 processes and 29 channels.

So, as you guessed, the next step is to code the controller’s algorithm that will drive the robot where it needs to go.

Supervisor’s comment:

OK, so the architecture is designed and coding is progressing well.
Comment: It’s really easy because I am using processes to undertake the actions!