A lot of quality work this week:
– First, because I was fed up with having to close the application on all three devices every time I wanted to restart the experiment, I added a “stop” button that broadcasts a “stop” message to the two remote devices (phone and robot), which automatically stops the application running on each of them.
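The broadcast could look something like the sketch below. The transport, addresses, ports and the exact message format are all assumptions (the post does not say how the devices communicate); the idea is only that one button press sends the same “stop” message to every remote device.

```python
import socket

# Hypothetical addresses of the two remote devices (phone and robot);
# the real transport, hosts and ports are assumptions, not from the post.
REMOTE_DEVICES = [("192.168.1.10", 5000), ("192.168.1.11", 5000)]

def broadcast_stop(devices=REMOTE_DEVICES, timeout=2.0):
    """Send a "STOP" message to each remote device so its app shuts itself down."""
    reached = []
    for host, port in devices:
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                conn.sendall(b"STOP\n")
                reached.append((host, port))
        except OSError:
            # Device unreachable: skip it so the others still get the message.
            pass
    return reached
```

Each remote application would listen for this message and exit when it arrives, so nothing has to be closed by hand anymore.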
– Then, I removed the “start the server” button on the phone because it was no longer very useful: the server can start automatically when the phone’s application starts.
– I finally revised the architecture of the program running on the PC:
* First of all, I merged all the flag-processing processes into a single process: there is no point analyzing the same image twice with the same algorithm and the same parameters. Instead, I added three outputs to the new flag-processing process (one for each flag: red, blue and home).
* I also removed the multiplexer. It was there to avoid processing images we didn’t need, but it was just too complicated to manage, and the three image-processing processes are not that heavy to run. Removing it also simplified the diagram a lot by eliminating the potential deadlock that was visible in the first architecture.
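The “analyze once, fan out to three outputs” idea could be sketched like this. The detection format and the threshold are invented for illustration; the point is that the frame is processed a single time and the result is routed to one output per flag.

```python
# A hypothetical frame result: a list of (label, confidence) detections produced
# by ONE run of the flag-detection algorithm. Names and threshold are assumptions.
FLAGS = ("red", "blue", "home")

def process_frame(detections, threshold=0.5):
    """Analyze the frame once, then fan the result out to one output per flag."""
    outputs = {flag: None for flag in FLAGS}
    for label, confidence in detections:  # single pass over the shared result
        if label in outputs and confidence >= threshold:
            outputs[label] = confidence
    return outputs
```

For example, `process_frame([("red", 0.9), ("home", 0.3)])` fills only the red output, since the home detection is below the threshold. Compared with running the same algorithm three times, the work of analyzing the image is done once and only the routing is duplicated.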
During this week, I also started writing the state-based “artificial intelligence” that will drive the robot. So far I have only written the ball-following algorithm; it is still not perfect, but I have already got some interesting results…
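A state-based ball follower might look like the minimal sketch below. The states, commands, frame width and dead zone are all assumptions (the post gives no details of the algorithm); it only illustrates the idea of switching between searching and following based on where the ball appears in the camera frame.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()  # spin in place until the ball is seen
    FOLLOW = auto()  # drive toward the ball, steering to keep it centered

def next_command(state, ball_x, frame_width=320, dead_zone=30):
    """One step of a hypothetical state machine.

    ball_x is the ball's horizontal position in the camera frame,
    or None when no ball is detected.
    """
    if ball_x is None:
        return State.SEARCH, "spin"
    center = frame_width / 2
    if ball_x < center - dead_zone:
        return State.FOLLOW, "turn_left"
    if ball_x > center + dead_zone:
        return State.FOLLOW, "turn_right"
    return State.FOLLOW, "forward"
```

Each iteration of the robot’s control loop would call `next_command` with the latest detection and translate the returned command into motor actions.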
This is a much-improved architecture that follows client-server principles more closely. It makes good sense.
Pleased to see that you are already starting to seek and find a ball.
During the week we discussed how he might “capture” a ball so that it can be moved around more easily. This will have to wait until Christmas when he returns home and has his full Lego kit available to play with.
More coding is the name of the game to play next week.