Here we are at last: after more than four months of work on this part of the project, I have finally managed to put everything together 🙂
Here is the UML diagram showing how the different processes interact with each other:
For this implementation, I simply combined the work done on the camera transfer with the work done on the robot control, and thanks to the parallel architecture this was quite easy to achieve.
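To give an idea of what "gluing" two independent processes together looks like, here is a minimal sketch (my own illustration, not the project's actual code) of a camera process and a control process connected by a queue, so each can run at its own pace. All names and the frame placeholder are assumptions for the example.

```python
# Sketch of a parallel architecture: a camera process streams frames into a
# queue, and a robot-control process consumes them independently.
from multiprocessing import Process, Queue

def camera_process(frames: Queue, n_frames: int) -> None:
    # Stand-in for the camera-transfer loop: capture a frame, push it on.
    for i in range(n_frames):
        frames.put(f"frame-{i}")   # a real version would send image data
    frames.put(None)               # sentinel: no more frames

def control_process(frames: Queue, results: Queue) -> None:
    # Stand-in for the robot-control loop: consume frames, act on each one.
    handled = 0
    while frames.get() is not None:
        handled += 1               # a real version would drive the robot here
    results.put(handled)

if __name__ == "__main__":
    frames, results = Queue(), Queue()
    cam = Process(target=camera_process, args=(frames, 5))
    ctl = Process(target=control_process, args=(frames, results))
    cam.start(); ctl.start()
    cam.join(); ctl.join()
    print(results.get())  # number of frames the controller handled -> 5
```

The point of the queue is decoupling: neither side blocks the other for long, which is what makes stitching the two earlier pieces of work together so painless.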
Here is a short video to illustrate it. Note: the frame rate is not great, but the video was shot a bit too far from the router.
Arnaud sent me the bulk of the chapter that describes what he has been doing to explore the basic technology.
It was a joy to read: well structured and well organised, but above all informative and enjoyable.
I made some suggestions about expanding certain parts and improving the structure.
He has found a web site, uk-dave.com, that contains a graphics-processing class designed for a Lego robot to detect coloured balls. It processes data in exactly the form that his pipeline generates. This means he can spend his time on moving the balls to selected corners without having to build the graphics engine himself. Excellent!!
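The core idea behind that kind of coloured-ball detection can be sketched very simply: classify each pixel by a colour threshold, then take the centre of mass of the matching pixels as the ball position. The sketch below is my own illustration (not uk-dave's actual class), and the thresholds are made-up assumptions.

```python
# Rough sketch of coloured-ball detection via colour thresholding.
def is_red(pixel):
    """Crude colour test on an (r, g, b) tuple -- thresholds are invented."""
    r, g, b = pixel
    return r > 150 and g < 100 and b < 100

def find_ball(image):
    """Return the (x, y) centre of the red pixels in a 2-D grid, or None."""
    hits = [(x, y)
            for y, row in enumerate(image)
            for x, pixel in enumerate(row)
            if is_red(pixel)]
    if not hits:
        return None
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    return (cx, cy)

# Tiny 3x3 test image: one red "ball" pixel in the middle.
image = [[(0, 0, 0)] * 3 for _ in range(3)]
image[1][1] = (200, 20, 20)
print(find_ball(image))  # -> (1.0, 1.0)
```

With the detection handled by an existing class, the remaining work really is just the strategy layer: deciding where to push each ball once you know where it is.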