Today, I realised something: at the moment, the Controller is analysing the coordinates provided by the different coordinate buffer processes, so the Controller can analyse the same image several times as long as no new image has been processed.
But because there is a big lag in receiving the images, this might not be the appropriate solution: it is not precise enough.
A better solution might be to make every coordinate buffer block as long as no new coordinates have arrived. By doing that, the Controller would act on the robot only when an image processor outputs a new coordinate.
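A minimal sketch of what such a blocking coordinate buffer could look like, using `java.util.concurrent.BlockingQueue` with capacity 1 so the Controller only ever sees the freshest coordinate. The class and method names here are my own assumptions for illustration, not the project's actual code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingCoordinateBuffer {
    // Hypothetical coordinate type: an (x, y) pair from an image processor.
    public record Coordinate(int x, int y) {}

    // Capacity 1: only the most recent coordinate matters to the Controller.
    private final BlockingQueue<Coordinate> queue = new ArrayBlockingQueue<>(1);

    // Called from an image-processor thread when a new coordinate is ready.
    public void publish(Coordinate c) {
        queue.clear();   // drop any stale coordinate still waiting
        queue.offer(c);  // non-blocking insert into the now-empty queue
    }

    // Called from the Controller thread: blocks until a NEW coordinate arrives,
    // so the Controller never acts twice on the same image.
    public Coordinate awaitNext() throws InterruptedException {
        return queue.take();
    }
}
```

With one such buffer per image processor, the Controller's loop simply calls `awaitNext()` and issues a robot command per fresh coordinate, instead of re-analysing stale ones.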
I will try this out next Wednesday, when I have access to the robot’s environment again.
So this has been a big week! As usual with things that involve robots, you go two steps forward and one back.
The fact that the android/lejosNXT problem has been solved for Bluetooth was very fortunate! That it worked first time was just amazing. Send a note of thanks to the person that posted it!!
I am not surprised that the image processing is causing a problem, given the frame rate that is possible with this version of the phone.
The robot's behaviour therefore has to take account of this limitation. That is: wiggle slowly to find the target, processing each image until a target is found; then move more quickly towards the target. The robot will have to move more slowly – so what!!
The robot needs some engineering to counterbalance the offset weight of the phone.