As planned in the project’s Gantt chart, this week was spent on writing the project report.
More importantly, we discussed the Critical Evaluation that goes in the final chapter.
This should cover things that were done that could have been done better, e.g. QR codes vs. Data Matrix.
Use of the blog to generate reference lists, which then made it possible to check that material had been written up so references could be cited.
POSTER: prepare an outline poster we can discuss.
The controller has now been fully tested. The verdict is that it behaves correctly, but it is very slow to operate. To fix the object-following algorithm, I created a graph in Excel to get a better idea of how the robot was behaving based on the position of the ball on the screen.
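The graphed relationship between ball position and robot behaviour suggests a simple proportional steering rule. As a minimal sketch of that idea (the frame width, gain, and method names here are illustrative assumptions, not values from the project):

```java
// Hypothetical proportional object-following rule: the further the ball is
// from the centre of the frame, the harder the robot turns towards it.
public class FollowSketch {
    static final int FRAME_WIDTH = 320; // assumed camera frame width in pixels
    static final double GAIN = 0.5;     // assumed steering gain

    // Returns {leftSpeed, rightSpeed} for a ball centred at ballX pixels.
    static int[] motorSpeeds(int ballX, int baseSpeed) {
        int error = ballX - FRAME_WIDTH / 2; // negative when the ball is to the left
        int turn = (int) (GAIN * error);
        // Slow one wheel and speed up the other to steer towards the ball.
        return new int[] { baseSpeed + turn, baseSpeed - turn };
    }

    public static void main(String[] args) {
        int[] centred = FollowSketch.motorSpeeds(160, 100); // ball dead centre
        int[] onLeft = FollowSketch.motorSpeeds(60, 100);   // ball on the left
        System.out.println(centred[0] + "," + centred[1]);  // prints 100,100
        System.out.println(onLeft[0] + "," + onLeft[1]);    // prints 50,150
    }
}
```

With the robot deliberately running slowly, a small gain like this keeps the turning gentle enough that the laggy image stream can still keep up.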
I read and commented on the draft report and made a number of structural suggestions, including the need for an introductory chapter and a conclusion chapter. We have the meat of the dissertation but not the bookends!
New functionality was added to the website to allow exporting all the external links from each post as a reference list.
Another feature was added to the website to export all the blog posts for the dissertation.
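The link-export idea above could be sketched roughly as follows; this is an illustrative assumption about how it might work (the class name, the regex, and the sample post are hypothetical, not the site's real code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: scan a post's HTML for anchor tags and collect the href
// values as a crude reference list for the dissertation.
public class LinkExport {
    static final Pattern HREF = Pattern.compile("href=\"(http[^\"]+)\"");

    static List<String> extractLinks(String postHtml) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(postHtml);
        while (m.find()) {
            links.add(m.group(1)); // capture just the URL inside the quotes
        }
        return links;
    }

    public static void main(String[] args) {
        String post = "<p>See <a href=\"http://lejos.org/\">leJOS</a> docs.</p>";
        System.out.println(LinkExport.extractLinks(post)); // prints [http://lejos.org/]
    }
}
```

Running something like this over every post, then checking the collected URLs against the report's bibliography, is exactly the "check that stuff had been written so references could be cited" workflow mentioned above.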
Today is a big day for this project: the robot now manages to find balls and catch them on its own.
In fact, all of last week's problems are solved. A ball of Blu-Tack fixed the balancing problem, and the solution I thought of last week for the lag problem worked quite well: apart from the fact that the robot moves very slowly, it manages to find balls quite efficiently.
So here is the video that shows the success of the experiment (I sped the video up to 200% because the robot operates very slowly):
Today, I also did some testing in the white environment and found that Data Matrix 2D barcodes worked far better than QR codes, as they are much less complex. I am even wondering why I chose QR codes in the first place… maybe because they look nicer 🙂
Well done! How about finding the Blue and Red corners? They work most of the time, but because the flag goes out of view it is hard to know when you have reached the corner.
He has also written up the report so far, as he is not here next week. So he has done two weeks' work in one!
Today, I realised something: at the moment, the Controller is analysing the coordinates provided by the different coordinate-buffer processes, so it can end up analysing the same image several times as long as no new images have been processed.
But because there is a big lag in receiving the images, this might not be the appropriate solution, as it is not precise enough.
A better solution might be for all the coordinate buffers to block as long as no new coordinates have arrived. That way, the Controller would only act to control the robot when an image processor outputs a new coordinate.
I will try this out next Wednesday, when I have access to the robot's environment again.
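The blocking-buffer idea could be sketched with a one-slot blocking queue: the Controller blocks until the image processor publishes a fresh coordinate, so the same frame is never acted on twice. This is a minimal sketch under assumed names (`CoordinateBuffer`, `publish`, `awaitNext` are illustrative, not the project's actual classes):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One-slot buffer: the consumer blocks until a new coordinate arrives,
// and a stale unconsumed coordinate is replaced rather than queued up.
public class CoordinateBuffer {
    // Capacity 1: at most one (the latest) coordinate is ever pending.
    private final BlockingQueue<int[]> queue = new ArrayBlockingQueue<>(1);

    // Image-processor side: drop any stale coordinate, then insert the new one.
    public void publish(int x, int y) {
        queue.clear();                     // discard the unconsumed old value
        queue.offer(new int[] { x, y });   // slot is now free, so this succeeds
    }

    // Controller side: blocks until the processor publishes a coordinate.
    public int[] awaitNext() throws InterruptedException {
        return queue.take();
    }
}
```

With one buffer per image processor, the control loop simply calls `awaitNext()` and is guaranteed each action corresponds to a newly processed frame, which is the precision the lag was undermining.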
So this has been a big week! As usual with things that involve robots, you go two steps forward and one back.
The fact that the Android/leJOS NXT Bluetooth problem has been solved was very fortunate! Working first time was just amazing. Send a note of thanks to the person who posted it!!
I am not surprised that the image processing is causing a problem, given the frame rate that is possible with this version of the phone.
The robot's behaviour therefore has to take account of this limitation: wiggle slowly to find the target by processing each image until a target is found, then move more quickly towards it. The robot will have to move more slowly, but so what!!
The robot needs some engineering to counterbalance the offset weight of the phone.