As I already pointed out, the robot has trouble recognizing the 2D bar-codes while it is moving, so I had the idea of associating a coloured object (a green piece of paper) with every bar-code, and I added the necessary processing to the computer application so that it can recognize the green colour. In short, when the robot wants to find the blue zone, it looks for the “B” bar-code; as soon as it finds it, it follows the green coloured object placed just next to it.
By doing that, the robot no longer has to recognize a bar-code on every image, because the bar-code is only used once, for identification. And since after identification the robot is following a coloured object, it is far more efficient (I demonstrated earlier that the colour recognition stays stable and accurate even on blurry images).

The problem is that even this method was not stable enough to be reliable: it still happened that the robot failed to recognize the bar-code in the first place, and so it never started following the green object either.
So I decided to change a key element of the problem, the bar-code recognition itself: remove the bar-codes from the equation entirely and replace them with pieces of paper of different colours.
Indeed, if my colour recognition algorithm can perfectly differentiate red from blue, it should also be able to differentiate green, yellow and violet (three colours whose hues differ enough from each other).

I spent a couple of hours replacing all the flag recognition with coloured-zone recognition and adapting the UI to configure the zones properly. The new set-up still needs to be tested, but I am very confident: I have already tried it outside the test environment and it looks promising. Since I won't be at uni tomorrow, though, the actual testing will probably have to wait until Wednesday.

Supervisor’s comment:
Yet another clever solution to a nagging problem, well done.
We discussed his poster design. I made a few suggestions.