To be able to write my pseudo AI that will control the robot automatically based on what it sees, I needed to put together all the filters I had written over the last two weeks. So I decided to first design an architecture suitable for the project.
To read this diagram, you have to go from the bottom to the top. First, the YUV image is decoded to RGB by the convert YUV process. The image is then sent to the UI and to a demultiplexer that dispatches it to the processes that need it.
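The YUV-to-RGB step can be sketched per pixel. This is a minimal illustration, not the project's actual code, using the standard BT.601 full-range conversion formulas:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel (0-255 each) to RGB."""
    d = u - 128  # chroma components are centered on 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# Neutral chroma gives a grey pixel:
print(yuv_to_rgb(128, 128, 128))  # → (128, 128, 128)
```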
The demultiplexer reads the current state of the program (seeking blue ball, seeking red flag, idle, etc.) and sends the image only to the processes that need it. For example, if the state is "find a ball", the demultiplexer sends the image to the red ball processor and to the blue ball processor. This avoids analyzing images that we won't need.
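The routing idea can be sketched like this. The state names and processor registry below are hypothetical stand-ins for the real program's states and processes; channels are modelled with queues:

```python
import queue

# Hypothetical mapping from program state to the processors that
# actually need the frame (the real states include "seeking blue
# ball", "seeking red flag", "idle", ...).
ROUTES = {
    "find ball": ["red ball", "blue ball"],
    "find flag": ["red flag", "blue flag"],
    "idle": [],  # no processing needed, frame is dropped
}

def demultiplex(state, frame, processors):
    """Forward the frame only to the processors the current state needs."""
    for name in ROUTES.get(state, []):
        processors[name].put(frame)

procs = {n: queue.Queue() for n in ("red ball", "blue ball", "red flag", "blue flag")}
demultiplex("find ball", "frame-1", procs)
# Only the two ball processors received "frame-1"; the flag queues stay empty.
```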
The ball image processors have a "configure" input that lets the user set the HSI intervals. The flag processors don't need it because they work with black-and-white images that require no human calibration.
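An HSI interval check might look like the sketch below. This is an assumption about how the calibration is used, not the post's actual code; note that hue is circular, so an interval like (340, 20) for red has to wrap around 0:

```python
def in_hsi_interval(pixel, h_range, s_range, i_range):
    """True if pixel = (hue, saturation, intensity) lies inside the
    configured intervals. Hue is in degrees (0-360) and may wrap."""
    h, s, i = pixel
    h_lo, h_hi = h_range
    if h_lo <= h_hi:
        hue_ok = h_lo <= h <= h_hi
    else:  # wrapped interval, e.g. (340, 20) for red
        hue_ok = h >= h_lo or h <= h_hi
    return hue_ok and s_range[0] <= s <= s_range[1] and i_range[0] <= i <= i_range[1]

# A reddish pixel at hue 350 falls inside the wrapped (340, 20) interval:
print(in_hsi_interval((350, 0.8, 0.5), (340, 20), (0.5, 1.0), (0.2, 0.9)))  # → True
```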
After the image is processed, it is sent to the UI (so the user has feedback while calibrating the ball processors). The processors also output the coordinates of the object they have identified as the one they are looking for (this lets the robot keep the object centered so that it won't lose it). The coordinates buffers are only there to prevent the processors from getting stuck if the coordinates are not read quickly enough.
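The buffer's job can be sketched as a one-place overwriting buffer: writers never block, and a slow reader simply sees the newest coordinates instead of a growing backlog. This is a guess at the mechanism, not the project's actual class:

```python
import threading

class OverwritingBuffer:
    """One-place buffer: put() always succeeds (overwriting the old
    value), so the writing processor can never get stuck on a slow reader."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def put(self, coords):
        with self._lock:
            self._value = coords  # discard stale coordinates

    def get(self):
        with self._lock:
            return self._value  # always the freshest coordinates

buf = OverwritingBuffer()
buf.put((10, 20))
buf.put((12, 21))   # overwrites; the processor never blocked
print(buf.get())    # → (12, 21)
```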
And finally, on top, there is the controller, which has access to all the information it needs to run a state-based AI: the coordinates of the object it is tracking and direct access to the robot controls.
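One tick of such a state-based controller could look like the sketch below. The state names, thresholds, and commands are all hypothetical; it only illustrates the "read coordinates, re-center, drive" loop:

```python
def controller_step(state, coords, robot):
    """One tick of a hypothetical state-based controller: re-center the
    tracked object, or search when nothing is seen. `coords` is the
    object's horizontal offset from image center, or None if not found."""
    if state == "idle" or coords is None:
        robot.append("search")       # spin until the object appears
    elif coords[0] < -10:
        robot.append("turn left")    # object drifted left: re-center it
    elif coords[0] > 10:
        robot.append("turn right")   # object drifted right: re-center it
    else:
        robot.append("forward")      # centered: drive toward it

robot_log = []
controller_step("find ball", (30, 0), robot_log)
print(robot_log)  # → ['turn right']
```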
After this analysis, I coded most of it, and so far it works. It is not completely tested yet, since neither the controller nor the configuration UI is coded. For the moment I have written a total of 10 classes, 15 processes and 29 channels.
So, as you may have guessed, the next step is to code the controller's algorithm that will drive the robot where it needs to go.
OK, so the architecture is designed and coding is progressing well.
Comment: It's really easy because I am using processes to carry out the actions!