Summary
In this chapter, you saw how to use the camera to detect a line and how to plot data showing what it found. You then saw how to feed this data into driving behavior so that the robot follows the line. You added to your OpenCV knowledge, and I showed you a handy trick for rendering graphs directly into the frames of the camera stream output. You also saw how to tune the PID controller to make the line following more accurate, and how to ensure the robot stops predictably when it has lost the line.
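The shape of that behavior can be summed up in a few lines. The following is a minimal sketch only, not the chapter's actual code: the Robot methods (set_left, set_right, stop), the get_line_offset function, and the gain values are hypothetical placeholders standing in for whatever camera pipeline and motor interface you built.

```python
"""Illustrative sketch of PID line following with a lost-line stop."""


class PIDController:
    """Turns the line's offset (error) into a steering correction."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, error):
        self.integral += error
        derivative = error - self.last_error
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def follow_line(robot, get_line_offset, base_speed=60, max_lost_frames=10):
    """Steer so the detected line stays centred; stop once it is lost."""
    pid = PIDController(kp=0.3, ki=0.01, kd=0.1)
    lost_frames = 0
    while lost_frames < max_lost_frames:
        # Offset of the line from the image centre in pixels,
        # or None if no line was found in this frame.
        offset = get_line_offset()
        if offset is None:
            lost_frames += 1
            continue
        lost_frames = 0
        correction = pid.update(offset)
        robot.set_left(base_speed + correction)
        robot.set_right(base_speed - correction)
    robot.stop()  # line lost for too many frames: stop predictably
```

Tuning in the chapter amounts to adjusting the gains (kp, ki, kd here) until the robot tracks the line smoothly without oscillating, while the lost-frame counter gives the predictable stop behavior.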
In the next chapter, you will see how to communicate with the robot using a voice agent, Mycroft. You will add a microphone and speakers to a Raspberry Pi and then install speech recognition software. This will let you speak commands to the Raspberry Pi, which will send them to the robot, and Mycroft will respond to let you know what it has done.