This chapter presented an overview of the classic perceptron model. We covered the theoretical model and its implementation in Python for both linearly and non-linearly separable datasets. At this point, you should feel confident that you know enough about the perceptron to implement it yourself. You should also be able to recognize the perceptron model as a simple computational model of a neuron. Finally, you should be able to implement the pocket algorithm and early termination strategies in a perceptron, or in any other iterative learning algorithm more generally.
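To tie those pieces together, here is a minimal sketch of a perceptron training loop that combines the pocket algorithm with early termination. It assumes a NumPy feature matrix X and labels y in {-1, +1}; the function name train_pocket_perceptron and the hyperparameters max_epochs and patience are illustrative choices, not the exact code from the chapter.

```python
import numpy as np

def train_pocket_perceptron(X, y, max_epochs=100, patience=10):
    """Perceptron with a pocket (best-so-far weights) and early termination.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    max_epochs and patience are illustrative hyperparameter names.
    """
    # Append a bias column of ones so the bias is learned as an extra weight.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])

    def n_errors(weights):
        return int(np.sum(np.sign(Xb @ weights) != y))

    pocket_w, pocket_errors = w.copy(), n_errors(w)
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        for xi, yi in zip(Xb, y):
            # Classic perceptron update on each misclassified sample.
            if yi * (xi @ w) <= 0:
                w = w + yi * xi

        errors = n_errors(w)
        if errors < pocket_errors:
            # Pocket algorithm: keep the best weights seen so far.
            pocket_w, pocket_errors = w.copy(), errors
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1

        # Early termination: stop if the data is perfectly separated
        # or if there has been no improvement for `patience` epochs.
        if pocket_errors == 0 or epochs_without_improvement >= patience:
            break

    return pocket_w, pocket_errors
```

On a non-linearly separable dataset the pocket weights are what you would return, since the plain perceptron update never settles in that case.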
Since the perceptron is the essential building block that paved the way for deep neural networks, now that we have covered it, the next step is Chapter 6, Training Multiple Layers of Neurons. In that chapter, you will be exposed to the challenges of deep learning with the multi-layer perceptron algorithm, such as gradient descent techniques for error minimization and hyperparameter optimization to achieve better generalization. But before...