A simple perceptron is a single-layer neural network. It uses the threshold activation function and, as Minsky and Papert showed in their book Perceptrons (1969), can solve only linearly separable problems. While this restricts the single-layer perceptron to linearly separable problems, it is still fascinating to watch it learn.
Single layer perceptron
Getting ready
As the perceptron uses the threshold activation function, we cannot use the TensorFlow optimizers to update the weights. We will have to use the weight update rule:

$\Delta w_{ij} = \eta \, (y_j - \hat{y}_j) \, x_i$

Here $\eta$ is the learning rate. For programming simplicity, the bias can be added as one additional weight with its input fixed to +1. Then the preceding equation can be used to update both the weights and the bias simultaneously...
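The update rule described above can be sketched in plain NumPy before moving to TensorFlow. This is a minimal illustration, not the recipe's final code: the names (`threshold`, `train_perceptron`, `eta`) and the 20-epoch training loop are illustrative assumptions; the bias is folded in as an extra weight whose input is fixed to +1, exactly as the text suggests.

```python
import numpy as np

def threshold(z):
    # Step activation: 1 if the weighted sum is positive, else 0
    return (z > 0).astype(float)

def train_perceptron(X, y, eta=0.1, epochs=20):
    # Append a constant +1 input so the bias is just one more weight
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(X_aug.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X_aug, y):
            y_hat = 1.0 if w @ x_i > 0 else 0.0
            # Perceptron rule: w <- w + eta * (target - prediction) * input,
            # updating weights and bias together
            w += eta * (y_i - y_hat) * x_i
    return w

def predict(w, X):
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    return threshold(X_aug @ w)
```

For example, training on the OR truth table (a linearly separable problem) converges to weights that classify all four inputs correctly, whereas XOR would never converge, illustrating the limitation noted above.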