Neural networks and decision boundaries
In the previous section we saw that, by adding hidden units to a neural network, we can approximate the target function more closely. However, we haven't yet applied this idea to a classification problem. To do so, we will generate data whose classes are not linearly separable and watch how the decision surface changes as we add hidden units to the architecture. Let's see the universal approximation theorem at work! First, we generate some non-linearly separable data with two features and set up our neural network architectures so that we can compare the decision boundary each one learns:
%matplotlib inline
from sknn.mlp import Classifier, Layer
from sklearn import preprocessing
from sklearn import datasets
from sklearn.datasets import make_blobs
from itertools import product
import numpy as np
import matplotlib.pyplot as plt

# Two interleaving half-moons: a classic non-linearly separable dataset
X, y = datasets.make_moons(n_samples=500, noise=.2, random_state=222)

# Baseline network: no hidden layer, just a softmax output,
# so it can only learn a linear decision boundary
net1 = Classifier(
    layers=[
        Layer("Softmax")])
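One way to carry out the comparison is sketched below. The hidden-unit counts, the n_iter values, and the plot_decision_boundary helper are illustrative assumptions rather than fixed choices: we define two further networks with hidden layers of increasing size, fit each one to the moons data, and colour a grid of points by the class each network predicts, which traces out its decision surface:

# A minimal sketch, assuming the sknn Classifier/Layer API shown above;
# the hidden-unit counts and n_iter values here are illustrative.
net2 = Classifier(
    layers=[
        Layer("Sigmoid", units=4),   # one small hidden layer
        Layer("Softmax")],
    n_iter=100)

net3 = Classifier(
    layers=[
        Layer("Sigmoid", units=10),  # a larger hidden layer
        Layer("Softmax")],
    n_iter=100)

def plot_decision_boundary(net, X, y, title):
    """Fit the network and draw its decision surface over the feature space."""
    net.fit(X, y)
    # Build a fine grid covering the range of both features
    x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
    y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                         np.arange(y_min, y_max, 0.02))
    # Predict a class for every grid point and colour the regions accordingly
    Z = net.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.contourf(xx, yy, Z, alpha=0.3)
    plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')
    plt.title(title)
    plt.show()

for net, title in [(net1, "No hidden layer"),
                   (net2, "4 hidden units"),
                   (net3, "10 hidden units")]:
    plot_decision_boundary(net, X, y, title)

With no hidden layer, net1 can only draw a straight-line boundary between the two classes, so it is bound to misclassify points where the half-moons interleave. As hidden units are added, the boundary is free to bend and follow the curve of the data, which is exactly the effect the universal approximation theorem leads us to expect.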