We can also vectorize our training labels, which makes the data easier for our network to consume. You can think of vectorization as an efficient way to represent information for a computer. Just as humans are not very good at performing computations with Roman numerals, computers handle unvectorized data poorly. In the following code, we transform our labels into NumPy arrays of 32-bit floating-point values, each either 0.0 or 1.0:
import numpy as np

# Convert the label lists to NumPy arrays of 32-bit floats (0.0 or 1.0)
y_train = np.asarray(y_train).astype('float32')
y_test = np.asarray(y_test).astype('float32')
Finally, our data is ready to be consumed by a neural network. The 2D tensor of reviews is essentially 25,000 stacked vectors, each paired with its own label. All that is left to do is build our network.
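Before building the network, it can be worth a quick sanity check that the shapes and dtypes line up. The sketch below assumes x_train holds the 2D tensor of vectorized reviews from the earlier step (the variable name is an assumption, not guaranteed by the code above):

# Sanity-check the tensors before feeding them to the network.
# x_train is assumed to hold the vectorized reviews (one row per review).
print(x_train.shape)            # first dimension should be 25000
print(y_train.shape)            # (25000,): one label per review
print(y_train.dtype)            # float32
print(set(np.unique(y_train)))  # {0.0, 1.0}

If any of these checks surprises you (for example, a label array that is not purely 0.0 and 1.0), it is easier to catch the problem here than to debug it through a training run.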