Machine Learning with Swift

You're reading from Machine Learning with Swift: Artificial Intelligence for iOS, by Jojo Moolayil, Oleksandr Baiev, and Alexander Sosnovshchenko (Packt, February 2018, ISBN-13 9781787121515).

Training the network


The most common way to train NNs these days is the backward propagation of errors algorithm, or backpropagation (often shortened to backprop). As we have already seen, individual neurons closely resemble linear or logistic regression units, so it should come as no surprise that backpropagation usually goes hand in hand with our old friend, the gradient descent algorithm. NN training works in the following way (a code sketch follows the list):

  • Forward pass—the input is presented to the first layer, and transformations are applied to it layer by layer until the prediction is produced at the last layer.
  • Loss computation—the prediction is compared to the ground truth, and an error value is calculated for each neuron of the output layer using the loss function J.
  • Backward pass—the errors are then propagated backward (hence backpropagation), so that each neuron has an error associated with it, proportional to its contribution to the output.
  • Weight update—the weights (w) are updated using one step of gradient descent. The gradient of the loss function, ∇J(w), is calculated...
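
To make the four steps concrete, here is a minimal, self-contained sketch in Swift. It is an illustration, not code from this book's projects: the 2-2-1 architecture, the XOR dataset, the squared-error loss, the learning rate, and all names (sigmoid, dot, wHidden, and so on) are assumptions chosen for brevity. The comments mark where each of the four steps happens:

import Foundation

// A tiny 2-2-1 network trained with backpropagation and per-sample
// gradient descent on the XOR problem. Sigmoid activations, squared error.

func sigmoid(_ x: Double) -> Double { 1.0 / (1.0 + exp(-x)) }
// Derivative of the sigmoid, expressed through its output value y = sigmoid(x).
func sigmoidDerivative(_ y: Double) -> Double { y * (1.0 - y) }

func dot(_ a: [Double], _ b: [Double]) -> Double {
    zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
}

// Toy dataset: XOR inputs and ground-truth labels.
let inputs: [[Double]] = [[0, 0], [0, 1], [1, 0], [1, 1]]
let targets: [Double] = [0, 1, 1, 0]

// Randomly initialized weights and biases: two hidden neurons, one output.
var wHidden = (0..<2).map { _ in (0..<2).map { _ in Double.random(in: -1...1) } }
var bHidden = [0.0, 0.0]
var wOutput = (0..<2).map { _ in Double.random(in: -1...1) }
var bOutput = 0.0

let learningRate = 0.5

for epoch in 0...10_000 {
    var loss = 0.0
    for (x, t) in zip(inputs, targets) {
        // 1. Forward pass: transformations applied layer by layer.
        let hidden = (0..<2).map { j in sigmoid(dot(wHidden[j], x) + bHidden[j]) }
        let output = sigmoid(dot(wOutput, hidden) + bOutput)

        // 2. Loss computation: squared error J = (output - t)^2 / 2.
        let error = output - t
        loss += 0.5 * error * error

        // 3. Backward pass: an error term (delta) for every neuron,
        //    proportional to its contribution to the output error.
        let deltaOutput = error * sigmoidDerivative(output)
        let deltaHidden = (0..<2).map { j in
            deltaOutput * wOutput[j] * sigmoidDerivative(hidden[j])
        }

        // 4. Weight update: one gradient descent step, w <- w - eta * dJ/dw.
        for j in 0..<2 {
            wOutput[j] -= learningRate * deltaOutput * hidden[j]
            for i in 0..<2 {
                wHidden[j][i] -= learningRate * deltaHidden[j] * x[i]
            }
            bHidden[j] -= learningRate * deltaHidden[j]
        }
        bOutput -= learningRate * deltaOutput
    }
    if epoch % 2_000 == 0 { print("epoch \(epoch): loss = \(loss)") }
}

// After training, the predictions should approach [0, 1, 1, 0].
for x in inputs {
    let hidden = (0..<2).map { j in sigmoid(dot(wHidden[j], x) + bHidden[j]) }
    print("\(x) -> \(sigmoid(dot(wOutput, hidden) + bOutput))")
}

After roughly ten thousand epochs the printed predictions should approach 0, 1, 1, 0. Depending on the random initialization, such a small network can occasionally settle in a poor local minimum; re-running the sketch usually fixes that.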