Artificial Intelligence for Big Data

Product type: Book
Published: May 2018
Publisher: Packt
ISBN-13: 9781788472173
Pages: 384
Edition: 1st
Authors (2): Anand Deshpande, Manish Kumar
Table of Contents (19 Chapters)

Title Page
Copyright and Credits
Packt Upsell
Contributors
Preface
1. Big Data and Artificial Intelligence Systems
2. Ontology for Big Data
3. Learning from Big Data
4. Neural Network for Big Data
5. Deep Big Data Analytics
6. Natural Language Processing
7. Fuzzy Systems
8. Genetic Programming
9. Swarm Intelligence
10. Reinforcement Learning
11. Cyber Security
12. Cognitive Computing
Other Books You May Enjoy
Index

Overfitting


As we have seen in the previous sections, gradient descent and backpropagation are iterative algorithms. One forward pass and the corresponding backward pass through all the training data is called an epoch. With each epoch, the model is trained and the weights are adjusted to minimize the error. To test the accuracy of the model, it is common practice to split the training data into a training set and a validation set.

The training set is used to generate the model, which represents a hypothesis based on the historical data relating the target variable to the independent, or input, variables. The validation set is used to test how well the hypothesis function, or trained model, generalizes to samples it did not see during training.
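The split described above can be sketched in a few lines of NumPy. The dataset, the 80/20 ratio, and the random seed here are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Hypothetical dataset: 100 samples, 3 input variables, one target value.
rng = np.random.default_rng(seed=42)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

# Shuffle the sample indices so the split is random, then hold out
# the last 20% of the shuffled samples as the validation set.
indices = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, val_idx = indices[:split], indices[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]

print(len(X_train), len(X_val))  # 80 20
```

Shuffling before splitting matters: if the data is ordered (for example, by time or by class), a naive head/tail split would give the model a validation set drawn from a different distribution than the training set.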

Across multiple epochs we typically observe the following pattern: 

Figure 4.17: Graph of overfitting model 

As we train our neural network through a number of epochs, the loss function error is optimized with every epoch and the cumulative...
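The pattern in Figure 4.17 can be made concrete by tracking both losses per epoch. The numbers below are made up for illustration (they are not from the book's experiment): training loss keeps falling, while validation loss bottoms out and then climbs, which is the signature of overfitting. A simple early-stopping heuristic picks the epoch where validation loss is lowest:

```python
# Illustrative per-epoch losses for an overfitting model (synthetic values).
train_loss = [0.90, 0.60, 0.42, 0.30, 0.22, 0.17, 0.13, 0.10]
val_loss   = [0.95, 0.70, 0.55, 0.48, 0.46, 0.47, 0.51, 0.58]

# Early stopping: keep the model from the epoch with the minimum
# validation loss; beyond that point the model is memorizing the
# training set rather than generalizing.
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
print(f"Stop after epoch {best_epoch + 1}; "
      f"validation loss = {val_loss[best_epoch]}")  # epoch 5, loss 0.46
```

In practice the same idea is usually implemented with a "patience" parameter, so training stops only after the validation loss has failed to improve for several consecutive epochs.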
