Scala Machine Learning Projects

Build real-world machine learning and deep learning projects with Scala

Product type: Paperback
Published: January 2018
Publisher: Packt
ISBN-13: 9781788479042
Length: 470 pages
Edition: 1st
Author: Md. Rezaul Karim
Table of Contents (13 chapters)

Preface
1. Analyzing Insurance Severity Claims
2. Analyzing and Predicting Telecommunication Churn
3. High Frequency Bitcoin Price Prediction from Historical and Live Data
4. Population-Scale Clustering and Ethnicity Prediction
5. Topic Modeling - A Better Insight into Large-Scale Texts
6. Developing Model-based Movie Recommendation Engines
7. Options Trading Using Q-learning and Scala Play Framework
8. Clients Subscription Assessment for Bank Telemarketing using Deep Neural Networks
9. Fraud Analytics Using Autoencoders and Anomaly Detection
10. Human Activity Recognition using Recurrent Neural Networks
11. Image Classification using Convolutional Neural Networks
12. Other Books You May Enjoy

Autoencoders and unsupervised learning


Autoencoders are artificial neural networks capable of learning efficient representations of the input data without any supervision (that is, the training set is unlabeled). The learned representation, called the coding, typically has a much lower dimensionality than the input data, making autoencoders useful for dimensionality reduction. More importantly, autoencoders act as powerful feature detectors, and they can be used for unsupervised pre-training of deep neural networks.
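To make the encoder/decoder idea concrete, here is a minimal sketch in plain Scala (no ML library). The object name, the sigmoid activation, and the layer dimensions are illustrative assumptions for this sketch, not the chapter's implementation; the point is only that the coding produced by encode lives in a smaller space than the input, and decode maps it back.

object AutoencoderShape {

  type Vec = Array[Double]
  type Mat = Array[Array[Double]] // each row holds the weights of one output unit

  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // One fully connected layer: h = sigmoid(W * x + b)
  def layer(w: Mat, b: Vec, x: Vec): Vec =
    w.zip(b).map { case (row, bias) =>
      sigmoid(row.zip(x).map { case (wi, xi) => wi * xi }.sum + bias)
    }

  // The encoder compresses the input into a coding whose dimensionality
  // (the number of rows of wEnc) is smaller than the input's, e.g. 784 -> 32.
  def encode(wEnc: Mat, bEnc: Vec, x: Vec): Vec = layer(wEnc, bEnc, x)

  // The decoder maps the coding back to a vector of the original input size.
  def decode(wDec: Mat, bDec: Vec, code: Vec): Vec = layer(wDec, bDec, code)
}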

Working principles of an autoencoder

An autoencoder is a network with three or more layers, where the input layer and the output layer have the same number of neurons and the intermediate (hidden) layers have fewer. The network is trained to reproduce at the output, for each input, the same pattern of activity presented at the input. The remarkable aspect of the problem is that, due to the lower number of neurons in the hidden layer, if the network can learn from examples and generalize to...
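The training signal that follows from this setup can be sketched as below, assuming a squared-error reconstruction loss (an assumption; the loss function is not specified at this point in the text). Because each example serves as its own target, no labels are needed, which is why the training is unsupervised: a training loop would adjust the encoder and decoder weights, for example by gradient descent, to make this mean error as small as possible.

object ReconstructionError {

  type Vec = Array[Double]

  // Squared distance between an input x and its reconstruction decode(encode(x)).
  def squaredError(x: Vec, xRec: Vec): Double =
    x.zip(xRec).map { case (xi, ri) => (xi - ri) * (xi - ri) }.sum

  // Average reconstruction error over an unlabeled dataset, for any
  // encode/decode pair such as the one sketched earlier.
  def meanReconstructionError(data: Seq[Vec],
                              encode: Vec => Vec,
                              decode: Vec => Vec): Double =
    data.map(x => squaredError(x, decode(encode(x)))).sum / data.size
}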
