Building Machine Learning Systems with Python
Explore machine learning and deep learning techniques for building intelligent systems using scikit-learn and TensorFlow

Product type: Paperback
Published: Jul 2018
ISBN-13: 9781788623223
Length: 406 pages
Edition: 3rd Edition
Authors (3): Luis Pedro Coelho, Willi Richert, Matthieu Brucher
Table of Contents (17 chapters)

Preface
1. Getting Started with Python Machine Learning
2. Classifying with Real-World Examples
3. Regression
4. Classification I – Detecting Poor Answers
5. Dimensionality Reduction
6. Clustering – Finding Related Posts
7. Recommendations
8. Artificial Neural Networks and Deep Learning
9. Classification II – Sentiment Analysis
10. Topic Modeling
11. Classification III – Music Genre Classification
12. Computer Vision
13. Reinforcement Learning
14. Bigger Data
15. Where to Learn More About Machine Learning
16. Other Books You May Enjoy

Autoencoders, or neural networks for dimensionality reduction

A little more than a decade ago, the main neural-network tool for dimensionality reduction was the Kohonen map, or self-organizing map (SOM): a network that maps data onto a discrete, low-dimensional embedded space. Since then, faster computers have made it possible to use deep learning to create embedded spaces instead.

The trick is to use an intermediate layer that has fewer nodes than the input layer, together with an output layer that must reproduce the input. The activations of this intermediate layer give us the coordinates of each sample in the embedded space.
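As a minimal NumPy sketch (an illustration of the idea, not code from the book), here are the shapes involved in such a bottleneck network's forward pass: the input is squeezed through a narrow intermediate layer whose activations are the embedded coordinates, and the output layer has the same width as the input so it can reproduce it. The sizes (10 input features, a 2-node bottleneck) are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 10, 2            # bottleneck is narrower than the input
X = rng.normal(size=(100, n_features))  # 100 samples

# Encoder and decoder weights (randomly initialized, untrained)
W_enc = rng.normal(size=(n_features, n_hidden))
W_dec = rng.normal(size=(n_hidden, n_features))

embedding = X @ W_enc               # coordinates in the embedded space
reconstruction = embedding @ W_dec  # output layer must reproduce the input

print(embedding.shape)       # (100, 2)
print(reconstruction.shape)  # (100, 10)
```

Training would then adjust `W_enc` and `W_dec` so that `reconstruction` matches `X` as closely as possible.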

If we use regular dense layers without a nonlinear activation function, we get a linear mapping from the input to the embedded layer and from there to the output layer. Stacking more layers before the embedded layer will not change what training can achieve and, as such, we get a linear...
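The linear case described above can be sketched in a few lines of NumPy (again an illustrative example, not code from the book): a linear autoencoder with a 2-node bottleneck, trained by plain gradient descent on the mean squared reconstruction error. All sizes, the learning rate, and the iteration count are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# 8-D data that lies near a lower-dimensional subspace
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 8))
X /= X.std()  # normalize scale so a small learning rate is stable

W_enc = 0.1 * rng.normal(size=(8, 2))  # encoder: input -> 2-D embedding
W_dec = 0.1 * rng.normal(size=(2, 8))  # decoder: embedding -> input
lr = 0.01

def loss(W_enc, W_dec):
    """Mean squared reconstruction error."""
    residual = X @ W_enc @ W_dec - X
    return np.mean(residual ** 2)

initial = loss(W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc          # embedded coordinates
    R = H @ W_dec - X      # reconstruction residual
    grad_dec = 2 * H.T @ R / len(X)
    grad_enc = 2 * X.T @ (R @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(W_enc, W_dec)  # lower than `initial` after training
```

Because every mapping here is linear, the trained network can do no better than projecting onto the best 2-D linear subspace of the data, which is why nonlinear activations are needed to go beyond what a linear method can recover.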
