Hands-On Machine Learning with TensorFlow.js

A guide to building ML applications integrated with web technology using the TensorFlow.js library

Product type: Paperback
Published: Nov 2019
Publisher: Packt
ISBN-13: 9781838821739
Length: 296 pages
Edition: 1st
Author: Kai Sasaki
Table of Contents (17)

Preface
1. Section 1: The Rationale of Machine Learning and the Usage of TensorFlow.js
2. Machine Learning for the Web
3. Importing Pretrained Models into TensorFlow.js
4. TensorFlow.js Ecosystem
5. Section 2: Real-World Applications of TensorFlow.js
6. Polynomial Regression
7. Classification with Logistic Regression
8. Unsupervised Learning
9. Sequential Data Analysis
10. Dimensionality Reduction
11. Solving the Markov Decision Process
12. Section 3: Productionizing Machine Learning Applications with TensorFlow.js
13. Deploying Machine Learning Applications
14. Tuning Applications to Achieve High Performance
15. Future Work Around TensorFlow.js
16. Other Books You May Enjoy

Generalizing K-means with the EM algorithm

The EM algorithm is a statistical algorithm for finding maximum likelihood parameters. Because it supports soft cluster assignment (it assumes the samples are generated by a mixture of Gaussian distributions, so a data point can be assigned to multiple clusters at the same time, each with some degree of confidence), the algorithm can be seen as a general version of K-means clustering. The mixture of Gaussian distributions that generates the data is a weighted sum of Gaussian density functions:

p(x) = Σₖ πₖ N(x | μₖ, Σₖ)
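To make the mixture density concrete, here is a minimal sketch in plain JavaScript. It assumes one-dimensional Gaussians (so each covariance matrix reduces to a single variance), and the function names `gaussianPdf` and `mixturePdf` are hypothetical, not from the chapter's code:

```javascript
// Density of a single 1-D Gaussian N(x | mu, sigma^2)
function gaussianPdf(x, mu, sigma) {
  const z = (x - mu) / sigma;
  return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
}

// Mixture density: p(x) = sum_k pi_k * N(x | mu_k, sigma_k^2)
// The weights pi_k must sum to 1 so that the mixture is a valid density.
function mixturePdf(x, weights, mus, sigmas) {
  return weights.reduce(
    (sum, w, k) => sum + w * gaussianPdf(x, mus[k], sigmas[k]),
    0
  );
}

// Example: two equally weighted components centered at -2 and 2
const p = mixturePdf(0, [0.5, 0.5], [-2, 2], [1, 1]);
console.log(p);
```

In the full multivariate case, μₖ becomes a vector and Σₖ a covariance matrix, but the weighted-sum structure is the same.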

Each Gaussian distribution N(x | μₖ, Σₖ) has μₖ as its mean and Σₖ as its covariance matrix. The mixing weight πₖ represents the weight of the kth distribution for a sample. Now, let's introduce a hidden variable to express the cluster assignment. Here, we will use z, a 1-of-K (one-hot) encoded vector: when a data point is assigned to the kth cluster, the kth element zₖ is 1, while the other...
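In the E-step of EM, the hidden assignment z is not fixed to a single cluster; instead, each point receives a posterior probability (a "responsibility") for every cluster. The sketch below illustrates this soft assignment for 1-D Gaussians; the helper names are hypothetical and the book's TensorFlow.js implementation differs:

```javascript
// Density of a single 1-D Gaussian N(x | mu, sigma^2)
function gaussianPdf(x, mu, sigma) {
  const z = (x - mu) / sigma;
  return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
}

// E-step: responsibility gamma_k = P(z_k = 1 | x)
//   = pi_k N(x | mu_k, sigma_k) / sum_j pi_j N(x | mu_j, sigma_j)
function responsibilities(x, weights, mus, sigmas) {
  const unnormalized = weights.map(
    (w, k) => w * gaussianPdf(x, mus[k], sigmas[k])
  );
  const total = unnormalized.reduce((a, b) => a + b, 0);
  return unnormalized.map((u) => u / total); // sums to 1 across clusters
}

// A point at x = 1.9 is mostly, but not entirely, assigned to the
// cluster centered at 2; K-means would make this a hard assignment.
console.log(responsibilities(1.9, [0.5, 0.5], [-2, 2], [1, 1]));
```

If the cluster variances shrink toward zero, each responsibility vector approaches a one-hot z, which is how K-means emerges as a limiting case.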
