Mastering Java Machine Learning

You're reading from Mastering Java Machine Learning: A Java developer's guide to implementing machine learning and big data architectures

Product type: Paperback
Published: Jul 2017
Publisher: Packt
ISBN-13: 9781785880513
Length: 556 pages
Edition: 1st Edition
Authors (2): Uday Kamath, Krishna Choppella
Table of Contents (13 chapters)

Preface
1. Machine Learning Review (Free Chapter)
2. Practical Approach to Real-World Supervised Learning
3. Unsupervised Machine Learning Techniques
4. Semi-Supervised and Active Learning
5. Real-Time Stream Machine Learning
6. Probabilistic Graph Modeling
7. Deep Learning
8. Text Mining and Natural Language Processing
9. Big Data Machine Learning – The Final Frontier
A. Linear Algebra
B. Probability
Index

Assumptions and mathematical notations

Many stream machine learning techniques make some key assumptions, which we state explicitly here:

  • The number of features in the data is fixed.
  • The data has small to medium dimensionality; that is, the number of features is typically in the hundreds.
  • The number of examples, or training data points, can be very large or effectively infinite, typically in the millions or billions.
  • The number of class labels (in supervised learning) or clusters is small and finite, typically fewer than 10.
  • Normally, there is an upper bound on memory; that is, we cannot fit all the data in memory, so learning from the data must take this into account, especially for lazy learners such as k-nearest neighbors.
  • Normally, there is an upper bound on the time taken to process an event or data point, typically a few milliseconds.
  • The patterns or distributions in the data can evolve over time.
  • Learning algorithms must converge to a solution in finite time.
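To make these constraints concrete, here is a minimal sketch (not from the book) of an incremental nearest-centroid classifier. It illustrates several of the assumptions above: the feature count is fixed, the number of classes is small, memory usage is O(classes × features) regardless of stream length because no examples are stored, and each update or prediction costs O(features) time. The class name `NearestCentroid` and its API are illustrative choices, not part of any library discussed in the chapter.

```java
// Illustrative sketch: an incremental nearest-centroid classifier that
// respects the streaming assumptions listed above. It keeps only per-class
// running sums and counts, never the stream itself.
public class NearestCentroid {
    private final double[][] sums;  // per-class feature sums
    private final long[] counts;    // per-class example counts
    private final int numFeatures;

    public NearestCentroid(int numClasses, int numFeatures) {
        this.sums = new double[numClasses][numFeatures];
        this.counts = new long[numClasses];
        this.numFeatures = numFeatures;
    }

    // One-pass update: O(numFeatures) time per example, bounded memory.
    public void update(double[] x, int label) {
        if (x.length != numFeatures) {
            throw new IllegalArgumentException("feature count is fixed");
        }
        for (int j = 0; j < numFeatures; j++) {
            sums[label][j] += x[j];
        }
        counts[label]++;
    }

    // Predict the class whose centroid (mean of seen examples) is
    // closest to x in squared Euclidean distance.
    public int predict(double[] x) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int c = 0; c < counts.length; c++) {
            if (counts[c] == 0) continue;  // no examples for this class yet
            double d = 0.0;
            for (int j = 0; j < numFeatures; j++) {
                double diff = x[j] - sums[c][j] / counts[c];
                d += diff * diff;
            }
            if (d < bestDist) {
                bestDist = d;
                best = c;
            }
        }
        return best;
    }
}
```

Note that this sketch does not handle evolving distributions: because the sums accumulate forever, old data never ages out. A drift-aware variant would replace the running sums with, say, exponentially decayed statistics or a sliding window.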

Let Dt = {xi, yi : y = f(x)} be the given data available...
