Mastering Machine Learning with R, Second Edition


Product type Book
Published in Apr 2017
Publisher Packt
ISBN-13 9781787287471
Pages 420 pages
Edition 2nd Edition

Table of Contents (23 chapters)

Title Page
Credits
About the Author
About the Reviewers
Packt Upsell
Customer Feedback
Preface
1. A Process for Success
2. Linear Regression - The Blocking and Tackling of Machine Learning
3. Logistic Regression and Discriminant Analysis
4. Advanced Feature Selection in Linear Models
5. More Classification Techniques - K-Nearest Neighbors and Support Vector Machines
6. Classification and Regression Trees
7. Neural Networks and Deep Learning
8. Cluster Analysis
9. Principal Components Analysis
10. Market Basket Analysis, Recommendation Engines, and Sequential Analysis
11. Creating Ensembles and Multiclass Classification
12. Time Series and Causality
13. Text Mining
14. R on the Cloud
15. R Fundamentals
16. Sources

Univariate linear regression


We begin by looking at a simple way to predict a quantitative response, Y, from a single predictor variable, x, assuming that Y has a linear relationship with x. The model can be written as Y = B0 + B1x + e: the expected value of Y is the intercept B0 plus the slope B1 times x, plus an error term e. The least squares approach chooses the parameter estimates that minimize the Residual Sum of Squares (RSS), that is, the sum of the squared differences between the predicted and actual values of Y. For a simple example, suppose the actual values Y1 and Y2 are 10 and 20 respectively, and the predictions y1 and y2 are 12 and 18. To calculate RSS, we add the squared differences, RSS = (Y1 - y1)^2 + (Y2 - y2)^2, which with simple substitution yields (10 - 12)^2 + (20 - 18)^2 = 4 + 4 = 8.
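The RSS arithmetic above can be sketched directly in R; as a minimal illustration (the toy x/y data in the second half is made up here, not from the book), base R's lm() then performs the same least squares fit in practice:

```r
# Actual responses and the predictions from the example above
Y    <- c(10, 20)   # actual values Y1, Y2
yhat <- c(12, 18)   # predicted values y1, y2

# Residual Sum of Squares: sum of squared prediction errors
rss <- sum((Y - yhat)^2)
rss  # 8

# Fitting a univariate linear model by least squares with lm();
# the toy data roughly follow y = 2x
x   <- c(1, 2, 3, 4, 5)
y   <- c(2.1, 3.9, 6.2, 7.8, 10.1)
fit <- lm(y ~ x)
coef(fit)          # estimated intercept (B0) and slope (B1)
sum(resid(fit)^2)  # RSS of the fitted model
```

lm() chooses exactly the intercept and slope that make sum(resid(fit)^2) as small as possible, which is the least squares criterion described above.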

I once remarked to a peer during our Lean Six Sigma Black Belt training that it's all about the sum of squares; understand the sum of squares and the...
