R Bioinformatics Cookbook - Second Edition
Product type: Book
Published: Oct 2023
Publisher: Packt
ISBN-13: 9781837634279
Pages: 396
Edition: 2nd
Author: Dan MacLean

Table of Contents (16 chapters)

Preface
Chapter 1: Setting Up Your R Bioinformatics Working Environment
Chapter 2: Loading, Tidying, and Cleaning Data in the tidyverse
Chapter 3: ggplot2 and Extensions for Publication Quality Plots
Chapter 4: Using Quarto to Make Data-Rich Reports, Presentations, and Websites
Chapter 5: Easily Performing Statistical Tests Using Linear Models
Chapter 6: Performing Quantitative RNA-seq
Chapter 7: Finding Genetic Variants with HTS Data
Chapter 8: Searching Gene and Protein Sequences for Domains and Motifs
Chapter 9: Phylogenetic Analysis and Visualization
Chapter 10: Analyzing Gene Annotations
Chapter 11: Machine Learning with mlr3
Chapter 12: Functional Programming with purrr and base R
Chapter 13: Turbo-Charging Development in R with ChatGPT
Index
Other Books You May Enjoy

Classifying using random forest and interpreting it with iml

Random forest is a versatile machine learning algorithm that can be used for both regression and classification tasks. It is an ensemble method: it combines the predictions of many decision trees, each of which splits the data on feature values to create subsets with similar target values, into a single model that is more robust and accurate than any individual tree. To keep the trees diverse, the algorithm trains each tree on a bootstrap sample of the training data (a random sample drawn with replacement) and considers only a random subset of features at each split. Because each tree sees a slightly different variation of the data, the ensemble is less prone to overfitting than a single decision tree.
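As a minimal sketch of these ideas (not the book's own recipe, which uses mlr3), a random forest classifier can be fitted with the randomForest package on R's built-in iris data; the ntree and mtry arguments control the number of bootstrapped trees and the number of features tried at each split:

```r
# Minimal random forest classification sketch using the randomForest
# package and the built-in iris dataset (illustrative, not the book's code).
library(randomForest)

set.seed(42)  # bootstrapping is random, so fix the seed for reproducibility

# Hold out a test set to check generalization
train_idx <- sample(nrow(iris), 0.7 * nrow(iris))
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

# ntree = number of bootstrapped trees; mtry = features tried at each split
rf <- randomForest(Species ~ ., data = train, ntree = 500, mtry = 2)

# The out-of-bag (OOB) error estimate comes free from the bootstrapping
print(rf)

# Predict on the held-out data and cross-tabulate against the truth
preds <- predict(rf, newdata = test)
table(predicted = preds, actual = test$Species)
```

The out-of-bag error printed by `print(rf)` is itself a by-product of bootstrapping: each tree is evaluated on the observations left out of its own bootstrap sample.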

Random forest assesses feature (variable) importance by evaluating how much each feature contributes to reducing error in the...
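One common way to compute such importances, in line with the recipe's title, is permutation feature importance via the iml package: each feature is shuffled in turn and the resulting drop in model performance is measured. A hedged sketch (the Predictor and FeatureImp classes are iml's API; the randomForest model and iris data are illustrative assumptions):

```r
# Sketch: interpreting a fitted random forest with iml's permutation
# feature importance (illustrative; the book's recipe may differ).
library(randomForest)
library(iml)

set.seed(42)
rf <- randomForest(Species ~ ., data = iris, ntree = 500)

# Wrap the model, the feature columns, and the target in an iml Predictor
predictor <- Predictor$new(rf, data = iris[, -5], y = iris$Species)

# Permutation importance, using classification error ("ce") as the loss
imp <- FeatureImp$new(predictor, loss = "ce")

print(imp$results)  # importance score per feature
plot(imp)           # ranked importance plot
```

Features whose permutation causes a large increase in classification error are the ones the model relies on most.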
