ROS Robotics Projects: Build and control robots powered by the Robot Operating System, machine learning, and virtual reality

Product type: Paperback
Published: Dec 2019
Publisher: Packt
ISBN-13: 9781838649326
Length: 456 pages
Edition: 2nd Edition

Author: Ramkumar Gandhinathan

Table of Contents (14 chapters)

Preface
1. Getting Started with ROS
2. Introduction to ROS-2 and Its Capabilities
3. Building an Industrial Mobile Manipulator
4. Handling Complex Robot Tasks Using State Machines
5. Building an Industrial Application
6. Multi-Robot Collaboration
7. ROS on Embedded Platforms and Their Control
8. Reinforcement Learning and Robotics
9. Deep Learning Using ROS and TensorFlow
10. Creating a Self-Driving Car Using ROS
11. Teleoperating Robots Using a VR Headset and Leap Motion
12. Face Detection and Tracking Using ROS, OpenCV, and Dynamixel Servos
13. Other Books You May Enjoy

Reinforcement learning algorithms

MDP models can be solved in many ways. One method is Monte Carlo prediction, which estimates value functions and provides control methods for further optimizing those value functions. This method applies only to time-bound, or episodic, tasks. Its drawback is that when the environment is large or the episodes are long, optimizing the value functions also takes a long time. However, we won't be discussing Monte Carlo methods in this section. Instead, we shall look at a more interesting model-free learning technique that combines Monte Carlo methods and dynamic programming: temporal difference (TD) learning. TD learning can be applied to non-episodic tasks as well and doesn't need any model information to be known in advance. Let's...
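The core of temporal difference learning is the TD(0) update, V(s) ← V(s) + α[r + γV(s') − V(s)], which nudges the value estimate of the current state toward the observed reward plus the discounted value estimate of the next state. As a rough illustration (not tied to any particular ROS setup), the following minimal Python sketch applies this update to a toy five-state random walk under a random policy; the environment, step size, discount factor, and episode count are illustrative assumptions.

```python
import random

# Toy 5-state random walk: states 0..4, with terminals at both ends.
# A reward of 1 is given only when the right terminal (state 4) is reached.
N_STATES = 5                     # illustrative environment size
LEFT_TERMINAL = 0
RIGHT_TERMINAL = N_STATES - 1

ALPHA = 0.1                      # step size (learning rate), assumed value
GAMMA = 1.0                      # discount factor, assumed value
N_EPISODES = 1000                # assumed training length

# Tabular state-value estimates, initialized to zero.
V = [0.0] * N_STATES

for _ in range(N_EPISODES):
    state = N_STATES // 2        # start every episode in the middle state
    while state not in (LEFT_TERMINAL, RIGHT_TERMINAL):
        # Random policy: move left or right with equal probability.
        next_state = state + random.choice((-1, 1))
        reward = 1.0 if next_state == RIGHT_TERMINAL else 0.0

        # TD(0) update: bootstrap from the current estimate of the next state
        # instead of waiting for the full episode return.
        td_target = reward + GAMMA * V[next_state]
        V[state] += ALPHA * (td_target - V[state])

        state = next_state

print("Estimated state values:", [round(v, 2) for v in V])
```

Monte Carlo prediction would wait until an episode terminates and move V(state) toward the full observed return; the TD(0) update above bootstraps from the current estimate of the next state after every transition, which is why it also works on continuing (non-episodic) tasks and needs no model of the environment.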
