Artificial Intelligence for Robotics
Build intelligent robots using ROS 2, Python, OpenCV, and AI/ML techniques for real-world tasks

Product type: Paperback
Published: March 2024
Publisher: Packt
ISBN-13: 9781805129592
Length: 344 pages
Edition: 2nd Edition
Author: Francis X. Govers III
Table of Contents

Preface
Part 1: Building Blocks for Robotics and Artificial Intelligence
Chapter 1: The Foundation of Robotics and Artificial Intelligence
Chapter 2: Setting Up Your Robot
Chapter 3: Conceptualizing the Practical Robot Design Process
Part 2: Adding Perception, Learning, and Interaction to Robotics
Chapter 4: Recognizing Objects Using Neural Networks and Supervised Learning
Chapter 5: Picking Up and Putting Away Toys using Reinforcement Learning and Genetic Algorithms
Chapter 6: Teaching a Robot to Listen
Part 3: Advanced Concepts – Navigation, Manipulation, Emotions, and More
Chapter 7: Teaching the Robot to Navigate and Avoid Stairs
Chapter 8: Putting Things Away
Chapter 9: Giving the Robot an Artificial Personality
Chapter 10: Conclusions and Reflections
Answers
Index
Other Books You May Enjoy
Appendix

Summary

Our task for this chapter was to use machine learning to teach the robot how to use its arm. We used two techniques, each with some variations. The first was a form of reinforcement learning, Q-learning, which develops a movement path by selecting individual actions based on the state of the robot's arm. Each motion was scored individually as a reward, and as part of the overall path as a value. The process stored the results of the learning in a Q-matrix that could then be used to generate a path. We improved our first cut of the reinforcement learning program by encoding the 27 possible combinations of motor commands as index numbers from 0 to 26, and likewise indexing the robot's state into a state lookup table. This resulted in a 40x speedup of the learning process. Even so, our Q-learning approach struggled with the large number of states the robot arm could be in.
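To make the mechanics concrete, here is a minimal sketch (not the book's code) of Q-learning with this kind of encoding: the 27 motor-step combinations become action indices 0 to 26, a lookup table assigns each discretized arm pose a row number, and the Q-matrix is updated from the per-motion reward plus the discounted value of the best next action. The pose discretization, hyperparameters, and helper names are illustrative assumptions.

```python
import random
from collections import defaultdict
from itertools import product

# 3 motors, each stepped -1, 0, or +1 per action: 3**3 = 27 encoded actions (indices 0..26)
ACTIONS = list(product((-1, 0, 1), repeat=3))

# State lookup table: discretized arm pose -> row id in the Q-matrix
state_index = {}

def state_id(pose):
    """Return the table row for a discretized arm pose, adding it on first sight."""
    key = tuple(pose)
    if key not in state_index:
        state_index[key] = len(state_index)
    return state_index[key]

# The Q-matrix: one row per known state, one column per encoded action
q_table = defaultdict(lambda: [0.0] * len(ACTIONS))

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration (assumed values)

def choose_action(s):
    """Epsilon-greedy choice over the 27 encoded actions."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    row = q_table[s]
    return row.index(max(row))

def update(s, a, reward, s_next):
    """Q-learning update: the immediate reward scores the single motion;
    the discounted best entry of the next row carries the value of the overall path."""
    best_next = max(q_table[s_next])
    q_table[s][a] += alpha * (reward + gamma * best_next - q_table[s][a])
```

A table trained this way can then be read out greedily, state by state, to generate a movement path for the arm.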

Our second technique was a genetic algorithm (GA). We created individual random paths to make a...
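As a rough illustration of the GA idea, the sketch below evolves a population of random paths built from the same 0 to 26 action encoding. The path length, population size, genetic operators, and the evaluate_path fitness function are assumptions for illustration, not the book's implementation.

```python
import random

N_ACTIONS, PATH_LEN = 27, 20        # same 0..26 action encoding; path length is an assumption
POP_SIZE, GENERATIONS = 50, 100

def random_path():
    """An individual: a fixed-length list of encoded actions chosen at random."""
    return [random.randrange(N_ACTIONS) for _ in range(PATH_LEN)]

def evaluate_path(path):
    """Hypothetical fitness: in practice this would simulate the arm following
    the path and score how close it ends up to the goal pose."""
    return -sum(abs(a - 13) for a in path)   # toy objective for illustration only

def crossover(parent_a, parent_b):
    """Single-point crossover between two parent paths."""
    cut = random.randrange(1, PATH_LEN)
    return parent_a[:cut] + parent_b[cut:]

def mutate(path, rate=0.05):
    """Randomly replace a small fraction of actions."""
    return [random.randrange(N_ACTIONS) if random.random() < rate else a for a in path]

population = [random_path() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=evaluate_path, reverse=True)
    parents = ranked[:POP_SIZE // 2]         # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=evaluate_path)
```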
