TensorFlow 2 Reinforcement Learning Cookbook

Product type: Book
Published: January 2021
Publisher: Packt
ISBN-13: 9781838982546
Pages: 472
Edition: 1st
Author: Praveen Palanisamy

Table of Contents (11 chapters)

Preface
Chapter 1: Developing Building Blocks for Deep Reinforcement Learning Using TensorFlow 2.x
Chapter 2: Implementing Value-Based, Policy-Based, and Actor-Critic Deep RL Algorithms
Chapter 3: Implementing Advanced RL Algorithms
Chapter 4: Reinforcement Learning in the Real World – Building Cryptocurrency Trading Agents
Chapter 5: Reinforcement Learning in the Real World – Building Stock/Share Trading Agents
Chapter 6: Reinforcement Learning in the Real World – Building Intelligent Agents to Complete Your To-Dos
Chapter 7: Deploying Deep RL Agents to the Cloud
Chapter 8: Distributed Training for Accelerated Development of Deep RL Agents
Chapter 9: Deploying Deep RL Agents on Multiple Platforms
Other Books You May Enjoy

What this book covers

Chapter 1, Developing Building Blocks for Deep Reinforcement Learning Using TensorFlow 2.x, provides recipes for getting started with RL environments, deep neural network-based RL agents, evolutionary neural agents, and other building blocks for both discrete and continuous action-space RL applications.
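To give a flavor of these building blocks, the following is a minimal sketch of the agent-environment interaction loop with a random policy, assuming the OpenAI Gym package and its classic (pre-0.26) API:

```python
import gym  # assumes the classic Gym API; newer versions return (obs, info) from reset()

env = gym.make("CartPole-v1")   # a simple discrete-action environment
obs = env.reset()
done, episode_return = False, 0.0

while not done:
    action = env.action_space.sample()          # placeholder for a learned policy
    obs, reward, done, info = env.step(action)  # advance the environment by one step
    episode_return += reward

print(f"Episode return: {episode_return}")
env.close()
```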

Chapter 2, Implementing Value-Based, Policy-Based, and Actor-Critic Deep RL Algorithms, includes recipes for implementing value iteration-based learning agents and breaks down the implementation of several foundational RL algorithms, such as Monte Carlo control, SARSA, Q-learning, actor-critic, and policy gradient algorithms, into simple steps.
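For instance, the core of tabular Q-learning (one of the algorithms broken down in this chapter) is a one-line value update; here is a minimal NumPy sketch, with table sizes and hyperparameter values chosen only for illustration:

```python
import numpy as np

n_states, n_actions = 16, 4          # illustrative sizes
alpha, gamma = 0.1, 0.99             # learning rate and discount factor (example values)
Q = np.zeros((n_states, n_actions))  # tabular action-value estimates

def q_learning_update(state, action, reward, next_state, done):
    """Apply the Q-learning TD update: Q(s,a) <- Q(s,a) + alpha * (target - Q(s,a))."""
    target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
    Q[state, action] += alpha * (target - Q[state, action])
```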

Chapter 3, Implementing Advanced RL Algorithms, provides concise recipes to implement complete agent training systems using Deep Q-Network (DQN), Double and Dueling Deep Q-Network (DDQN, DDDQN), Deep Recurrent Q-Network (DRQN), Asynchronous Advantage Actor-Critic (A3C), Proximal Policy Optimization (PPO), and Deep Deterministic Policy Gradient (DDPG) RL algorithms.
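As one example of what these recipes involve, the TD target used by DQN-family agents can be computed in TensorFlow 2.x roughly as follows (a sketch, assuming target_q_net is a Keras model that outputs one Q-value per action and that rewards and dones are float tensors of shape (batch,)):

```python
import tensorflow as tf

gamma = 0.99  # discount factor (example value)

def dqn_targets(target_q_net, rewards, next_states, dones):
    """Compute r + gamma * max_a' Q_target(s', a') for non-terminal transitions."""
    next_q = target_q_net(next_states)                   # shape: (batch, n_actions)
    max_next_q = tf.reduce_max(next_q, axis=1)           # greedy bootstrap value
    return rewards + gamma * (1.0 - dones) * max_next_q  # dones is 0.0/1.0 per sample
```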

Chapter 4, Reinforcement Learning in the Real World – Building Cryptocurrency Trading Agents, shows how to implement and train a soft actor-critic agent in custom RL environments for bitcoin and ether trading, using real market data from trading exchanges such as Gemini, with both tabular and visual (image) observations and both discrete and continuous action spaces.
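The custom trading environments follow the standard Gym interface; a bare-bones skeleton (hypothetical class and attribute names, classic Gym API, reward logic omitted) looks like this:

```python
import gym
import numpy as np

class CryptoTradingEnv(gym.Env):
    """Minimal skeleton of a custom trading environment (illustrative only)."""

    def __init__(self, prices):
        self.prices = np.asarray(prices, dtype=np.float32)  # e.g., historical BTC/USD prices
        self.action_space = gym.spaces.Discrete(3)           # 0: hold, 1: buy, 2: sell
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(10,), dtype=np.float32)  # last 10 prices
        self.t = 0

    def reset(self):
        self.t = 10
        return self.prices[self.t - 10:self.t]

    def step(self, action):
        reward = 0.0  # a real implementation would compute profit/loss here
        self.t += 1
        done = self.t >= len(self.prices)
        obs = self.prices[self.t - 10:self.t]
        return obs, reward, done, {}
```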

Chapter 5, Reinforcement Learning in the Real World – Building Stock/Share Trading Agents, covers how to train advanced RL agents to trade for profit in the stock market, using visual price charts and/or tabular ticker data, in custom RL environments powered by real stock exchange data.
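For example, a tabular observation for such an agent might be a normalized window of recent OHLCV rows; a minimal pandas/NumPy sketch (the column names are illustrative, not the book's exact schema):

```python
import numpy as np
import pandas as pd

def make_observation(df: pd.DataFrame, t: int, window: int = 20) -> np.ndarray:
    """Return the last `window` OHLCV rows up to time t, scaled for the agent."""
    cols = ["Open", "High", "Low", "Close", "Volume"]   # illustrative column names
    frame = df[cols].iloc[t - window:t].to_numpy(dtype=np.float32)
    frame[:, :4] /= frame[-1, 3]                        # normalize prices by the latest close
    frame[:, 4] /= frame[:, 4].max() + 1e-8             # scale volume to roughly [0, 1]
    return frame.flatten()                              # shape: (window * 5,)
```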

Chapter 6, Reinforcement Learning in the Real World – Building Intelligent Agents to Complete Your To-Dos, provides recipes to build, train, and test vision-based RL agents that complete tasks on the web for you, automating chores such as clicking through pop-up/confirmation dialogs, logging in to websites, finding and booking the cheapest flight tickets for your travel, decluttering your email inbox, and liking/sharing/retweeting posts on social media sites to engage with your followers.
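Vision-based web agents act on screenshots of the page; a typical preprocessing step (a sketch using TensorFlow's image ops, with the target size chosen only for illustration) downscales and grayscales the raw pixels before feeding them to the policy network:

```python
import tensorflow as tf

def preprocess_screenshot(rgb_pixels, height=64, width=64):
    """Convert a raw RGB screenshot (H, W, 3) into a small grayscale observation."""
    img = tf.convert_to_tensor(rgb_pixels, dtype=tf.float32) / 255.0  # scale to [0, 1]
    img = tf.image.rgb_to_grayscale(img)                              # (H, W, 1)
    img = tf.image.resize(img, (height, width))                       # downscale for the CNN policy
    return img
```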

Chapter 7, Deploying Deep RL Agents to the Cloud, contains recipes that equip you with the tools and details needed to get ahead of the curve and build cloud-based Simulation-as-a-Service and Agent/Bot-as-a-Service programs using deep RL. Learn how to train RL agents using remote simulators running on the cloud, package the runtime components of RL agents, and deploy deep RL agents to the cloud, building your own trading Bot-as-a-Service along the way.
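To give a flavor of what Agent-as-a-Service means in practice, a trained policy can be wrapped behind a small HTTP endpoint; the following Flask sketch (hypothetical endpoint, model path, and request format, not the book's exact code) illustrates the idea:

```python
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
policy = tf.keras.models.load_model("trained_policy")  # hypothetical path to a saved agent model

@app.route("/act", methods=["POST"])
def act():
    """Return the agent's action for the observation sent by the client."""
    obs = np.asarray(request.get_json()["observation"], dtype=np.float32)
    logits = policy(obs[None, ...])                 # add a batch dimension
    action = int(tf.argmax(logits, axis=-1)[0])
    return jsonify({"action": action})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```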

Chapter 8, Distributed Training for Accelerated Development of Deep RL Agents, contains recipes to speed up deep RL agent development using distributed training of deep neural network models, leveraging TensorFlow 2.x's capabilities. Learn how to utilize multiple CPUs and GPUs, both on a single machine and on a cluster of machines, to scale up/out your deep RL agent training, and how to leverage Ray, Tune, and RLlib for large-scale accelerated training.
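For single-machine, multi-GPU training, TensorFlow 2.x's tf.distribute API does much of the heavy lifting; a minimal sketch (the network shape is illustrative) wraps model creation in a MirroredStrategy scope:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates the model across available GPUs
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Any variables (e.g., the policy or Q-network) created here are mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(4),            # e.g., one output per discrete action
    ])
    optimizer = tf.keras.optimizers.Adam(1e-3)
```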

Chapter 9, Deploying Deep RL Agents on Multiple Platforms, provides customizable templates that you can utilize for building and deploying your own deep RL applications for your use cases. Learn how to export RL agent models for serving/deployment in various production-ready formats, such as TensorFlow Lite, TensorFlow.js, and ONNX, and how to leverage NVIDIA Triton or build your own solution to launch production-ready, RL-based AI services. You will also deploy an RL agent in mobile and web apps and learn how to deploy RL bots in your Node.js applications.
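Exporting a trained agent for on-device inference comes down to a short sequence of calls; for instance, converting a Keras policy model to TensorFlow Lite (a sketch, with a hypothetical saved-model path standing in for your trained agent) looks like this:

```python
import tensorflow as tf

policy_model = tf.keras.models.load_model("trained_policy")  # hypothetical saved model path

converter = tf.lite.TFLiteConverter.from_keras_model(policy_model)
tflite_bytes = converter.convert()                            # serialized FlatBuffer model

with open("policy.tflite", "wb") as f:
    f.write(tflite_bytes)                                     # ready for the TF Lite runtime
```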
