O'Reilly - Hands-On Reinforcement Learning with Python
by Rudy Lai | Released March 2018 | ISBN: 9781788392402
A practical tour of prediction and control in Reinforcement Learning using OpenAI Gym, Python, and TensorFlow

About This Video
- Learn how to solve Reinforcement Learning problems with a variety of strategies.
- Use Python, TensorFlow, NumPy, and OpenAI Gym to understand Reinforcement Learning theory.
- Take a fast-paced approach to learning RL concepts, frameworks, and algorithms, and implement models using Reinforcement Learning.

In Detail
Reinforcement Learning (RL) is hot! This branch of machine learning powers AlphaGo and DeepMind's Atari AI. It allows programmers to create software agents that learn to take optimal actions to maximize reward by trying out different strategies in a given environment.

This course takes you through all the core concepts in Reinforcement Learning, transforming a theoretical subject into tangible Python coding exercises with the help of OpenAI Gym. The videos first guide you through the gym environment, solving the CartPole-v0 toy robotics problem, before moving on to coding up and solving a multi-armed bandit problem in Python. As the course ramps up, it shows you how to use dynamic programming and TensorFlow-based neural networks to solve GridWorld, another OpenAI Gym challenge. Lastly, we take on the Blackjack challenge and deploy model-free algorithms that leverage Monte Carlo methods and Temporal Difference (TD, more specifically SARSA) techniques.

The scope of Reinforcement Learning applications outside toy examples is immense. Reinforcement Learning can optimize agricultural yield in IoT-powered greenhouses and reduce power consumption in data centers. Demand has grown to the point where its applications range from controlling robots to extracting insights from images and natural-language data.
By the end of this course, you will not only be able to solve these specific problems but will also be able to use Reinforcement Learning as a general problem-solving strategy, choosing among its algorithms as each problem demands.

All the code and supporting files for this course are available on GitHub at https://github.com/PacktPublishing/Hands-On-Reinforcement-Learning-with-Python
- Chapter 1 : Getting Started With Reinforcement Learning Using OpenAI Gym
- The Course Overview 00:03:47
- Understanding Reinforcement Learning Algorithms 00:08:53
- Installing and Setting Up OpenAI Gym 00:03:08
- Running a Visualization of the Cart Robot CartPole-v0 in OpenAI Gym 00:07:29
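The loop these first videos build around is gym's reset()/step() interface. A minimal sketch of that loop, using a hypothetical stand-in environment (`ToyCartPole`, with trivial dynamics) so it runs without gym installed:

```python
import random

class ToyCartPole:
    """Stand-in for gym's CartPole-v0: same reset()/step() shape,
    but with trivial dynamics (hypothetical, for illustration only)."""
    def __init__(self):
        self.steps = 0

    def reset(self):
        self.steps = 0
        return [0.0, 0.0, 0.0, 0.0]  # cart position/velocity, pole angle/velocity

    def step(self, action):
        assert action in (0, 1)       # 0 = push left, 1 = push right
        self.steps += 1
        obs = [0.0, 0.0, 0.0, 0.0]
        reward = 1.0                  # +1 per step survived, as in CartPole-v0
        done = self.steps >= 20       # this toy episode ends after 20 steps
        return obs, reward, done, {}

def run_episode(env):
    """The canonical gym loop: reset, then step until done."""
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = random.choice([0, 1])            # random policy
        obs, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward

print(run_episode(ToyCartPole()))  # 20.0 with this toy env
```

Swapping `ToyCartPole()` for `gym.make("CartPole-v0")` leaves `run_episode` unchanged, which is the point of the shared interface.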
- Chapter 2 : Lights, Camera, Action – Building Blocks of Reinforcement Learning
- Exploring the Possible Actions of Your CartPole Robot in OpenAI Gym 00:11:46
- Understanding the Environment of CartPole in OpenAI Gym 00:03:28
- Coding up Your First Solution to CartPole-v0 00:15:42
- Chapter 3 : The Multi-Armed Bandit
- Creating a Bandit with 4 Arms Using Python and NumPy 00:09:12
- Creating an Agent to Solve the MAB Problem Using Python and TensorFlow 00:10:11
- Training the Agent, and Understanding What It Learned 00:06:23
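The chapter's agent can be sketched without TensorFlow as a plain epsilon-greedy value learner; the arm payout probabilities below are hypothetical stand-ins for the course's 4-armed bandit:

```python
import random

random.seed(0)

# A 4-armed bandit: each arm pays out 1 with its own hidden probability.
ARM_PROBS = [0.2, 0.4, 0.6, 0.8]   # hypothetical payout rates

def pull(arm):
    return 1.0 if random.random() < ARM_PROBS[arm] else 0.0

def train(episodes=5000, epsilon=0.1):
    """Epsilon-greedy value estimation: explore with probability epsilon,
    otherwise pull the arm with the highest running-average reward."""
    values = [0.0] * 4
    counts = [0] * 4
    for _ in range(episodes):
        if random.random() < epsilon:
            arm = random.randrange(4)                     # explore
        else:
            arm = max(range(4), key=lambda a: values[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

values, counts = train()
print([round(v, 2) for v in values])  # arm 3's estimate should end up highest
```

"Understanding what it learned" here means inspecting `values` and `counts`: the estimates approach the hidden payout rates, and the pull counts concentrate on the best arm.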
- Chapter 4 : The Contextual Bandit
- Creating an Environment with Multiple Bandits Using Python and NumPy 00:10:24
- Creating Your First Policy Gradient Based RL Agent with TensorFlow 00:08:42
- Training the Agent, and Understanding What It Learned 00:09:14
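The policy-gradient idea from this chapter, reduced to a minimal pure-Python sketch: a gradient-bandit/REINFORCE-style update with a running-average baseline. The payout table is hypothetical, and the course's version uses a TensorFlow network rather than a per-state preference table:

```python
import math
import random

random.seed(1)

# Three "bandits" (contexts); each has 4 arms with its own best arm.
PAYOUT = [  # hypothetical probability of reward 1 per (state, arm)
    [0.1, 0.9, 0.2, 0.1],
    [0.8, 0.1, 0.1, 0.2],
    [0.1, 0.2, 0.1, 0.9],
]

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def train(steps=30000, lr=0.1):
    """Policy-gradient update per state: grad of log pi(a|s) with respect
    to preference k is (1[k == a] - pi(k|s)); scale by (reward - baseline)."""
    prefs = [[0.0] * 4 for _ in range(3)]
    avg_r = [0.0] * 3      # running-average reward baseline per state
    n = [0] * 3
    for _ in range(steps):
        s = random.randrange(3)                       # a random context appears
        pi = softmax(prefs[s])
        a = random.choices(range(4), weights=pi)[0]   # sample an action
        r = 1.0 if random.random() < PAYOUT[s][a] else 0.0
        n[s] += 1
        avg_r[s] += (r - avg_r[s]) / n[s]
        for k in range(4):                            # gradient step on prefs
            grad = (1.0 if k == a else 0.0) - pi[k]
            prefs[s][k] += lr * (r - avg_r[s]) * grad
    return prefs

prefs = train()
best = [max(range(4), key=lambda a: prefs[s][a]) for s in range(3)]
print(best)  # index of each state's preferred arm
```

The baseline subtraction is what keeps the agent from locking onto an early lucky arm; the TensorFlow agent in the videos optimizes the same log-probability objective with automatic differentiation.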
- Chapter 5 : Dynamic Programming – Prediction, Control, and Value Approximation
- Visualizing Dynamic Programming in GridWorld in Your Browser 00:11:40
- Understanding Prediction Through Building a Policy Evaluation Algorithm 00:11:07
- Understanding Control Through Building a Policy Iteration Algorithm 00:11:07
- Building a Value Iteration Algorithm 00:09:45
- Linking It All Together in the Web-Based GridWorld Visualization 00:05:49
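Value iteration, the last of the three algorithms above, can be sketched on a tiny stand-alone GridWorld (layout and rewards hypothetical, chosen to mirror the classic -1-per-move grid):

```python
# Value iteration on a 4x4 GridWorld: states are (row, col), the top-left
# corner is terminal, every move costs -1, and bumping a wall keeps you put.
SIZE = 4
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state
    nr = min(max(r + action[0], 0), SIZE - 1)
    nc = min(max(c + action[1], 0), SIZE - 1)
    return (nr, nc)

def value_iteration(gamma=1.0, theta=1e-6):
    V = {(r, c): 0.0 for r in range(SIZE) for c in range(SIZE)}
    terminal = (0, 0)
    while True:
        delta = 0.0
        for s in V:
            if s == terminal:
                continue
            # Bellman optimality backup: value of the best action
            best = max(-1.0 + gamma * V[step(s, a)] for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V

V = value_iteration()
print(V[(3, 3)])  # -6.0: minus the shortest path length from the far corner
```

Policy evaluation and policy iteration differ only in the backup: evaluation averages over a fixed policy's actions instead of taking the max, and policy iteration alternates evaluation with greedy improvement.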
- Chapter 6 : Markov Decision Processes and Neural Networks
- Understanding Markov Decision Process and Dynamic Programming in CartPole-v0 00:07:00
- Crafting a Neural Network Using TensorFlow 00:09:33
- Crafting a Neural Network to Predict the Value of Being in Different Environment States 00:08:22
- Training the Agent in CartPole-v0 00:11:45
- Visualizing and Understanding How Your Software Agent Has Performed 00:06:14
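In place of the chapter's TensorFlow network, the same prediction idea can be sketched with the smallest possible function approximator: a one-hot linear model trained by TD(0) on a random walk (the environment here is a hypothetical stand-in, not CartPole):

```python
import random

random.seed(2)

# TD(0) prediction with a linear value approximator on a random walk:
# states 0..6, start at 3, episode ends at 0 (reward 0) or 6 (reward 1).
N = 7

def features(s):
    x = [0.0] * N
    x[s] = 1.0                    # one-hot encoding of the state
    return x

def predict(w, s):
    return sum(wi * xi for wi, xi in zip(w, features(s)))

def td0(episodes=5000, alpha=0.05, gamma=1.0):
    w = [0.0] * N
    for _ in range(episodes):
        s = 3
        while s not in (0, 6):
            s2 = s + random.choice([-1, 1])       # random-walk policy
            r = 1.0 if s2 == 6 else 0.0
            target = r + (0.0 if s2 in (0, 6) else gamma * predict(w, s2))
            err = target - predict(w, s)          # TD error
            for i, xi in enumerate(features(s)):  # gradient step on weights
                w[i] += alpha * err * xi
            s = s2
    return w

w = td0()
print([round(predict(w, s), 2) for s in range(1, 6)])
# estimates should approach the true values [1/6, 2/6, 3/6, 4/6, 5/6]
```

A neural network replaces `features`/`predict` with learned nonlinear features, but the training signal, the TD error between target and prediction, is the same.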
- Chapter 7 : Model-Free Prediction and Control With Monte Carlo (MC)
- Running the Blackjack Environment From the OpenAI Gym 00:04:36
- Tallying Every Outcome of an Agent Playing Blackjack Using MC 00:08:58
- Visualizing the Outcomes of a Simple Blackjack Strategy 00:08:22
- Control – Building a Very Simple Epsilon-Greedy Policy 00:08:17
- Visualizing the Outcomes of the Epsilon-Greedy Policy 00:04:59
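The tally-and-average core of Monte Carlo prediction can be sketched on a simplified blackjack (infinite deck, no splitting or doubling; a hypothetical stand-in, not the actual gym Blackjack environment):

```python
import random

random.seed(3)

# Monte Carlo evaluation of a fixed blackjack policy ("stick on 18 or more").
CARDS = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]  # 10/J/Q/K all count 10

def draw():
    return random.choice(CARDS)

def hand_value(cards):
    total = sum(cards)
    if 1 in cards and total + 10 <= 21:  # count one ace as 11 if it fits
        return total + 10
    return total

def play_episode(stick_at=18):
    player = [draw(), draw()]
    dealer = [draw(), draw()]
    while hand_value(player) < stick_at:   # the fixed policy under study
        player.append(draw())
    if hand_value(player) > 21:
        return -1                          # player busts
    while hand_value(dealer) < 17:         # dealer hits below 17
        dealer.append(draw())
    p, d = hand_value(player), hand_value(dealer)
    if d > 21 or p > d:
        return 1
    return 0 if p == d else -1

def mc_value(episodes=20000):
    """Tally every outcome and average: the Monte Carlo estimate."""
    return sum(play_episode() for _ in range(episodes)) / episodes

v = mc_value()
print(round(v, 3))  # negative: this naive policy loses on average
```

The control videos then make the policy itself epsilon-greedy with respect to these tallied action values, instead of fixing it in advance.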
- Chapter 8 : Model-Free Prediction and Control with Temporal Difference (TD)
- Visualizing TD and SARSA in GridWorld in Your Browser 00:08:13
- Running the GridWorld Environment from the OpenAI Gym 00:08:16
- Building a SARSA Algorithm to Find the Optimal Epsilon-Greedy Policy 00:08:08
- Visualizing the Outcomes of SARSA 00:07:38
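The SARSA control loop the chapter builds can be sketched on a small stand-alone GridWorld (environment hypothetical, mirroring the browser visualization's -1-per-move grid with a terminal corner):

```python
import random

random.seed(4)

# SARSA on a 4x4 GridWorld: -1 per move, terminal at (0, 0).
SIZE = 4
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
TERMINAL = (0, 0)

def step(state, action):
    r, c = state
    nr = min(max(r + action[0], 0), SIZE - 1)
    nc = min(max(c + action[1], 0), SIZE - 1)
    return (nr, nc), -1.0

def epsilon_greedy(Q, s, eps):
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(s, a)])

def sarsa(episodes=2000, alpha=0.5, gamma=1.0, eps=0.1):
    Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
         for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s = (SIZE - 1, SIZE - 1)
        a = epsilon_greedy(Q, s, eps)
        while s != TERMINAL:
            s2, r = step(s, ACTIONS[a])
            a2 = epsilon_greedy(Q, s2, eps)   # on-policy: pick the next action
            # SARSA update uses the action actually taken next (hence the name)
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q

Q = sarsa()
greedy_len = 0
s = (SIZE - 1, SIZE - 1)
while s != TERMINAL and greedy_len < 50:
    a = max(range(len(ACTIONS)), key=lambda x: Q[(s, x)])
    s, _ = step(s, ACTIONS[a])
    greedy_len += 1
print(greedy_len)  # length of the greedy path to the goal (shortest possible is 6)
```

Because SARSA bootstraps from the next action actually taken, it learns the value of the epsilon-greedy policy itself, which is the on-policy distinction the videos visualize against plain TD prediction.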