Integrating custom environments
We can also use Stable Baselines to train an agent in our own environment. While creating our own environment, we need to make sure that our custom environment follows the Gym interface. That is, our environment should include methods such as step, reset, render, and so on.
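A custom environment that satisfies this interface might be sketched as follows. The dynamics here (a counter that the agent must drive to a target value) are purely illustrative, and the sketch assumes the classic Gym API, in which reset() returns only the observation and step() returns a 4-tuple:

```python
import gym
import numpy as np
from gym import spaces

class CustomEnv(gym.Env):
    """A minimal custom environment that follows the Gym interface."""

    def __init__(self):
        super().__init__()
        # Two discrete actions: 0 = decrement the counter, 1 = increment it
        self.action_space = spaces.Discrete(2)
        # Observation: the current counter value as a 1-D float vector
        self.observation_space = spaces.Box(
            low=-10.0, high=10.0, shape=(1,), dtype=np.float32)
        self.state = 0.0

    def reset(self):
        # Reset the counter and return the initial observation
        self.state = 0.0
        return np.array([self.state], dtype=np.float32)

    def step(self, action):
        # Apply the action and clip the counter to the observation bounds
        self.state += 1.0 if action == 1 else -1.0
        self.state = float(np.clip(self.state, -10.0, 10.0))
        # Reward the agent only when the counter reaches the target value 5
        reward = 1.0 if self.state == 5.0 else 0.0
        done = self.state == 5.0
        return np.array([self.state], dtype=np.float32), reward, done, {}

    def render(self, mode='human'):
        print(f"state: {self.state}")
```

Defining action_space and observation_space is what lets Stable Baselines build a matching policy network automatically.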
Suppose the name of our custom environment is CustomEnv. First, we instantiate our custom environment as follows:
env = CustomEnv()
Next, we can train our agent in the custom environment as usual:
agent = DQN('MlpPolicy', env, learning_rate=1e-3)
agent.learn(total_timesteps=25000)
That's it. In the next section, we will learn how to play Atari games using DQN and its variants.