Based on a modified version of Gameplay Football, the Football Engine simulates a complete football match, including fouls, goals, corner and penalty kicks, and offsides. The engine is written in C++, which allows it to run both with and without GPU-based rendering enabled.
The engine supports learning from different state representations: semantic representations that contain information such as player locations, as well as raw pixels. It can be run in both stochastic and deterministic modes for investigating the impact of randomness, and it is compatible with the OpenAI Gym API.
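Gym compatibility means agents interact with the engine through the familiar reset/step loop. The sketch below illustrates that loop; since the actual environment requires the gfootball package, a tiny stub class stands in for it here, and the 115-float observation vector and 19-action discrete space are assumptions modeled loosely on the released package rather than its exact API.

```python
import random

class StubFootballEnv:
    """Stand-in with the reset/step interface of an OpenAI Gym env.

    A real setup would instead build the environment via the gfootball
    package; this stub only mimics the interaction protocol.
    """
    ACTIONS = 19        # assumed discrete action set (moves, passes, shots)
    EPISODE_STEPS = 10  # tiny episode so the sketch runs instantly

    def reset(self):
        self._steps_left = self.EPISODE_STEPS
        return [0.0] * 115  # assumed flat semantic observation vector

    def step(self, action):
        self._steps_left -= 1
        obs = [random.random() for _ in range(115)]
        reward = 0.0        # the real engine rewards scoring, e.g. +1 per goal
        done = self._steps_left <= 0
        return obs, reward, done, {}

# Standard Gym-style episode loop with a random policy.
env = StubFootballEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = random.randrange(StubFootballEnv.ACTIONS)
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Because the engine follows this standard interface, existing Gym-based agents and training loops can be pointed at it with little or no modification.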
Read Also: Create your first OpenAI Gym environment [Tutorial]
Building on the Football Engine, the researchers propose a set of benchmark problems for RL research called the Football Benchmarks. These benchmarks involve goals such as playing a “standard” game of football against a fixed rule-based opponent. The researchers provide three versions, the Football Easy Benchmark, the Football Medium Benchmark, and the Football Hard Benchmark, which differ only in the strength of the opponent.
They also provide benchmark results for two state-of-the-art reinforcement learning algorithms, DQN and IMPALA, which can be run in multiple processes on a single machine or concurrently on many machines.
Image Source: Google’s blog post
These results indicate that the Football Benchmarks are research problems of varying difficulty. According to the researchers, the Football Easy Benchmark is suitable for research on single-machine algorithms, while the Football Hard Benchmark is challenging even for massively distributed RL algorithms.
Football Academy is a diverse set of scenarios of varying difficulty that allows researchers to quickly try out new research ideas and test high-level concepts. It also provides a foundation for investigating curriculum learning, in which agents progress from easier to harder scenarios.
The official blog post states, “Examples of the Football Academy scenarios include settings where agents have to learn how to score against the empty goal, where they have to learn how to quickly pass between players, and where they have to learn how to execute a counter-attack. Using a simple API, researchers can further define their own scenarios and train agents to solve them.”
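A curriculum over such scenarios can be sketched as a simple loop that trains on each scenario in order of difficulty before moving to the next. The scenario names below are illustrative assumptions patterned after the examples the blog post mentions, and `train_until_solved` is a hypothetical placeholder for whatever RL training routine a researcher plugs in.

```python
# Hypothetical curriculum over Football Academy scenarios, ordered easy
# to hard. Names and the training routine are placeholders, not the
# package's actual identifiers.
CURRICULUM = [
    "academy_empty_goal",        # score against an empty goal
    "academy_quick_passing",     # quickly pass between players
    "academy_counterattack",     # execute a counter-attack
]

def train_until_solved(scenario, threshold=0.9):
    """Placeholder: train an agent on `scenario` and return its measured
    success rate. A real version would run episodes of the environment
    configured with that scenario."""
    return 1.0  # stub result so the sketch runs end-to-end

solved = []
for scenario in CURRICULUM:
    success_rate = train_until_solved(scenario)
    if success_rate >= 0.9:
        solved.append(scenario)  # agent graduates to the next scenario
```

The design choice here is that easier scenarios act as stepping stones: an agent that can already score on an empty goal has a head start when keepers and defenders are introduced.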
Users have given mixed reactions to this news, as some find nothing new in the Google Research Football Environment.
A user commented on HackerNews, “I guess I don't get it... What does this game have that SC2/Dota doesn't? As far as I can tell, the main goal for reinforcement learning is to make it so that it doesn't take 10k learning sessions to learn what a human can learn in a single session, and to make self-training without guiding scenarios feasible.”
Another user commented, “This doesn't seem that impressive: much more complex games run at that frame rate? FIFA games from the 90s don't look much worse and certainly achieved those frame rates on much older hardware.”
A few others, however, think that researchers can learn a lot from this environment. One such comment reads, “In other words, you can perform different kinds of experiments and learn different things by studying this environment.”
Here’s a short YouTube video demonstrating Google Research Football.
https://youtu.be/F8DcgFDT9sc
To know more about this news, check out Google’s blog post.