Top 5 Open Source Frameworks for Reinforcement Learning

Reinforcement learning's trial-and-error system of rewards and penalties has come a long way since its inception. Although the technique has taken time to develop and is not the easiest to apply, it is behind some of the most important advances in AI, such as the leading self-driving software in autonomous vehicles and the AI raking in profits in games like poker. Reinforcement learning systems like DeepMind's AlphaGo and AlphaZero have excelled at a game as complex as Go, with AlphaZero learning simply by playing against itself. Despite its challenges, reinforcement learning is the method that comes closest to human cognitive learning. Fortunately, beyond the competitive, cutting-edge gaming space, a growing number of reinforcement learning frameworks are now publicly available.

OpenSpiel by DeepMind

DeepMind is one of the most active contributors to open-source deep learning stacks. Back in 2019, Alphabet’s DeepMind introduced a game-centric reinforcement learning framework called OpenSpiel. The framework contains a collection of environments and algorithms that support research in general reinforcement learning, especially in the context of games. OpenSpiel provides tools for searching and planning in games, as well as for analyzing learning dynamics and other common evaluation metrics.

The framework supports more than 20 single- and multi-agent game types, including zero-sum, cooperative, one-shot, and sequential games. In addition to turn-based, auction, matrix, and simultaneous-move games, these include perfect-information games (where players are fully informed of all events that have occurred when making decisions) and imperfect-information games (where some information is hidden from players, for example because decisions are made simultaneously).

The developers kept simplicity and minimalism as the main ethos while creating OpenSpiel, which is why it uses reference implementations instead of fully optimized, high-performance code. The framework also has minimal dependencies and keeps installation requirements to a minimum, reducing the likelihood of compatibility issues, and it is easy to understand and extend.

Game implementations in OpenSpiel (Source: research paper)

Games in OpenSpiel are written in C++, while some custom RL environments are available in Python.
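To give a feel for the workflow, here is a minimal sketch using OpenSpiel’s pyspiel Python bindings (assuming the open_spiel package is installed, e.g. via pip). It loads a built-in game and plays it to the end with random moves:

```python
# Minimal OpenSpiel sketch: load a game and play random legal moves
# until the game ends. Assumes the open_spiel package is installed.
import random

import pyspiel

game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()

while not state.is_terminal():
    # Choose uniformly among the current player's legal actions.
    action = random.choice(state.legal_actions())
    state.apply_action(action)

# returns() holds each player's final payoff (zero-sum for tic-tac-toe).
print(state.returns())
```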

OpenAI’s Gym

Example of a Gym environment for a game (Source: OpenAI Gym)

OpenAI developed Gym with the intention of maintaining a toolkit for developing and comparing reinforcement learning algorithms. It is a Python library that contains a large number of testbeds, allowing users to write general algorithms and test them against the common interface that Gym’s environments share. Gym’s environments follow the classic agent-environment layout: the framework gives the user an agent that can perform specific actions in an environment, and once an action is performed, the agent receives an observation and a reward in response.
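A minimal sketch of that loop might look as follows. This uses the classic Gym API; newer releases (and the Gymnasium fork) return five values from step() instead of four:

```python
# Minimal Gym agent-environment loop: act, observe, collect reward.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

for _ in range(100):
    # Sample a random action; a real agent would pick one from a policy.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()

env.close()
```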

The environments that Gym offers include algorithmic tasks, Atari games, classic control, toy text, and 2D and 3D robotics. Gym was created to fill a gap in the standardization of the environments used across different publications: a small change in the definition of a problem, such as the reward or the actions, can noticeably change the difficulty of the task. Better benchmarks were also needed because the existing open-source RL frameworks weren’t diverse enough.

TensorFlow’s TF-Agents

TensorFlow’s TF-Agents was developed as an open-source infrastructure paradigm to support the creation of parallel RL algorithms on top of TensorFlow. The framework provides components that correspond to the main parts of an RL problem, helping users design and implement algorithms easily.

Instead of stepping through individual observations, the platform simulates multiple environments in parallel and performs the neural network computation in a batch. This eliminates the need for manual synchronization and allows the TensorFlow engine to parallelize the calculations. All environments within the framework run in separate Python processes.
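A sketch of this pattern, assuming the tf-agents package and its Gym suite are installed, might look like the following: ParallelPyEnvironment runs each environment in its own Python process, and TFPyEnvironment exposes them to TensorFlow as a single batched environment.

```python
# Parallel, batched environments in TF-Agents: each environment runs
# in its own process, and observations arrive with a batch dimension.
from tf_agents.environments import (
    parallel_py_environment,
    suite_gym,
    tf_py_environment,
)

NUM_PARALLEL = 4

# Callables are passed so each worker process builds its own copy.
parallel_env = parallel_py_environment.ParallelPyEnvironment(
    [lambda: suite_gym.load("CartPole-v1")] * NUM_PARALLEL
)
train_env = tf_py_environment.TFPyEnvironment(parallel_env)

# Observations now carry a leading batch dimension of NUM_PARALLEL,
# so the policy network can act on all environments in one pass.
time_step = train_env.reset()
print(time_step.observation.shape)
```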

ReAgent by Meta AI

ReAgent’s serving platform (Source: Meta AI)

Meta AI released ReAgent in 2019 as a toolkit for building models that can be used to make decisions in real-world situations. The framework, whose name combines the terms “reasoning” and “agents,” is currently used by the social media platform Facebook to make millions of decisions every day.

ReAgent provides three main resources: models that make decisions based on feedback, an offline module to assess the performance of models before they go into production, and a platform that deploys models at scale, collects feedback, and iterates on the models quickly.
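To make the offline-evaluation idea concrete, here is a generic inverse propensity scoring (IPS) estimator in plain Python. This is an illustrative sketch of the underlying technique, not ReAgent’s actual API: logged feedback is reweighted to estimate how a candidate policy would have performed before it is deployed.

```python
# Generic inverse propensity scoring (IPS) for offline policy evaluation.
# Illustrative sketch only; names and signatures are hypothetical, not
# ReAgent's API. Each log entry records the action taken, its reward,
# and the logging policy's probability of taking that action.
def ips_estimate(logged_data, new_policy_prob):
    """Estimate the candidate policy's average reward from logged data.

    logged_data: iterable of (context, action, reward, logging_prob).
    new_policy_prob: function (context, action) -> probability of the
        action under the candidate policy being evaluated.
    """
    total, n = 0.0, 0
    for context, action, reward, logging_prob in logged_data:
        # Reweight each logged reward by how much more (or less) likely
        # the candidate policy is to repeat the logged action.
        total += reward * new_policy_prob(context, action) / logging_prob
        n += 1
    return total / n if n else 0.0

# Example: logs from a uniform-random logging policy over two actions.
logs = [("u1", 0, 1.0, 0.5), ("u2", 1, 0.0, 0.5), ("u3", 0, 1.0, 0.5)]
greedy = lambda context, action: 1.0 if action == 0 else 0.0
print(ips_estimate(logs, greedy))  # -> approx. 1.33
```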

ReAgent was built on Horizon, the first open-source end-to-end RL platform, which was intended to optimize systems at scale. While Horizon could only be swapped in wholesale for existing models under development, ReAgent was built as a tiny C++ library that can be embedded into any application.

Fiber by Uber AI

How Fiber works on a computer cluster (Source: Uber Engineering)

As machine learning tasks have multiplied, so has the need for computing power. To address this problem, Uber AI released Fiber, a Python-based library that works with computer clusters. Fiber was originally developed to power large-scale parallel computing projects within Uber itself.

Fiber is comparable to ipyparallel (IPython for parallel computing), Spark, and the standard Python multiprocessing library. Research conducted by Uber AI showed that Fiber outperformed these alternatives on shorter tasks. To run on different types of cluster management systems, Fiber is divided into three layers: the API layer, the backend layer, and the cluster layer.

Fiber is also adept at handling errors in pools. When a new pool is created, an associated task queue, result queue, and pending table are created alongside it. Each new task is added to the task queue, which is shared between the master and worker processes. A worker grabs a task from the queue and runs the functions within that task; each time a task is taken from the queue, an entry is added to the pending table, and the entry is removed once the task completes.
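Because Fiber mirrors the interface of Python’s standard multiprocessing library, a pool can be used as a near drop-in replacement. The sketch below assumes a working Fiber installation and cluster backend:

```python
# Minimal Fiber pool sketch: tasks go onto the shared task queue and
# are fetched and executed by worker processes on the cluster.
from fiber import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    pool = Pool(processes=4)
    # map() distributes the tasks to the workers and gathers results
    # from the result queue in order.
    print(pool.map(square, range(10)))
```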
