Gymnasium vs popgym

| | Gymnasium | popgym |
| --- | --- | --- |
| Mentions | 12 | 4 |
| Stars | 5,859 | 147 |
| Growth | 6.8% | 8.8% |
| Activity | 9.3 | 6.1 |
| Latest Commit | 12 days ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Gymnasium
-
NASA JPL Open Source Rover That Runs ROS 2
"Show HN: Ghidra Plays Mario" (2023) https://news.ycombinator.com/item?id=37475761 :
[RL, MuZero redux]
> Farama-Foundation/Gymnasium is a fork of OpenAI/gym and it has support for additional environments like MuJoCo: https://github.com/Farama-Foundation/Gymnasium#environments
> Farama-Foundation/MO-Gymnasium: "Multi-objective Gymnasium environments for reinforcement learning": https://github.com/Farama-Foundation/MO-Gymnasium
-
Show HN: Ghidra Plays Mario
https://github.com/Farama-Foundation/Gymnasium#environments
Farama-Foundation/MO-Gymnasium:
-
Are there any AI projects that play a game for you and learn?
https://github.com/Farama-Foundation/Gymnasium - a Python library for building and training your own AI to play games
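For reference, a minimal sketch of the standard Gymnasium interaction loop, following the project's documented API (a trained agent would replace the random action):

```python
import gymnasium as gym

# Create an environment and run an episode loop with random actions.
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()  # a trained policy would choose here
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()
```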
-
Unstable SAC training of sparse-reward task
The only change from the environment linked here is the reward function, whose return value is computed by the following code snippet (replacing lines 648-672 at the above URL):
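The snippet itself is not reproduced in this excerpt. Purely as an illustration of the pattern being discussed, a sparse goal-based reward usually looks something like the following; the function name, signature, and threshold are hypothetical, not the poster's actual code:

```python
import numpy as np

# Hypothetical sketch of a sparse reward: zero everywhere except
# within a small distance of the goal (not the original snippet).
def sparse_reward(achieved_goal, desired_goal, threshold=0.05):
    distance = np.linalg.norm(achieved_goal - desired_goal)
    return 1.0 if distance < threshold else 0.0
```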
-
Any resources on experiments in simulated environments?
This may be useful: https://github.com/Farama-Foundation/Gymnasium
-
What's the most challenging Gym environment?
Here are all the environments. For example, if instead of Hopper-v2 you want the Acrobot environment from classic control, you can write: env = gym.make('Acrobot-v1')
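With the current Gymnasium package, listing and swapping environments looks roughly like this (the original comment used the older gym API; this sketch assumes Gymnasium's dict-based registry):

```python
import gymnasium as gym

# Every registered environment ID is a key in the registry dict.
print(sorted(gym.envs.registry.keys()))

# Switching tasks is just a different ID string:
env = gym.make("Acrobot-v1")
```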
-
Gymnasium 0.28 is now released
This release also includes a large number of documentation updates, minor bug fixes, and other minor improvements; the full release notes are available here if you’d like to learn more: https://github.com/Farama-Foundation/Gymnasium/releases/tag/v0.28.0.
-
TransformerXL + PPO Baseline + MemoryGym
Thanks! It really depends on the task that you want to implement, but in general, sticking to the standard Gymnasium API is important. If you want to implement a 2D environment, then PyGame is promising. If it's more like a game, check out Unity ML-Agents or Godot RL Agents. Anything simpler can also be pure Python code. You also need to carefully design your observation space, action space, and reward function. My advice is to explore the design choices of related environments; a minimal sketch follows below.
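To make that concrete, here is a minimal sketch of a custom environment following the standard Gymnasium API; the toy task, spaces, and sparse reward are placeholder choices for illustration:

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces


class LineWorldEnv(gym.Env):
    """Toy 1-D grid: the agent steps left or right toward a goal cell."""

    def __init__(self, size=8):
        self.size = size
        self._pos = 0
        # The design choices the comment mentions: observation/action spaces.
        self.observation_space = spaces.Box(0.0, size - 1.0, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)  # 0 = left, 1 = right

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._pos = 0
        return np.array([self._pos], dtype=np.float32), {}

    def step(self, action):
        self._pos = int(np.clip(self._pos + (1 if action == 1 else -1), 0, self.size - 1))
        terminated = self._pos == self.size - 1
        reward = 1.0 if terminated else 0.0  # sparse reward only at the goal
        return np.array([self._pos], dtype=np.float32), reward, terminated, False, {}
```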
-
[N] Gymnasium 0.27 - the first new version since Gymnasium was announced - is now released. It has almost no breaking changes.
You can read the release notes here: https://github.com/Farama-Foundation/Gymnasium/releases/tag/v0.27.0. You can upgrade from 0.26 without any changes unless you're doing something very uncommon; this is how releases will generally be going forward.
popgym
-
What RL library supports custom LSTM and Transformer neural networks to use with algorithms such as PPO?
POPGym is based on RLlib and has two linear transformers and five or six RNN variants, including LSTM. I've found that transformers tend to perform pretty poorly in RL when compared to RNNs.
-
POPGym: Partially Observable Reinforcement Learning
Code: https://github.com/proroklab/popgym
-
TransformerXL + PPO Baseline + MemoryGym
Have you seen this other ICLR paper, POPGym? Paper: https://openreview.net/forum?id=chDrutUTs0K Code: https://github.com/smorad/popgym
-
Partially observable Continuous Control Gym Environment
https://github.com/smorad/popgym contains 15 partially observable gym environments, but they use discrete action spaces. I've verified that memoryless models (e.g. PPO+MLP) cannot solve these tasks, except for the navigation ones.
What are some alternatives?
flake8 - The official GitHub mirror of https://gitlab.com/pycqa/flake8
recurrent-ppo-truncated-bptt - Baseline implementation of recurrent PPO using truncated BPTT
Flake8-pyproject - Flake8 plug-in loading the configuration from pyproject.toml
brain-agent - Brain Agent for Large-Scale and Multi-Task Agent Learning
ruff - An extremely fast Python linter and code formatter, written in Rust.
episodic-transformer-memory-ppo - Clean baseline implementation of PPO using an episodic TransformerXL memory
agents - TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
ppo-implementation-details - The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization
Visual Studio Code - Visual Studio Code
adaptive-transformers-in-rl - Adaptive Attention Span for Reinforcement Learning
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.