Minigrid vs gym-super-mario-bros

| | Minigrid | gym-super-mario-bros |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 2,010 | 661 |
| Growth | 0.5% | - |
| Activity | 6.9 | 0.0 |
| Last commit | 15 days ago | 9 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Minigrid
- Environments that require long-term memory and reasoning
- Best GridWorld environment?
  If you want something as simple as possible, I'd go with MiniGrid, and if you want a richer world with more complex settings, then MiniHack.
- Using FastAI to navigate Matterport spaces?
  This is a pretty hard domain to start with as someone brand new to AI. If you're interested in the vision aspect, I'd suggest you start by training a DNN on the CIFAR-10 task; there are plenty of tutorials out there. If you're more interested in the navigation aspect, you could start by training a Q-learning agent to solve some of the simpler problems in gym-minigrid (see the Q-learning sketch after this list).
- How to train an agent in a custom MiniGrid environment using Stable Baselines3?
  Hello guys, I tried to build a custom environment using the maximecb repo (a minimal training setup is sketched after this list).
- What OpenAI Gym environments are your favourite for learning RL algorithms?
  For learning and experimenting with RL algorithms, I suggest using a grid-world implementation: observations are simple enough (most implementations use a one-hot layered observation) that you do not need deep conv layers to learn complex visual features. You can also make grid worlds as simple or as complex as you like by adding enemies, objects, or key-door pairs, changing the size of the grid, or decreasing the observation radius (see the environment-variants sketch after this list). There is a reason they are commonly used in research.
- RL environment for hard exploration (infinite) task
- [R] Are there any papers about reinforcement learning solving mazes?
  Take a look at: https://github.com/maximecb/gym-minigrid
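Following up on the Q-learning suggestion above, here is a minimal tabular Q-learning sketch for a simple MiniGrid task. It assumes the Farama `minigrid` package (the successor to gym-minigrid) and Gymnasium; the environment ID and hyperparameters are illustrative, not taken from the post.

```python
import gymnasium as gym
import minigrid  # importing registers the MiniGrid-* environment IDs
import numpy as np
from collections import defaultdict

env = gym.make("MiniGrid-Empty-5x5-v0")
q_table = defaultdict(lambda: np.zeros(env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1  # illustrative hyperparameters

def state_key(obs):
    # MiniGrid observations are dicts; hash the image grid plus the agent's direction.
    return (obs["image"].tobytes(), obs["direction"])

for episode in range(500):
    obs, _ = env.reset()
    done = False
    while not done:
        s = state_key(obs)
        if np.random.rand() < eps:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[s]))
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        target = reward if terminated else reward + gamma * np.max(q_table[state_key(obs)])
        q_table[s][action] += alpha * (target - q_table[s][action])
```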
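For the Stable Baselines3 question above, a minimal training setup looks roughly like the following. It assumes the `minigrid` and `stable-baselines3` packages; "MiniGrid-Empty-8x8-v0" is a placeholder where a custom environment ID would go, and `FlatObsWrapper` turns the dict observation into a flat array that SB3's MlpPolicy accepts.

```python
import gymnasium as gym
import minigrid  # registers the MiniGrid-* environment IDs
from minigrid.wrappers import FlatObsWrapper
from stable_baselines3 import PPO

# Swap in your custom environment ID here; the Empty grid is just a placeholder.
env = FlatObsWrapper(gym.make("MiniGrid-Empty-8x8-v0"))

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_minigrid")
```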
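As an illustration of the "simple or complex" point above, the same API covers an obstacle-free grid and a key-door task, and a wrapper can shrink the agent's observation radius. The environment IDs are examples, and `ViewSizeWrapper` is from the Farama `minigrid` package.

```python
import gymnasium as gym
import minigrid
from minigrid.wrappers import ViewSizeWrapper

easy = gym.make("MiniGrid-Empty-5x5-v0")         # small, obstacle-free grid
hard = gym.make("MiniGrid-DoorKey-16x16-v0")     # larger grid with a key-door pair
hard = ViewSizeWrapper(hard, agent_view_size=3)  # reduce the observation radius
```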
gym-super-mario-bros
- Is there a single-task, multi-scene environment using continuous action spaces like gym-super-mario-bros?
  Is there a single-task, multi-scene environment with continuous action spaces? Single-task, multi-scene envs such as gym-super-mario-bros and CoinRun in procgen all use discrete action spaces. Thank you!
- Reinforcement learning in Super Mario Bros
  Next we wrap the environment and its action set with nes_py.wrappers.JoypadSpace (see the sketch after this list).
- SNES A.I. Using NEAT
  A quick search shows there is an official Mario environment, so I'll focus on that instead. Links: https://pypi.org/project/gym-super-mario-bros/ and https://github.com/Kautenja/gym-super-mario-bros
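The JoypadSpace wrapping mentioned above follows the usage shown in the gym-super-mario-bros README; this is a minimal sketch assuming gym-super-mario-bros and nes-py are installed (with newer Gym/Gymnasium versions, the reset/step signatures differ slightly).

```python
from nes_py.wrappers import JoypadSpace
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT

env = gym_super_mario_bros.make('SuperMarioBros-v0')
# Restrict the full NES controller to a small discrete action set.
env = JoypadSpace(env, SIMPLE_MOVEMENT)

done = True
for step in range(1000):
    if done:
        state = env.reset()
    state, reward, done, info = env.step(env.action_space.sample())
    env.render()
env.close()
```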
What are some alternatives?
pytorch-blender - Seamless, distributed, real-time integration of Blender into PyTorch data pipelines
nes-py - A Python3 NES emulator and OpenAI Gym interface
MinAtar
super-mario-neat - This program evolves an AI using the NEAT algorithm to play Super Mario Bros.
rl-baselines-zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
rlcard - Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.
ma-gym - A collection of multi agent environments based on OpenAI gym.
gym-super-mario - Gym - 32 levels of original Super Mario Bros
marlgrid - Gridworld for MARL experiments
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
gym - A toolkit for developing and comparing reinforcement learning algorithms.