Minigrid vs rlcard

| | Minigrid | rlcard |
|---|---|---|
| Mentions | 8 | 5 |
| Stars | 2,019 | 2,724 |
| Growth | 1.0% | 2.8% |
| Activity | 6.9 | 6.2 |
| Latest commit | 28 days ago | 3 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Minigrid
- Environments that require long-term memory and reasoning
- Best GridWorld environment?
If you want something as simple as possible, I'd go with MiniGrid, and if you want to have a richer world with more complex settings, then MiniHack.
- Using FastAI to navigate matterport spaces?
This is a pretty hard domain to start with as someone "brand new" to AI. If you're interested in the vision aspect, I'd suggest you start by training a DNN for the CIFAR-10 task. There are plenty of tutorials out there. If you're more interested in the navigation aspect, you could start by training a Q-learning agent to solve some of the simpler problems in gym-minigrid.
- How to train an agent in a custom MiniGrid environment using stable-baselines3?
Hello guys, I tried to build a custom environment using the maxicymeb repo.
- What OpenAI Gym environments are your favourite for learning RL algorithms?
For learning and experimentation with RL algorithms, I suggest using a grid world implementation: observations are simple enough (most implementations have a one-hot layered observation) that you do not need deep conv layers to learn complex visual features. You can also make grid worlds as simple or as complex as you like by adding enemies, objects, key-door pairs, changing the size of the grid or decreasing observation radius, etc. There is a reason they are commonly used in research.
- RL environment for hard exploration (infinite) task
- [R] Are there any papers about reinforcement learning solving mazes?
Take a look at: https://github.com/maximecb/gym-minigrid
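The Q-learning suggestion above can be tried without any gym dependency first. Below is a minimal tabular Q-learning sketch on a hand-rolled 4x4 grid; the layout, reward, and hyperparameters are made up for illustration (this is not gym-minigrid code, just the core idea):

```python
import random

# Tiny deterministic gridworld: start at (0, 0), reward 1 at the goal.
SIZE = 4
GOAL = (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, a):
    """Apply an action, clamping to the grid; reward 1 only at the goal."""
    x, y = state
    dx, dy = ACTIONS[a]
    nxt = (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning; Q defaults to 0 for unseen pairs."""
    random.seed(seed)
    q = {}  # (state, action) -> value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):  # cap episode length
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda a: q.get((s, a), 0.0)))
            s2, r, done = step(s, a)
            best_next = max(q.get((s2, b), 0.0) for b in range(4))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
            if done:
                break
    return q

def greedy_rollout(q):
    """Follow the learned greedy policy from the start state."""
    s, steps = (0, 0), 0
    while s != GOAL and steps < 20:
        a = max(range(4), key=lambda a: q.get((s, a), 0.0))
        s, _, _ = step(s, a)
        steps += 1
    return s, steps
```

The same agent transfers to a real gym-minigrid task by swapping `step` for the environment's step function and hashing its observation into a state key.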
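The "one-hot layered observation" mentioned above can be illustrated with a hand-rolled encoder: one binary channel per object type, shape (channels, height, width). The object set and grid layout here are invented for the example, not taken from any library:

```python
# One binary channel per object type; None marks an empty cell.
OBJECTS = ["agent", "wall", "key", "door", "goal"]

def encode(grid):
    """grid: list of rows of object-name strings (or None for empty)."""
    h, w = len(grid), len(grid[0])
    obs = [[[0] * w for _ in range(h)] for _ in OBJECTS]
    for y, row in enumerate(grid):
        for x, cell in enumerate(row):
            if cell is not None:
                obs[OBJECTS.index(cell)][y][x] = 1
    return obs

grid = [
    ["agent", None,  "wall"],
    [None,    "key", "wall"],
    ["door",  None,  "goal"],
]
obs = encode(grid)  # 5 channels, each 3x3, exactly one 1 per object
```

Because every channel is already a clean binary feature map, a small MLP or a shallow conv net is enough; there are no textures or colors that would demand deep visual feature extraction.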
rlcard
- [P] Looking for RL or rules-based No-Limit Hold 'Em Work
- Self play environments
Hi. I’ve decided to do a project adapting an RL library to support self-play, so I can teach myself more about building RL systems. I’ve been considering the environment system from rlcard (https://github.com/datamllab/rlcard/), but I wonder if there are other, more widely used self-play environment libraries. Thanks.
- [Project] Making a Poker AI - having trouble with the form of ML to make smart / strong decisions
Can you point me to some active forums for poker bot building? I can only find GitHub repos like https://github.com/datamllab/rlcard, which are mostly about reinforcement learning, whereas SoTA approaches like Pluribus are more about game theory.
- 8+ Reinforcement Learning Project Ideas
Build a Poker bot with RLCard
- What sort of algorithm should I use? Incomplete information, card game. (Flowchart for reference)
Probably the easiest way for you to get started is to implement your game on an open-source RL framework that has working implementations of some basic CFR variations as well as some other self-play algorithms such as NFSP. OpenSpiel and RLCard are two that I am aware of. Depending on the complexity of your game and how strongly your agent needs to play, you might be satisfied with the performance you get using one of these frameworks.
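To make the CFR reference above concrete, here is a sketch of regret matching (the update at the heart of CFR) applied to rock-paper-scissors self-play. It is a standalone illustration, not OpenSpiel or RLCard code, and the iteration count is chosen arbitrarily:

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff for action a vs action b

def strategy(regrets):
    """Normalize positive regrets into a mixed strategy (uniform if none)."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def sample(probs):
    """Draw an action index from a mixed strategy."""
    r, c = random.random(), 0.0
    for a, p in enumerate(probs):
        c += p
        if r < c:
            return a
    return ACTIONS - 1

def train(iters=50000, seed=0):
    random.seed(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy(regrets[p]) for p in range(2)]
        for p in range(2):
            for a in range(ACTIONS):
                strat_sum[p][a] += strats[p][a]
        acts = [sample(strats[0]), sample(strats[1])]
        # Regret: how much better each alternative action would have
        # done against the opponent's actual play (RPS is symmetric).
        for p in range(2):
            opp = acts[1 - p]
            got = PAYOFF[acts[p]][opp]
            for a in range(ACTIONS):
                regrets[p][a] += PAYOFF[a][opp] - got
    # The time-averaged strategy (not the last one) approaches equilibrium.
    return [[s / iters for s in strat_sum[p]] for p in range(2)]
```

For RPS the average strategy converges toward the uniform Nash equilibrium (1/3, 1/3, 1/3); CFR extends the same regret update to the information sets of a sequential game like poker.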
What are some alternatives?
pytorch-blender - Seamless, distributed, real-time integration of Blender into PyTorch data pipelines
gym - A toolkit for developing and comparing reinforcement learning algorithms.
MinAtar
open_spiel - OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
rl-baselines-zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
mjai-reviewer - 🔍🀄️ Review mahjong game log with mjai-compatible mahjong AI.
gym-super-mario-bros - An OpenAI Gym interface to Super Mario Bros. & Super Mario Bros. 2 (Lost Levels) on The NES
Poker - Fully functional poker bot that works on PartyPoker, PokerStars, and GGPoker, scraping tables with OpenCV (adaptable via GUI) or a neural network, and making decisions based on a genetic algorithm and Monte Carlo simulation for poker equity calculation. Binaries can be downloaded with this link:
ma-gym - A collection of multi-agent environments based on OpenAI gym.
MonsterHunterPortable3rdHDRemake - Personal fork of a texture upscaling project for PSP's Monster Hunter Portable 3rd
marlgrid - Gridworld for MARL experiments
shengji - An online version of shengji (a.k.a. tractor) and zhaopengyou (a.k.a. Finding Friends)