open_spiel
ml-agents
| | open_spiel | ml-agents |
| --- | --- | --- |
| Mentions | 44 | 60 |
| Stars | 3,969 | 16,194 |
| Growth | 1.4% | 1.5% |
| Activity | 9.4 | 8.1 |
| Latest commit | 3 days ago | 8 days ago |
| Language | C++ | C# |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_spiel
-
Competitive reinforcement learning for turn-based games
Hi, you can check out OpenSpiel: https://github.com/deepmind/open_spiel/
-
Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
This includes single-agent Gymnasium wrappers for DM Control, DM Lab, Behavior Suite, Arcade Learning Environment, OpenAI Gym V21 & V26. Multi-agent PettingZoo wrappers support DM Control Soccer, OpenSpiel and Melting Pot. For more information, read the release notes here:
-
Policy for each of multi-agents in RL
The RL agents in OpenSpiel (https://github.com/deepmind/open_spiel) are designed with this setting as the default (so like, DQN run in Tic-Tac-Toe would have two separate agents learning against each other: one knows how to play as player 1, the other as player 2).
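The pattern described here — one independent learner per seat, each only ever acting in states where it is to move — can be sketched without OpenSpiel at all. Below is a minimal self-play loop on Tic-Tac-Toe with a toy Monte-Carlo Q-learner per player; the `QAgent` class and its hyperparameters are illustrative, not OpenSpiel's DQN:

```python
import random

EMPTY = "."

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == EMPTY]

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

class QAgent:
    """One learner per seat: it only ever sees states where it moves."""
    def __init__(self, mark, epsilon=0.2, alpha=0.5):
        self.mark, self.eps, self.alpha = mark, epsilon, alpha
        self.q = {}  # (board, action) -> value estimate

    def act(self, board):
        moves = legal_moves(board)
        if random.random() < self.eps:
            return random.choice(moves)
        return max(moves, key=lambda a: self.q.get((board, a), 0.0))

    def update(self, trajectory, reward):
        # Monte-Carlo style: back up the final result to every visited pair.
        for board, action in trajectory:
            old = self.q.get((board, action), 0.0)
            self.q[(board, action)] = old + self.alpha * (reward - old)

def play_episode(agents):
    board, player, trajs = EMPTY * 9, 0, ([], [])
    while True:
        action = agents[player].act(board)
        trajs[player].append((board, action))
        board = board[:action] + agents[player].mark + board[action + 1:]
        if winner(board) is not None:
            rewards = (1.0, -1.0) if player == 0 else (-1.0, 1.0)
            break
        if not legal_moves(board):
            rewards = (0.0, 0.0)  # draw
            break
        player = 1 - player
    for agent, traj, r in zip(agents, trajs, rewards):
        agent.update(traj, r)

random.seed(0)
agents = (QAgent("x"), QAgent("o"))
for _ in range(2000):
    play_episode(agents)
```

Because player 1 only ever sees boards with equal mark counts and player 2 only boards with one extra "x", the two Q-tables never share a state, which is exactly why two separate learners are the natural default.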
-
My comp sci mentor who is a university student half my age says that learning to use Linux is the optimal comp sci education experience
I maintain a project called OpenSpiel, basically a library/suite of implementations of board games (mainly for AI research but can be used for whatever). It has a C/C++ core, but it also exposes the core API in Python, Rust, Go, and Julia: https://github.com/deepmind/open_spiel/ and a lot of AI algorithms in Python.
-
Looking to get started
If you are looking for some programming game-theoretic algorithms, you can look at Gambit (http://www.gambit-project.org/) or OpenSpiel (https://github.com/deepmind/open_spiel). OpenSpiel has a Julia API too exposing the core and games, but does not have any of the basic game theoretic algorithms in Julia, so that would make a nice exercise (and maybe contribution to the project).
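As a taste of what such a "basic game-theoretic algorithm" exercise looks like (sketched here in Python rather than Julia), fictitious play on matching pennies drives both players' empirical action frequencies toward the uniform mixed Nash equilibrium:

```python
# Matching pennies, row player's payoff; indices are (Heads, Tails).
ROW_PAYOFF = [[1, -1], [-1, 1]]

def row_best_response(col_counts):
    """Pure best response to the column player's empirical frequencies."""
    values = [sum(ROW_PAYOFF[a][b] * col_counts[b] for b in range(2))
              for a in range(2)]
    return max(range(2), key=values.__getitem__)

def col_best_response(row_counts):
    # Zero-sum: the column player's payoff is the negation of the row's.
    values = [sum(-ROW_PAYOFF[a][b] * row_counts[a] for a in range(2))
              for b in range(2)]
    return max(range(2), key=values.__getitem__)

def fictitious_play(rounds=20_000):
    row_counts, col_counts = [1, 1], [1, 1]  # uniform priors
    for _ in range(rounds):
        a = row_best_response(col_counts)
        b = col_best_response(row_counts)
        row_counts[a] += 1
        col_counts[b] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

row_freq, col_freq = fictitious_play()
```

In zero-sum games like this, the empirical frequencies are known to converge to equilibrium (here, 50/50 on Heads/Tails), so the printed frequencies end up close to 0.5 each.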
-
[D] Adding a new RL environment to envpool
I looked at the paper before and thought some of the benchmarks were cherry-picked, and that the whole "let's write environments in compiled languages" idea has been done a lot already; it only works well for games, physics, and environments that are extremely well defined. I usually end up back in Ray envs because I usually need features like parametric action spaces, predictive models built inside environments, or historical data. Coding-wise, I looked at the example file they had and felt it wasn't for me: I don't like Bazel (I use Pants for my Python monorepo), and the C++ API was overly verbose for me. I don't feel like I could implement reward shaping or environment business logic in a very clear way. I don't do much C++ dev work, though, and a seasoned C++ dev might not care. I liked the OpenSpiel C++ environment API a lot more: https://github.com/deepmind/open_spiel/blob/master/docs/developer_guide.md. I have found that building a new environment can sometimes be frustrating to debug, so I would probably debug a Python env first before converting it to an OpenSpiel/envpool env if I had to use a C++ env for a new problem.
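For what it's worth, the "parametric action space" support mentioned above usually boils down to passing a legality mask alongside the observation and restricting the policy's argmax to legal actions. A plain-Python sketch of that core step (names here are illustrative, not any library's API):

```python
def masked_argmax(q_values, legal_mask):
    """Greedy action restricted to legal actions — the core of
    parametric / masked action-space support."""
    legal = [i for i, ok in enumerate(legal_mask) if ok]
    if not legal:
        raise ValueError("no legal actions")
    return max(legal, key=lambda i: q_values[i])

# The best action overall (index 1) is illegal, so the pick falls
# back to the best *legal* one (index 2).
action = masked_argmax([0.2, 0.9, 0.5, 0.1], [True, False, True, True])
```

The environment supplies the mask each step; the hard part in practice is keeping that mask in sync with the true game state, which is easier to debug in Python than in C++.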
-
Mastering Stratego, the classic game of imperfect information
I 404'ed when I tried to access the source code?
https://github.com/deepmind/open_spiel/tree/master/open_%20s...
Someone needs to create a web front end for this -- I would love to play it.
There's an extra space in the URL to their code (at the end of the article). The correct URL is: https://github.com/deepmind/open_spiel/tree/master/open_spie...
-
Looking for Deepmind implementation of Player of Games
This is a wild guess, but I am fairly sure that internally DeepMind uses their own tool, OpenSpiel. The code is kind of dense because it does a lot, but most of the functionality you are looking for is probably somewhere in there.
ml-agents
-
At least I put effort into the AI prompt to generate some code that people can refer to, whereas you do absolutely nothing to contribute to the community.
and PR content: https://github.com/Unity-Technologies/ml-agents/commit/ed212103e451449bf84711a4a8f7bf11dfb1211a
-
TransformerXL + PPO Baseline + MemoryGym
Thanks! It really depends on the task that you want to implement. But in general, sticking to the standard gymnasium API is important. If you want to implement a 2D environment then PyGame is promising. If it's more like a game, check out Unity ML-Agents or Godot RL Agents. Anything simpler can also be just pure python code. You also need to carefully design your observation space, action space and reward function. My advice is to explore design choices of related environments.
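The "standard gymnasium API" the advice refers to can be mirrored in plain Python without depending on the gymnasium package: `reset()` returns `(observation, info)` and `step()` returns the 5-tuple `(observation, reward, terminated, truncated, info)`. Everything else in this sketch — the toy task, names, and reward — is made up for illustration:

```python
import random

class CoinFlipEnv:
    """Toy environment mirroring the Gymnasium API shape.

    Observation: heads count so far (int).
    Action: 0 = stop, 1 = flip a fair coin.
    Reward: +1 per heads; episode truncates after max_steps.
    """
    def __init__(self, max_steps=10, seed=None):
        self.max_steps = max_steps
        self.rng = random.Random(seed)

    def reset(self, seed=None):
        if seed is not None:
            self.rng.seed(seed)
        self.heads = 0
        self.steps = 0
        return self.heads, {}  # (observation, info)

    def step(self, action):
        self.steps += 1
        reward = 0.0
        terminated = action == 0  # the agent chose to stop
        if action == 1 and self.rng.random() < 0.5:
            self.heads += 1
            reward = 1.0
        truncated = self.steps >= self.max_steps
        return self.heads, reward, terminated, truncated, {}

env = CoinFlipEnv(seed=0)
obs, info = env.reset()
total = 0.0
done = False
while not done:
    obs, r, terminated, truncated, info = env.step(1)
    total += r
    done = terminated or truncated
```

Designing the observation space, action space, and reward function carefully — as the comment advises — is exactly the part this interface forces you to make explicit.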
-
Impact of using sockets to communicate between Python and RL environment
When looking into implementing RL in a game environment, I found that both Unity MLAgents and the third-party UnrealCV communicate between the game environments and Python using sockets. I am looking into implementing RL for Unreal and wondering about the performance impact of using sockets vs using RL C++ libraries to keep everything "in-engine"/native.
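The per-step socket cost is easy to ballpark from Python alone. The sketch below times round trips over a local `socketpair`, which optimistically approximates the Python-to-engine exchange (a real engine adds serialization, rendering, and scheduling on top):

```python
import socket
import time

def mean_roundtrip_seconds(n=1000, payload=b"x" * 64):
    """Average round-trip time for a small action/observation
    exchange over a connected local socket pair."""
    a, b = socket.socketpair()
    try:
        start = time.perf_counter()
        for _ in range(n):
            a.sendall(payload)    # "agent" sends an action
            b.recv(len(payload))  # "engine" receives it...
            b.sendall(payload)    # ...and replies with an observation
            a.recv(len(payload))  # (64 bytes fits in one read here;
                                  #  a real protocol would loop on recv)
        return (time.perf_counter() - start) / n
    finally:
        a.close()
        b.close()

rtt = mean_roundtrip_seconds()
```

On a typical machine this lands in the tens of microseconds per round trip, so for environments stepping at hundreds of steps per second the socket itself is rarely the bottleneck; the in-engine/native approach pays off mainly when you need thousands of parallel environment steps per second.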
-
After 8 Hours, my ML Agents learned how to work together!
For the last question, I suggest downloading this example package and taking a look at the Soccer example. It shows how to have 2 completely different Agents on different teams learn from each other.
What helped me the most to get started was this YouTube video, and after that I would recommend going through the official Unity GitHub examples and their scenes to understand how they approached different tasks.
-
I'm failing to download a repository correctly
# Install steps
- Download the `ml-agents` repository: `git clone https://github.com/Unity-Technologies/ml-agents`
- Create a Python folder in `ml-agents` and export the `social_rl` code into it: `svn export https://github.com/google-research/google-research/trunk/social_rl`
- Copy `environments.py` and `gymwrappers.py` into this Python folder.
- Create a Python 3.8 environment and install the `social_rl` requirements: `conda create -n mlagents python=3.8`, then `pip install -r requirements.txt`
- Install `ml-agents-envs`, `ml-agents`, and `gym-unity` from the `ml-agents` repository: `python setup.py install`
-
8+ Reinforcement Learning Project Ideas
Unity ML-Agents is a relatively new add-on to the Unity game engine. It allows game developers to train intelligent NPCs for games and enables researchers to create graphics- and physics-rich RL environments. Project ideas to explore include:
-
How to train agents to play volleyball using deep reinforcement learning
Descriptions of the configurations are available in the ML-Agents official documentation.
-
🏐 Ultimate Volleyball: A 3D Volleyball environment built using Unity ML-Agents
Inspired by Slime Volleyball Gym, I built a 3D Volleyball environment using Unity's ML-Agents toolkit. The full project is open-source and available at: 🏐 Ultimate Volleyball.
What are some alternatives?
gym - A toolkit for developing and comparing reinforcement learning algorithms.
muzero-general - MuZero
PettingZoo - An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities
AirSim - Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
carla - Open-source simulator for autonomous driving research.
rlcard - Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.
gym-battleship - Battleship environment for reinforcement learning tasks
AssetStudio - AssetStudio is a tool for exploring, extracting and exporting assets and assetbundles.
unity-avatar-generation - A minimal example of how to use Unity's AvatarBuilder.BuildHumanAvatar API.
TexasHoldemSolverJava - A Java implemented Texas holdem and short deck Solver
ultimate-volleyball - 3D RL Volleyball environment built on Unity ML-Agents