pysc2 vs tlol

| | pysc2 | tlol |
|---|---|---|
| Mentions | 6 | 3 |
| Stars | 7,915 | 31 |
| Stars growth | 0.2% | - |
| Activity | 3.1 | 5.6 |
| Latest commit | 10 months ago | 4 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | - |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pysc2
- Project For Beginners [StarCraft 2 AI]
- [D] What tool do you use for reinforcement learning experimentation?
  Good evening, guys. I currently use StarCraft 2 as a tool for experimenting with my deep reinforcement learning projects; I have also used OpenAI Gym.
- [D] Which GPU cloud do you use and recommend?
  DRL experiments using the StarCraft II Learning Environment.
- How A.I. Conquered Poker
- Tips for a beginner
  If you are looking to develop a machine-learning-based bot, you can go with pysc2: https://github.com/deepmind/pysc2
- How AI works in big RTS games?
  As for DeepMind: here is the pysc2 source code, if you want to take a look: https://github.com/deepmind/pysc2
tlol
- [Discussion] League of Legends Reinforcement Learning Library - Interest
  I've also released many gameplay datasets for League of Legends during Season 12 here, also for supervised learning and RL.
- [D] What tool do you use for reinforcement learning experimentation?
  The other one is a supervised learning / offline reinforcement learning [project](https://github.com/MiscellaneousStuff/tlol-py) which contains the only game-playing [dataset](https://github.com/MiscellaneousStuff/tlol) for League of Legends (70 hours of gameplay).
- League of Legends Patch 11.21 Game Playing AI (Reinforcement Learning, Supervised Learning) Dataset
  To download the dataset, go to the GitHub link and click on the Google Drive link. The dataset is stored as an SQLite database file, and the schema should be relatively self-explanatory. Happy to answer any questions.
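Since the dataset ships as an SQLite database file, a quick first step after downloading is to list the tables it contains before querying anything. The sketch below is schema-agnostic and makes no assumptions about the actual table names in the tlol dataset; the file name shown is a placeholder, not the real download name.

```python
import sqlite3


def list_tables(db_path):
    """Return the names of all tables in an SQLite database file.

    Works on any SQLite file, so it can be used to explore the
    tlol dataset without knowing its schema in advance.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]


# Example usage (the path is hypothetical):
# for table in list_tables("tlol_patch_11_21.db"):
#     print(table)
```

From there, `PRAGMA table_info(<table>)` on each name shows the columns, which should make the self-explanatory schema easy to navigate.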
What are some alternatives?
python-sc2 - A StarCraft II bot api client library for Python 3
lolgym - PyLoL OpenAI Gym Environments for League of Legends v4.20 RL Environment (LoLRLE)
smac - SMAC: The StarCraft Multi-Agent Challenge
Galaxy-Observer-UI - Toolset to create Observer Interfaces for StarCraft II / Heroes of the Storm. https://ahli.github.io/Galaxy-Observer-UI/#/
gym - A toolkit for developing and comparing reinforcement learning algorithms.
s2client-proto - StarCraft II Client - protocol definitions used to communicate with StarCraft II.
pylol - League of Legends v4.20 RL Environment (LoLRLE)
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
mtg - State of the Art Magic: the Gathering Draft and DeckBuilder AI.
dmc2gymnasium - Gymnasium integration for the DeepMind Control (DMC) suite
megastep - megastep helps you build 1-million FPS reinforcement learning environments on a single GPU