| | pysc2 | tlol-py |
| --- | --- | --- |
| Mentions | 6 | 4 |
| Stars | 7,915 | 21 |
| Stars growth (monthly) | 0.2% | - |
| Activity | 3.1 | 7.5 |
| Latest commit | 10 months ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pysc2
- Project For Beginners [StarCraft 2 AI]
- [D] What tool do you use for reinforcement learning experimentation?
  Good evening, guys. I currently use StarCraft 2 as a tool for experimenting with my deep reinforcement learning projects; I have also used OpenAI Gym.
- [D] Which GPU cloud do you use and recommend?
  DRL experiments using the StarCraft II Learning Environment.
- How A.I. Conquered Poker
- Tips for a beginner
  If you are looking to develop a machine-learning-based bot, you can go with pysc2: https://github.com/deepmind/pysc2
- How AI works in big RTS games?
  As for DeepMind: https://github.com/deepmind/pysc2 has the source code if you want to take a look.
tlol-py
- [P] League of Legends Patch 13.3 (and onwards) Reinforcement Learning and Data Analytics Libraries
  Along with this reinforcement learning library, I was considering creating a paid service that would allow people to extract the information from League of Legends replay files (*.rofl) to create data analytics / supervised learning datasets. However, along with the RL library I have open-sourced the data analytics library, and it is now free for all. This library can be found at tlol-py and depends on tlol-scraper.
- [Discussion] League of Legends Reinforcement Learning Library - Interest
  As for D4RL-like datasets, I've tried to create and release open-source datasets for League during Season 12; however, those datasets lacked scope, which made them less useful for any researcher who wanted to use them. I have already created a library, [tlol-py](https://github.com/MiscellaneousStuff/tlol-py), which allows creating League of Legends datasets from .rofl replay files for data analysis and RL / supervised learning tasks. Perhaps if I created a Discord and got community feedback, it would be much easier to create these types of datasets, as people could say what would or would not be useful for them to contain.
- [D] What tool do you use for reinforcement learning experimentation?
  The other one is a supervised learning / offline reinforcement learning [project](https://github.com/MiscellaneousStuff/tlol-py) which contains the only game-playing [dataset](https://github.com/MiscellaneousStuff/tlol) for League of Legends (70 hours of gameplay).
What are some alternatives?
python-sc2 - A StarCraft II bot api client library for Python 3
gym - A toolkit for developing and comparing reinforcement learning algorithms.
lolgym - PyLoL OpenAI Gym Environments for League of Legends v4.20 RL Environment (LoLRLE)
smac - SMAC: The StarCraft Multi-Agent Challenge
Galaxy-Observer-UI - Toolset to create Observer Interfaces for StarCraft II / Heroes of the Storm. https://ahli.github.io/Galaxy-Observer-UI/#/
s2client-proto - StarCraft II Client - protocol definitions used to communicate with StarCraft II.
pylol - League of Legends v4.20 RL Environment (LoLRLE)
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
mtg - State of the Art Magic: the Gathering Draft and DeckBuilder AI.
dmc2gymnasium - Gymnasium integration for the DeepMind Control (DMC) suite