DI-star vs pytorch-lightning

| | DI-star | pytorch-lightning |
|---|---|---|
| Mentions | 9 | 19 |
| Stars | 1,162 | 19,188 |
| Growth | 1.2% | - |
| Activity | 3.3 | 9.9 |
| Latest commit | 10 months ago | almost 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DI-star
- Better AI?
There is an AI/bot scene for SC2; I don't have many links, but you can start by looking here: https://github.com/opendilab/DI-star and https://www.youtube.com/watch?v=fvQF-24IpXs (Harstem and uThermal both have more videos vs. different bots).
- [ENG] 2022 GSL S3 Code S RO.20 Group B
- Any idea about DI-star? It's an AI model that can beat top human players in StarCraft II!
Looks like a simplified AlphaStar: it uses an LSTM RNN instead of a pointer Transformer, much heavier supervised imitation learning, Zerg vs. Zerg only (with a simplified build-order module), and a much smaller AlphaStar League: https://github.com/opendilab/DI-star/blob/main/docs/guidance_to_small_scale_training.md
For more information, please visit our GitHub page: https://github.com/opendilab/DI-star
- Any idea about DI-star? An AI model that can beat top human players in StarCraft II
- A large-scale game AI distributed training platform developed for StarCraft II
- Why can't we make a perfect AI for StarCraft through evolution?

First, let's discuss where AI stands now. If "level" refers to competitive capability, current AI is already very close to top human players in several genres: board and card games such as chess, Texas Hold'em, and Mahjong; MOBAs such as Dota 2; and RTS games such as StarCraft II. For other games, given enough engineering effort and compute, we could reach similar results. If "level" means something else, such as AI agents with human-like behavior, or intelligent NPCs tailored to individual players so that each gets a different gaming experience, those problems are still at the stage of being defined, with new technical solutions being explored.

Although traditional game AI is mostly hard-coded, it still embeds a lot of prior knowledge. In recent years, popular machine-learning techniques have performed well at competitive play, while in other areas they haven't yet found the right entry point.

Expanding on the conclusions above, the design of game AI splits into two parts: problem definition and problem solving. For competitive problems that already have complete definitions, the core task is to find the optimal strategy under an evaluation standard such as ladder points. Traditional methods can handle less complex scenarios, like chess and Gomoku, while machine-learning techniques, including deep learning and reinforcement learning, perform very well in far more complex games like StarCraft II. (For this you can try DI-star: the project is a reimplementation, with a few improvements, of AlphaStar (Zerg vs. Zerg only) based on OpenDILab.)
- Show HN: Come and fight professional AI in StarCraft II
- DI-star (StarCraft II AI, continuation of AlphaStar)
pytorch-lightning
- Problem with PyTorch Lightning and Optuna with multiple callbacks

```python
def on_validation_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
    # Trainer calls `on_validation_end` for the sanity check. Therefore, it is
    # necessary to avoid calling `trial.report` multiple times at epoch 0.
    # For more details, see
    # https://github.com/PyTorchLightning/pytorch-lightning/issues/1391.
    if trainer.sanity_checking:
        return
```
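For context, a minimal sketch of the kind of callback this method belongs to, assuming an `optuna.Trial` is injected at construction time. The class name `OptunaReportingCallback` and the `monitor` parameter are illustrative, not Optuna's actual `PyTorchLightningPruningCallback` API:

```python
import optuna
from pytorch_lightning import Callback, LightningModule, Trainer


class OptunaReportingCallback(Callback):
    """Illustrative callback that reports a validation metric to an Optuna trial."""

    def __init__(self, trial: optuna.trial.Trial, monitor: str = "val_loss") -> None:
        self.trial = trial
        self.monitor = monitor

    def on_validation_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
        # Skip the sanity-check validation pass so `trial.report` is not
        # called twice for epoch 0 (see the issue linked above).
        if trainer.sanity_checking:
            return
        value = trainer.callback_metrics.get(self.monitor)
        if value is not None:
            self.trial.report(float(value), step=trainer.current_epoch)
            if self.trial.should_prune():
                raise optuna.TrialPruned()
```

With multiple callbacks attached (the situation in the post title), this guard keeps each of them from reporting during the sanity-check pass.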
- Please comment on my planned research project structure

Under the hood, the ModelWrapper object will create an ML model based on the config (so far, an XGBoost model and a PyTorch Lightning model). Each of those will have a wrapper that conducts training and evaluation (since, from my understanding of Lightning, Trainers need to live outside the model class). For lack of a better name, I call these wrappers Fitters. For uniformity, I thought about adding a common interface, IFitter, which all model wrappers inherit, as outlined below.
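A minimal sketch of what that interface could look like; the names `IFitter` and `ModelWrapper` come from the post, while `LightningFitter` and its internals are assumptions for illustration:

```python
from abc import ABC, abstractmethod
from typing import Any

import pytorch_lightning as pl


class IFitter(ABC):
    """Common interface for model-specific training/evaluation wrappers."""

    @abstractmethod
    def fit(self, train_data: Any, val_data: Any) -> None:
        """Train the wrapped model."""

    @abstractmethod
    def evaluate(self, test_data: Any) -> dict:
        """Return evaluation metrics for the wrapped model."""


class LightningFitter(IFitter):
    """Keeps the Lightning Trainer outside the LightningModule, as the post suggests."""

    def __init__(self, model: pl.LightningModule, max_epochs: int = 10) -> None:
        self.model = model
        self.trainer = pl.Trainer(max_epochs=max_epochs)

    def fit(self, train_data: Any, val_data: Any) -> None:
        self.trainer.fit(self.model, train_data, val_data)

    def evaluate(self, test_data: Any) -> dict:
        # `Trainer.test` returns one metrics dict per test dataloader.
        return self.trainer.test(self.model, test_data)[0]
```

A ModelWrapper could then pick the concrete Fitter (this one or an XGBoost equivalent) from the config, keeping training-loop details behind a uniform interface.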
- Watch out for the (PyTorch) Lightning
Join their Slack to ask the community questions and check out the GitHub here.
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms

PyTorch Lightning benchmarks against plain PyTorch on every PR (the benchmarks make sure that it is not slower).
- [D] What Repetitive Tasks Related to Machine Learning do You Hate Doing?

There is already a ton of momentum around automating ML workflows. I would suggest contributing to a preexisting project, for instance PyTorch Lightning or fast.ai.
- PyTorch Lightning
- [D] Are you using PyTorch or TensorFlow going into 2022?

Is the problem the sheer number of options, or the fact that they are all together in one place? Would it be better if they were organized into the different trainer entrypoints (fit, validate, ...)? If that is the case, there was an RFC proposing this which you might find interesting; feel free to drop by and comment on the issue: https://github.com/PyTorchLightning/pytorch-lightning/issues/10444
- [D] Colab TPU low performance
I wanted to make a quick performance comparison between the GPU (Tesla K80) and TPU (v2-8) available in Google Colab with PyTorch. To do so quickly, I used an MNIST example from pytorch-lightning that trains a simple CNN.
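A sketch of that kind of comparison, assuming a recent PyTorch Lightning version (`accelerator=`/`devices=` arguments; Colab-era releases used `gpus=`/`tpu_cores=` instead) and a hypothetical minimal CNN, since the exact example isn't linked:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class MNISTCNN(pl.LightningModule):
    """Small CNN, just enough work to load a GPU or TPU core."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, 3), torch.nn.ReLU(),   # 28x28 -> 26x26
            torch.nn.Conv2d(32, 64, 3), torch.nn.ReLU(),  # 26x26 -> 24x24
            torch.nn.Flatten(),
            torch.nn.Linear(64 * 24 * 24, 10),
        )

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


train = datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=256, num_workers=2)

# Run once per accelerator and compare wall-clock time per epoch,
# e.g. accelerator="gpu", devices=1 vs. accelerator="tpu", devices=8.
trainer = pl.Trainer(accelerator="gpu", devices=1, max_epochs=1)
trainer.fit(MNISTCNN(), loader)
```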
- [D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?

We've noticed GPU 0 on our 3-GPU system is sometimes idle (which would explain the performance differences). However, it's unclear to us why that may be. Similar to this issue.
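The usual first mitigations for this kind of bottleneck (a generic sketch, not taken from the thread) are to push augmentations into DataLoader worker processes and overlap host-to-device copies with compute:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; in practice this is whatever map-style Dataset applies
# the CPU-bound augmentations in __getitem__.
dataset = TensorDataset(torch.randn(1024, 3, 224, 224),
                        torch.zeros(1024, dtype=torch.long))

loader = DataLoader(
    dataset,
    batch_size=128,
    num_workers=8,            # tune to the number of spare CPU cores
    pin_memory=True,          # faster, asynchronous host-to-device copies
    persistent_workers=True,  # avoid re-spawning workers every epoch
    prefetch_factor=2,        # batches each worker keeps ready in advance
)
```

With `pin_memory=True`, moving batches via `tensor.to(device, non_blocking=True)` lets the copy overlap with GPU compute.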
- [P] An introduction to PyKale https://github.com/pykale/pykale, a PyTorch library that provides a unified pipeline-based API for knowledge-aware multimodal learning and transfer learning on graphs, images, texts, and videos to accelerate interdisciplinary research. Welcome feedback/contribution!

If you want a good example for reference, take a look at PyTorch Lightning's README (https://github.com/PyTorchLightning/pytorch-lightning). It answers the three questions of "what is this?", "why should I care?", and "how do I use it?" almost instantly.
What are some alternatives?
pytorch-lightning - The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate. [Moved to: https://github.com/PyTorchLightning/pytorch-lightning]
mmdetection - OpenMMLab Detection Toolbox and Benchmark
RobustVideoMatting - Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Kornia - Geometric Computer Vision Library for Spatial AI
fastai - The fastai deep learning library
polyaxon - MLOps Tools For Managing & Orchestrating The Machine Learning LifeCycle
composer - Supercharge Your Model Training
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
sparktorch - Train and run Pytorch models on Apache Spark.