ColossalAI vs fairscale

| | ColossalAI | fairscale |
|---|---|---|
| Mentions | 42 | 6 |
| Stars | 39,061 | 3,255 |
| Growth | 0.2% | 1.3% |
| Activity | 9.7 | 3.2 |
| Last commit | 5 days ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of ColossalAI
- FLaNK AI - April 22, 2024
- Making large AI models cheaper, faster and more accessible
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with a RLHF Pipeline
> open-source a complete RLHF pipeline ... based on the LLaMA pre-trained model
I've gotten to the point where, when I see "open source AI", I know it means "well, except for $some_other_dependencies".
Anyway: https://scribe.rip/@yangyou_berkeley/colossalchat-an-open-so... and https://github.com/hpcaitech/ColossalAI#readme (Apache 2) can save you some medium.com heartache at least
- Meet ColossalChat: An Open-Source AI Solution For Cloning ChatGPT With A Complete RLHF Pipeline
Quick Read: https://www.marktechpost.com/2023/04/01/meet-colossalchat-an-open-source-ai-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline/
GitHub: https://github.com/hpcaitech/ColossalAI
Examples: https://chat.colossalai.org/
- A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data
One of the current methods for training competing models is to have ChatGPT literally create prompt -> completion data sets. That's what was used for https://github.com/hpcaitech/ColossalAI: a model based on the LLaMA weights released by Facebook, then fine-tuned on ChatGPT 3.5 prompts and completions. So yes, there is a good chance that Google is literally using ChatGPT in the training loop.
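To make the mechanics concrete, here is a minimal sketch of supervised fine-tuning a causal LM on prompt -> completion pairs with Hugging Face Transformers. This is not ColossalChat's actual training code; the checkpoint name, data path, and hyperparameters are placeholder assumptions.

```python
# Hypothetical sketch: supervised fine-tuning a causal LM on prompt -> completion
# pairs. Checkpoint name, data path, and hyperparameters are placeholders, not
# ColossalChat's actual setup.
import json
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

class PromptCompletionDataset(Dataset):
    def __init__(self, path, tokenizer, max_len=512):
        # Each JSONL line looks like {"prompt": "...", "completion": "..."}
        self.rows = [json.loads(line) for line in open(path)]
        self.tok, self.max_len = tokenizer, max_len

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        text = row["prompt"] + row["completion"] + self.tok.eos_token
        enc = self.tok(text, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt")
        ids = enc["input_ids"].squeeze(0)
        mask = enc["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore padding positions in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

base = "huggyllama/llama-7b"  # placeholder LLaMA-style checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=PromptCompletionDataset("chatgpt_pairs.jsonl", tok),
)
trainer.train()
```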
- Colossal-AI: open-source RLHF pipeline based on LLaMA pre-trained model
- ColossalChat
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with RLHF Pipeline
Here's the GitHub repo from the article:
https://github.com/hpcaitech/ColossalAI
- Open source solution replicates ChatGPT training process
The article talks briefly about their RLHF implementation. There are more details on it here: https://github.com/hpcaitech/ColossalAI/blob/a619a190df71ea3...
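As a rough illustration of one piece of such an RLHF pipeline (not ColossalAI's actual code), here is a minimal sketch of the pairwise reward-model loss typically trained on human preference data before the PPO stage; the function and tensor names are my own.

```python
# Minimal sketch of the pairwise reward-model objective used in typical RLHF
# pipelines. Not ColossalAI's implementation; names and shapes are illustrative.
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Each input holds one scalar reward per example, shape (batch,).

    Minimizing this pushes human-preferred ("chosen") responses to score
    higher than rejected ones via a log-sigmoid margin.
    """
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random scores standing in for a reward model's outputs.
print(reward_model_loss(torch.randn(8), torch.randn(8)))
```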
- how can I make my own chatGPT?
Here’s the project on GitHub: https://github.com/hpcaitech/ColossalAI
Mentions of fairscale
- [R] TorchScale: Transformers at Scale - Microsoft 2022 Shuming Ma et al - Improves modeling generality and capability, as well as training stability and efficiency.
I skimmed through the README and paper. What does this library have that hasn't already been included in xformers or fairscale?
- [D] DeepSpeed vs PyTorch native API
Things are slowly moving into PyTorch upstream, such as the ZeRO redundancy optimizer, but in my experience the team behind DeepSpeed just moves faster. There is also fairscale from the FAIR team, which seems to be a staging ground for experimental optimizations before they move into PyTorch. If you use Lightning, it's easy enough to try out these various libraries (docs here)
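For reference, the upstreamed optimizer mentioned above lives in torch.distributed.optim; a minimal sketch, assuming the script is launched with torchrun and using a placeholder model:

```python
# Minimal sketch of PyTorch's ZeroRedundancyOptimizer, which shards optimizer
# state across ranks. Assumes launch via torchrun; the model is a placeholder.
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")
rank = dist.get_rank()
model = DDP(torch.nn.Linear(1024, 1024).cuda(rank), device_ids=[rank])

optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adam,  # each rank keeps only its shard of Adam state
    lr=1e-3,
)

x = torch.randn(8, 1024, device=f"cuda:{rank}")
model(x).sum().backward()
optimizer.step()
```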
- How to Train Large Models on Many GPUs?
DeepSpeed [1] is an amazing tool for enabling different kinds of parallelism and optimizations on your model. I would definitely not recommend reimplementing everything yourself.
Probably FairScale [2] too, but I've never tried it myself (see the sketch after the links below).
[1]: https://github.com/microsoft/DeepSpeed
[2]: https://github.com/facebookresearch/fairscale
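As a rough idea of what FairScale offers, here is a minimal sketch (assuming an already-initialized process group and a placeholder model) of its OSS optimizer-state sharding and its FullyShardedDataParallel wrapper:

```python
# Minimal sketch of two FairScale building blocks. Assumes torch.distributed is
# already initialized (e.g. via torchrun); the model is a placeholder.
import torch
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
).cuda()

# Shard only the optimizer state across ranks (ZeRO stage 1 style).
optimizer = OSS(params=model.parameters(), optim=torch.optim.Adam, lr=1e-3)

# Or shard parameters, gradients, and optimizer state too (ZeRO stage 3 style).
sharded_model = FSDP(model)
```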
- [P] PyTorch Lightning Multi-GPU Training Visualization using minGPT, from 250 Million to 4+ Billion Parameters
It was helpful for me to see how DeepSpeed and FairScale stack up against vanilla PyTorch distributed training, specifically when trying to reach larger parameter counts, and to visualize the trade-off with throughput. A lot of the learnings ended up in the Lightning documentation under the advanced GPU docs!
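For context, switching between these backends in Lightning is roughly a one-line change of the Trainer's strategy argument; a hedged sketch with a placeholder LightningModule, noting that available strategy names vary across Lightning versions and installed extras:

```python
# Rough sketch: swapping distributed backends in PyTorch Lightning via the
# Trainer "strategy" argument. The module is a placeholder; available strategy
# names depend on the Lightning version and installed extras (deepspeed, etc.).
import torch
import pytorch_lightning as pl

class TinyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(1024, 1024)

    def training_step(self, batch, batch_idx):
        return self.net(batch).pow(2).mean()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# "ddp" = vanilla distributed data parallel; "deepspeed_stage_2" = DeepSpeed ZeRO-2;
# "fsdp" = fully sharded data parallel.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="deepspeed_stage_2")
```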
- [D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
I created a feature request on the FairScale project so that we can track the progress on the integration: Support ZeRO-Offload · Issue #337 · facebookresearch/fairscale (github.com)
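For comparison, ZeRO-Offload in DeepSpeed itself is turned on through the engine config; a minimal sketch, with the model and hyperparameters as placeholders and the config keys as I recall them from DeepSpeed's ZeRO documentation:

```python
# Rough sketch of enabling ZeRO-Offload in DeepSpeed via its config dict
# (offloads optimizer state to CPU). Model and hyperparameters are placeholders.
import torch
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "fp16": {"enabled": True},
}

model = torch.nn.Linear(1024, 1024)  # placeholder model
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```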
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Megatron-LM - Ongoing research training transformer models at scale
Megatron-DeepSpeed - Ongoing research training transformer language models at scale, including: BERT & GPT-2
DeepFaceLive - Real-time face swap for PC streaming or video calls
torchscale - Foundation Architecture for (M)LLMs
ivy - Convert Machine Learning Code Between Frameworks
pytorch-lightning - Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]

