| | ColossalAI | Megatron-LM |
|---|---|---|
| Mentions | 42 | 20 |
| Stars | 39,061 | 11,355 |
| Growth | 0.2% | 3.8% |
| Activity | 9.7 | 9.9 |
| Latest commit | 5 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ColossalAI
- FLaNK AI-April 22, 2024
- Making large AI models cheaper, faster and more accessible
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with an RLHF Pipeline
> open-source a complete RLHF pipeline ... based on the LLaMA pre-trained model
I've gotten to the point where, when I see "open source AI", I now know it means "well, except for $some_other_dependencies".
Anyway: https://scribe.rip/@yangyou_berkeley/colossalchat-an-open-so... and https://github.com/hpcaitech/ColossalAI#readme (Apache 2) can save you some medium.com heartache at least.
- Meet ColossalChat: An Open-Source AI Solution For Cloning ChatGPT With A Complete RLHF Pipeline
Quick Read: https://www.marktechpost.com/2023/04/01/meet-colossalchat-an-open-source-ai-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline/ Github: https://github.com/hpcaitech/ColossalAI Examples: https://chat.colossalai.org/
- A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data
One of the current methods for training competing models is to have ChatGPT literally create prompt -> completion data sets. That's what was used for https://github.com/hpcaitech/ColossalAI: a model based on the LLaMA weights released by Facebook, then fine-tuned on ChatGPT (GPT-3.5) prompts and completions. So yes, there is a good chance that Google is literally using ChatGPT in the training loop.
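In concrete terms, that bootstrapping step looks roughly like the sketch below (my own illustration, not ColossalAI's code): ask the chat completions API to answer each seed prompt and dump prompt/completion pairs to a JSONL file for supervised fine-tuning. The seed prompts, output filename, and the v1-style openai SDK usage are all assumptions.

```python
# Hypothetical sketch: build a small prompt -> completion dataset with the OpenAI API.
# Assumes `pip install openai` (v1 SDK) and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()
seed_prompts = [
    "Explain gradient checkpointing in two sentences.",
    "Write a haiku about tensor parallelism.",
]

with open("synthetic_sft_data.jsonl", "w") as f:  # filename is made up
    for prompt in seed_prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        completion = resp.choices[0].message.content
        # Each line becomes one supervised fine-tuning example for the base model (e.g. LLaMA).
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```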
- Colossal-AI: open-source RLHF pipeline based on LLaMA pre-trained model
- ColossalChat
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with RLHF Pipeline
Here's the github from the article:
https://github.com/hpcaitech/ColossalAI
- Open source solution replicates ChatGPT training process
The article only touches on their RLHF implementation briefly; there are more details on it here: https://github.com/hpcaitech/ColossalAI/blob/a619a190df71ea3...
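For orientation, one piece of such a pipeline is the reward model, which is trained to score a preferred answer above a rejected one with a pairwise ranking loss. Below is a minimal, self-contained sketch of that single stage; the tiny GRU backbone, vocabulary size, and random batch are placeholders, and this is not ColossalAI's actual implementation (a real pipeline would wrap a pretrained LLaMA-style model and then run PPO against the learned reward).

```python
# Minimal sketch of the reward-model stage of an RLHF pipeline (toy model, made-up sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    def __init__(self, vocab_size=32000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, input_ids):
        x = self.embed(input_ids)
        _, h = self.encoder(x)                 # h: (num_layers, batch, hidden)
        return self.score(h[-1]).squeeze(-1)   # one scalar reward per sequence

model = TinyRewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Dummy batch: token ids for a preferred ("chosen") and a dispreferred ("rejected") answer.
chosen = torch.randint(0, 32000, (4, 64))
rejected = torch.randint(0, 32000, (4, 64))

r_chosen, r_rejected = model(chosen), model(rejected)
loss = -F.logsigmoid(r_chosen - r_rejected).mean()  # Bradley-Terry style pairwise ranking loss
loss.backward()
opt.step()
print(float(loss))
```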
- How can I make my own ChatGPT?
Here’s the project on GitHub: https://github.com/hpcaitech/ColossalAI
Megatron-LM
- Exploring the Exciting Possibilities of NVIDIA Megatron LM: A Fun and Friendly Code Walkthrough with PyTorch & NVIDIA Apex!
# Install necessary dependencies
sudo apt update
sudo apt install python3-pip

# Install PyTorch with GPU support
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

# Clone Megatron LM repository
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM

# Install Megatron LM dependencies
pip3 install -r requirements.txt

# Install NVIDIA Apex for mixed-precision training
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir ./
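Assuming the steps above went through, a quick sanity check like the one below (my own addition, not part of the original walkthrough) confirms that PyTorch sees a CUDA device and that Apex is importable before you attempt a training run.

```python
# Sanity-check the install: CUDA-enabled PyTorch plus an importable Apex build.
import importlib
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

try:
    apex = importlib.import_module("apex")
    print("Apex found at:", apex.__file__)
except ImportError:
    print("Apex not installed; apex-based mixed precision will not be available.")
```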
- FLaNK AI Weekly for 29 April 2024
- Apple releases CoreNet, a library for training deep neural networks
https://github.com/NVIDIA/Megatron-LM
This is probably a good baseline to start thinking about LLM training at scale.
- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
- Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
This 20B model was trained on the same dataset as its predecessor, the aptly named The Pile. Furthermore, the libraries Megatron and DeepSpeed were used to achieve better computing resource utilization, and eventually GPT-NeoX evolved into its own framework for training other LLMs. It was used, for example, as the foundation for Llemma, an open-source model specializing in theorem proving.
- Why hasn't async gradient update become popular in the LLM community?
- [D] Distributed pre-training and fine-tuning
DeepSpeed + Megatron-LM
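For anyone wondering what that combination looks like in practice, here is a minimal sketch of the DeepSpeed side: wrap a model with deepspeed.initialize and a ZeRO config. The toy model, batch sizes, and learning rate are placeholders, and the sketch assumes a recent DeepSpeed launched with its own CLI (or torchrun) so the process group exists; it is not Megatron-LM's integration code.

```python
# Hypothetical sketch: ZeRO stage-2 + fp16 training setup with DeepSpeed (placeholder model/config).
import torch.nn as nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

ds_config = {
    "train_micro_batch_size_per_gpu": 4,     # illustrative values, not tuned
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},       # shard optimizer states and gradients across ranks
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# Returns (engine, optimizer, dataloader, lr_scheduler); launch with `deepspeed train.py`
# or torchrun so the distributed environment variables are set.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# A training step then looks like: loss = ...; engine.backward(loss); engine.step()
```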
- Why Did Google Brain Exist?
GPU cluster scaling has come a long way. Just check out the scaling plot here: https://github.com/NVIDIA/Megatron-LM
- Does Megatron-LM really not communicate during multi-head attention operations?
I found in their code that the softmax function performs an all-reduce before it runs.
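For anyone puzzling over the same question, the sketch below (my own simplified illustration, not Megatron-LM's code) shows where tensor parallelism does communicate: each rank computes a partial matmul over its weight shard, and an all-reduce sums the partials. In Megatron's transformer block that all-reduce sits after the attention output projection (and after the MLP's second linear) rather than inside the attention softmax; the all-reduce-before-softmax the commenter saw is, as far as I can tell, the vocab-parallel cross-entropy at the end of the model. Shapes and the gloo backend here are arbitrary choices for the demo.

```python
# Simplified row-parallel linear: local partial matmul, then all-reduce across
# tensor-parallel ranks (run with `torchrun --nproc_per_node=2 this_file.py`).
import torch
import torch.distributed as dist

def row_parallel_linear(x_shard, w_shard):
    # x_shard: (batch, in_features // world_size), w_shard: (in_features // world_size, out_features)
    partial = x_shard @ w_shard                       # each rank's partial result
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)    # sum partials -> full output on every rank
    return partial

if __name__ == "__main__":
    dist.init_process_group("gloo")                   # torchrun supplies rank/world-size env vars
    world = dist.get_world_size()
    x = torch.randn(2, 16 // world)
    w = torch.randn(16 // world, 8)
    y = row_parallel_linear(x, w)
    print(dist.get_rank(), y.shape)
```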
- I asked ChatGPT to rate the intelligence level of current AI systems out there.
Google's PaLM, Facebook's LLaMA, Nvidia's Megatron; I am surely missing some, and Apple surely has something cooking as well, but these are the big ones. Of course, none of them are publicly available, but the research papers are reputable. All of the ones mentioned should beat GPT-3, although GPT-3.5 (ChatGPT) should be a bit better, and the ability to search (Bing) should level the playing field even further; Google's PaLM with search functionality should be clearly ahead, though. This is why people are excited about GPT-4: GPT-3 was way ahead of everyone else when it came out, but others have been able to catch up since, so we'll see if GPT-4 will be another big jump among LLMs.
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
DeepFaceLive - Real-time face swap for PC streaming or video calls
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
ivy - Convert Machine Learning Code Between Frameworks
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
fairscale - PyTorch extensions for high performance and large scale training.
Qwen2.5 - Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud.
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
ChatGPT-Siri - Shortcuts for Siri powered by the ChatGPT API (gpt-3.5-turbo & gpt-4 models); supports continuous conversations, configuring the API key and system prompt, and saving chat records.
