Megatron-LM
Ongoing research training transformer models at scale (by NVIDIA)
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (by deepspeedai)
| | Megatron-LM | DeepSpeed |
|---|---|---|
| Mentions | 20 | 52 |
| Stars | 12,356 | 38,284 |
| Stars growth (monthly) | 3.2% | 1.6% |
| Activity | 9.9 | 9.7 |
| Last commit | 2 days ago | 6 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Megatron-LM
Posts with mentions or reviews of Megatron-LM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-10-25.
- Exploring the Exciting Possibilities of NVIDIA Megatron LM: A Fun and Friendly Code Walkthrough with PyTorch & NVIDIA Apex!
```bash
# Install necessary dependencies
sudo apt update
sudo apt install python3-pip

# Install PyTorch with GPU support
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

# Clone the Megatron-LM repository
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM

# Install Megatron-LM dependencies
pip3 install -r requirements.txt

# Install NVIDIA Apex for mixed-precision training
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir ./
```
- FLaNK AI Weekly for 29 April 2024
- Apple releases CoreNet, a library for training deep neural networks
https://github.com/NVIDIA/Megatron-LM
This is probably a good baseline to start thinking about LLM training at scale.
- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
- Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
This 20B model was trained on the same datasets as its predecessor, aptly named The Pile. Furthermore, the libraries Megatron and DeepSpeed were used to achieve better computing resource utilization, and eventually GPT-NeoX evolved into its own framework for training other LLMs. It was used, for example, as the foundation for Llemma, an open-source model specializing on theorem proving.
- Why async gradient update doesn't get popular in LLM community?
- [D] Distributed pre-training and fine-tuning
Deepspeed Megatron-LM
- Why Did Google Brain Exist?
GPU cluster scaling has come a long way. Just check out the scaling plot here: https://github.com/NVIDIA/Megatron-LM
- Does Megatron-LM really not communicate during multi-head attention operations?
Looking at their code, I found that an all-reduce is performed before the softmax function runs.
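For context, here is a minimal sketch of the tensor-parallel attention pattern described in the Megatron-LM paper, not the project's actual code; the function and weight names are illustrative. Each rank owns only a slice of the attention heads, so the softmax itself needs no communication, and the one forward-pass collective is the all-reduce that sums the partial results after the output projection.

```python
# Illustrative sketch of Megatron-style tensor-parallel self-attention.
# Assumes torch.distributed has been initialized (e.g. dist.init_process_group);
# wq/wk/wv hold only this rank's shard of heads, wo maps that shard back to hidden.
import torch
import torch.distributed as dist

def tensor_parallel_attention(x, wq, wk, wv, wo, n_local_heads, head_dim):
    b, s, _ = x.shape
    # Local projections: each rank computes only its own heads.
    q = (x @ wq).view(b, s, n_local_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(b, s, n_local_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(b, s, n_local_heads, head_dim).transpose(1, 2)

    # Attention over local heads only: the softmax involves no communication.
    scores = q @ k.transpose(-1, -2) / head_dim ** 0.5
    probs = torch.softmax(scores, dim=-1)
    ctx = (probs @ v).transpose(1, 2).reshape(b, s, n_local_heads * head_dim)

    # Each rank's output projection yields a partial sum of the full output;
    # a single all-reduce combines the partials across tensor-parallel ranks.
    out = ctx @ wo
    dist.all_reduce(out)
    return out
```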
- I asked ChatGPT to rate the intelligence level of current AI systems out there.
Google's PaLM, Facebook's LLaMA, Nvidia's Megatron (I am surely missing some, and Apple surely has something cooking as well), but these are the big ones. Of course, none of them are publicly available, but the research papers are reputable. All of the ones mentioned should beat GPT-3, although GPT-3.5 (ChatGPT) should be a bit better, and the ability to search (Bing) should level the playing field even further; still, Google's PaLM with search functionality should be clearly ahead. This is why people are excited about GPT-4: GPT-3 was way ahead of anyone else when it came out, but others have been able to catch up since. We'll see if GPT-4 will be another big jump among LLMs.
DeepSpeed
Posts with mentions or reviews of DeepSpeed. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.
- DeepSpeed-Domino: Communication-Free LLM Training Engine
- Can we discuss MLOps, Deployment, Optimizations, and Speed?
DeepSpeed can handle parallelism concerns, and can even offload data and model state to RAM, or even to NVMe (!?). I'm surprised I don't see this project used more.
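For a sense of what that offloading looks like in practice, here is a hedged sketch of a ZeRO stage 3 configuration that pushes optimizer state to CPU RAM and parameters to NVMe; the path, batch size, and precision settings are placeholders, so check the DeepSpeed docs for your version before relying on any of it.

```python
# Illustrative DeepSpeed ZeRO-3 offload config (normally written as ds_config.json
# and passed via --deepspeed_config or to deepspeed.initialize). Values are placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # Optimizer state lives in host RAM instead of GPU memory.
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        # Parameters spill to local NVMe when not in use.
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}

# Typical usage (sketch, not verified against a specific DeepSpeed release):
# import deepspeed
# model_engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config
# )
```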
- [P][D] A100 is much slower than expected at low batch size for text generation
- DeepSpeed-FastGen: High-Throughput for LLMs via MII and DeepSpeed-Inference
- DeepSpeed-FastGen: High-Throughput Text Generation for LLMs
- Why async gradient update doesn't get popular in LLM community?
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (r/MachineLearning)
- [P] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
- A comprehensive guide to running Llama 2 locally
- Microsoft Research proposes new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference speed. Code will be open-sourced
And https://github.com/microsoft/deepspeed
What are some alternatives?
When comparing Megatron-LM and DeepSpeed, you can also consider the following projects:
ColossalAI - Making large AI models cheaper, faster and more accessible
unsloth - Finetune Qwen3, Llama 4, TTS, DeepSeek-R1 & Gemma 3 LLMs 2x faster with 70% less memory! 🦥
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support