Megatron-LM vs DeepLearningExamples
| | Megatron-LM | DeepLearningExamples |
|---|---|---|
| Mentions | 20 | 7 |
| Stars | 10,726 | 13,636 |
| Growth | 2.7% | 1.0% |
| Activity | 9.9 | 4.0 |
| Latest commit | 5 days ago | 4 months ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Megatron-LM
- Exploring the Exciting Possibilities of NVIDIA Megatron LM: A Fun and Friendly Code Walkthrough with PyTorch & NVIDIA Apex!
```bash
# Install necessary dependencies
sudo apt update
sudo apt install python3-pip

# Install PyTorch with GPU support
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

# Clone Megatron LM repository
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM

# Install Megatron LM dependencies
pip3 install -r requirements.txt

# Install NVIDIA Apex for mixed-precision training
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir ./
```
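Once Apex is installed, a typical next step is to wrap a model and optimizer with its `amp` API. A minimal sketch follows (the model, optimizer, and loss below are placeholders for illustration, not Megatron-LM code), assuming a CUDA GPU is available:

```python
# A minimal sketch of Apex mixed-precision training (placeholder model and
# optimizer, not Megatron-LM code); requires a CUDA GPU and the Apex install above.
import torch
from apex import amp

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level "O1" patches common ops to run in FP16 while keeping FP32 master weights.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for step in range(10):
    x = torch.randn(32, 1024, device="cuda")
    loss = model(x).float().pow(2).mean()  # dummy loss just to drive the loop
    optimizer.zero_grad()
    # Scale the loss so small FP16 gradients don't underflow before backprop.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```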
- FLaNK AI Weekly for 29 April 2024
- Apple releases CoreNet, a library for training deep neural networks
https://github.com/NVIDIA/Megatron-LM
This is probably a good baseline to start thinking about LLM training at scale.
- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
- Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
This 20B model was trained on the same datasets as its predecessor, aptly named The Pile. Furthermore, the Megatron and DeepSpeed libraries were used to achieve better utilization of computing resources, and eventually GPT-NeoX evolved into its own framework for training other LLMs. It was used, for example, as the foundation for Llemma, an open-source model specializing in theorem proving.
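For readers unfamiliar with DeepSpeed, here is a minimal, hypothetical sketch of how a model gets wrapped for distributed training with it. The config values are illustrative only, not the ones used for GPT-NeoX, and the script is meant to be started with the `deepspeed` launcher:

```python
# A hypothetical, minimal DeepSpeed setup (config values are illustrative,
# not the ones used for GPT-NeoX). Launch with the `deepspeed` launcher.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# DeepSpeed builds the optimizer from the config and wraps the model.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for step in range(10):
    x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
    loss = engine(x).float().pow(2).mean()  # dummy loss for illustration
    engine.backward(loss)   # handles loss scaling and gradient synchronization
    engine.step()
```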
- Why hasn't async gradient update become popular in the LLM community?
- [D] Distributed pre-training and fine-tuning
DeepSpeed, Megatron-LM
- Why Did Google Brain Exist?
GPU cluster scaling has come a long way. Just check out the scaling plot here: https://github.com/NVIDIA/Megatron-LM
- Does Megatron-LM really not communicate during multi-head attention operations?
I found in their code that an all-reduce is performed before the softmax function runs.
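For context, here is a simplified sketch (my own illustration, not Megatron-LM's actual code) of the tensor-parallel attention pattern described in the Megatron paper: each rank owns a slice of the heads, so the attention math itself, including the softmax, needs no communication, and the only forward-pass collective is an all-reduce after the output projection. The weight names and shapes below are hypothetical.

```python
import torch
import torch.distributed as dist

def tensor_parallel_attention(x, wqkv_local, wo_local, n_local_heads, head_dim):
    """Sketch of Megatron-style tensor-parallel self-attention.

    Each rank holds a column slice of the QKV projection (its own heads) and a
    row slice of the output projection; attention runs entirely on local heads,
    and the only forward-pass communication is one all-reduce on the output.
    """
    b, s, _ = x.shape
    qkv = x @ wqkv_local                      # [b, s, 3 * n_local_heads * head_dim]
    q, k, v = qkv.chunk(3, dim=-1)

    def split_heads(t):
        return t.view(b, s, n_local_heads, head_dim).transpose(1, 2)

    q, k, v = map(split_heads, (q, k, v))
    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
    probs = torch.softmax(scores, dim=-1)     # local softmax, no communication
    ctx = (probs @ v).transpose(1, 2).reshape(b, s, n_local_heads * head_dim)

    out = ctx @ wo_local                      # row-parallel output projection
    if dist.is_initialized():
        dist.all_reduce(out)                  # sum partial outputs across ranks
    return out
```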
- I asked ChatGPT to rate the intelligence level of current AI systems out there.
Google's PaLM, Facebook's LLaMA, Nvidia's Megatron; I'm surely missing some, and Apple no doubt has something cooking as well, but these are the big ones. Of course none of them are publicly available, but the research papers are reputable. All of the ones mentioned should beat GPT-3, although GPT-3.5 (ChatGPT) should be a bit better, and the ability to search (Bing) should level the playing field even further; Google's PaLM with search functionality, though, should be clearly ahead. This is why people are excited about GPT-4: GPT-3 was way ahead of anyone else when it came out, but others have been able to catch up since. We'll see if GPT-4 will be another big jump among LLMs.
DeepLearningExamples
- A small example from Tacotron2 trained on Brandon "Atrioc" Ewing
GitHub Used: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2
- Retraining Single Shot MultiBox Detector model on a custom data set?
- Nvidia Scientists Take Top Spots in 2021 Brain Tumor Segmentation Challenge
Disclosure: I used to work on Google Cloud.
I dunno, their A100 results took about 20-30 minutes on 8 x A100s [1]. 8xA100s is like $24/hr on GCP at on-demand rates.
The efficiency was okay but not linear, so if you were more cost constrained you might go with 1xA100 for $3/hr and have ~2.5hr training times.
Getting that performance out of a GPU is more challenging than getting access to the GPUs. All the major cloud providers offer them.
(Nit: GCP deployed the 40 GiB cards rather than the later 80 GiB parts, but let's ignore that; it often doesn't matter.)
[1] https://github.com/NVIDIA/DeepLearningExamples/tree/master/P...
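Plugging in the numbers quoted above, the per-run cost comes out roughly comparable either way. A quick back-of-the-envelope calculation (treat the rates and times as rough illustrations, not current GCP pricing):

```python
# Rough per-run cost under the numbers quoted above (hypothetical on-demand
# rates and wall-clock times; actual GCP pricing will differ).
configs = {
    "8x A100": {"rate_per_hr": 24.0, "hours": 25 / 60},  # ~20-30 minute run
    "1x A100": {"rate_per_hr": 3.0, "hours": 2.5},       # ~2.5 hour run
}
for name, c in configs.items():
    print(f"{name}: ~${c['rate_per_hr'] * c['hours']:.2f} per training run")

# Scaling efficiency: 2.5 h / (25/60) h = 6x speedup on 8 GPUs, i.e. ~75% of ideal.
speedup = 2.5 / (25 / 60)
print(f"speedup ~{speedup:.1f}x on 8 GPUs (~{speedup / 8:.0%} scaling efficiency)")
```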
- Tacotron2 CPU Inferencing
Entrypoint.py file in tacotron2 folder: source code
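For reference, a sketch of CPU inference using the torch.hub entry points published for this repo ('nvidia_tacotron2', 'nvidia_waveglow', 'nvidia_tts_utils'). The helper names, defaults, and signatures are assumptions that may differ between versions, so treat this as an approximation rather than the repo's documented path:

```python
# Sketch of CPU inference via the torch.hub entry points published for this
# repo; helper names and CUDA defaults are assumptions and may differ by version.
import torch

device = torch.device("cpu")
hub_repo = "NVIDIA/DeepLearningExamples:torchhub"

tacotron2 = torch.hub.load(hub_repo, "nvidia_tacotron2", model_math="fp32").to(device).eval()
waveglow = torch.hub.load(hub_repo, "nvidia_waveglow", model_math="fp32").to(device).eval()
utils = torch.hub.load(hub_repo, "nvidia_tts_utils")

text = "Hello from Tacotron2 running on the CPU."
# Note: some versions of the utils helper assume CUDA tensors, so the inputs
# are moved back to the CPU explicitly here.
sequences, lengths = utils.prepare_input_sequence([text])
sequences, lengths = sequences.to(device), lengths.to(device)

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)  # text -> mel spectrogram
    audio = waveglow.infer(mel)                      # mel -> waveform
```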
- Skyrim Voice Synthesis Mega Tutorial
For those asking about differences to xVASynth: the models trained with xVASynth are the FastPitch models (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/FastPitch).
- Modders develop AI based app for creating new voice lines using neural speech synthesis.
There's a separate tool set from Nvidia on GitHub that the creator used to train the models. I'm not going to pretend I understand it, but you can find it here.
- [R] Data Movement Is All You Need: A Case Study on Optimizing Transformers
Nvidia's implementation of BERT has a long way to go (I don't know about the implementation of input-independent gradient computations in their backprop), but there are scaled benchmarks on DGX A100s: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
ontogpt - LLM-based ontological extraction tools, including SPIRES
ColossalAI - Making large AI models cheaper, faster and more accessible
lidar-harmonization - Code release for Intensity Harmonization for Airborne LiDAR
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
alpaca_eval - An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
llm-search - Querying local documents, powered by LLM
ChatGPT-Siri - Shortcuts for Siri using the ChatGPT API (gpt-3.5-turbo & gpt-4 models); supports continuous conversations, configuring the API key and system prompt, and saving chat records.
notebooks - Notebooks illustrating the use of Norse, a library for deep-learning with spiking neural networks.
xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)
deep_navigation - Deep Learning based wall/corridor following P3AT robot (ROS, Tensorflow 2.0)