Megatron-LM vs TensorRT

| | Megatron-LM | TensorRT |
|---|---|---|
| Mentions | 20 | 23 |
| Stars | 10,726 | 10,905 |
| Growth | 2.7% | 1.7% |
| Activity | 9.9 | 6.8 |
| Latest commit | 6 days ago | 5 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Megatron-LM
- Exploring the Exciting Possibilities of NVIDIA Megatron LM: A Fun and Friendly Code Walkthrough with PyTorch & NVIDIA Apex!
```sh
# Install necessary dependencies
sudo apt update
sudo apt install python3-pip

# Install PyTorch with GPU support
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

# Clone Megatron LM repository
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM

# Install Megatron LM dependencies
pip3 install -r requirements.txt

# Install NVIDIA Apex for mixed-precision training
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir ./
```
- FLaNK AI Weekly for 29 April 2024
- Apple releases CoreNet, a library for training deep neural networks
https://github.com/NVIDIA/Megatron-LM
This is probably a good baseline to start thinking about LLM training at scale.
- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
- Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
This 20B model was trained on the same datasets as its predecessor, aptly named The Pile. Furthermore, the libraries Megatron and DeepSpeed were used to achieve better computing resource utilization, and eventually GPT-NeoX evolved into its own framework for training other LLMs. It was used, for example, as the foundation for Llemma, an open-source model specializing in theorem proving.
- Why hasn't async gradient update become popular in the LLM community?
- [D] Distributed pre-training and fine-tuning
Deepspeed Megatron-LM
- Why Did Google Brain Exist?
GPU cluster scaling has come a long way. Just check out the scaling plot here: https://github.com/NVIDIA/Megatron-LM
- Does Megatron-LM really not communicate during multi-head attention operations?
I found in their code that the softmax function conducts an all-reduce before it runs.
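For context, Megatron-LM's tensor parallelism splits attention across GPUs so that the attention math itself needs no communication: the QKV projection is column-parallel, each rank runs softmax/attention only on its own heads, and the single forward-pass all-reduce happens after the row-parallel output projection (the backward pass has a matching all-reduce for the input gradient). Below is a simplified sketch of that pattern, not Megatron's actual classes; the weight shards, `n_local_heads`, and `tp_group` are assumptions.

```python
import torch
import torch.distributed as dist

def tp_self_attention(x, w_qkv_shard, w_out_shard, n_local_heads, head_dim, tp_group=None):
    """Simplified Megatron-style tensor-parallel self-attention.

    Each rank holds a shard of the QKV weights (column-parallel) and of the
    output projection (row-parallel). Communication happens only once in the
    forward pass, in the all-reduce after the output projection -- not inside
    the softmax itself.
    """
    b, s, _ = x.shape
    # Column-parallel QKV projection: every rank sees the full input x,
    # but only produces Q/K/V for its local subset of heads.
    qkv = x @ w_qkv_shard                      # [b, s, 3 * n_local_heads * head_dim]
    q, k, v = qkv.chunk(3, dim=-1)
    q = q.view(b, s, n_local_heads, head_dim).transpose(1, 2)
    k = k.view(b, s, n_local_heads, head_dim).transpose(1, 2)
    v = v.view(b, s, n_local_heads, head_dim).transpose(1, 2)

    # Attention over the local heads only: the softmax needs no cross-GPU traffic.
    scores = (q @ k.transpose(-1, -2)) / head_dim ** 0.5
    probs = torch.softmax(scores, dim=-1)
    ctx = (probs @ v).transpose(1, 2).reshape(b, s, n_local_heads * head_dim)

    # Row-parallel output projection produces a partial sum on each rank...
    out = ctx @ w_out_shard                    # [b, s, hidden]
    # ...and the one forward-pass all-reduce combines the partial sums.
    if dist.is_initialized():
        dist.all_reduce(out, group=tp_group)
    return out
```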
- I asked ChatGPT to rate the intelligence level of current AI systems out there.
Google's PaLM, Facebook's LLaMA, Nvidia's Megatron; I am surely missing some, and Apple surely has something cooking as well, but these are the big ones. Of course none of them are publicly available, but the research papers are reputable. All of the ones mentioned should beat GPT-3, although GPT-3.5 (ChatGPT) should be a bit better, and the ability to search (Bing) should level the playing field even further, but Google's PaLM with search functionality should be clearly ahead. This is why people are excited about GPT-4: GPT-3 was way ahead of everyone else when it came out, but others have been able to catch up since, and we'll see if GPT-4 will be another big jump among LLMs.
TensorRT
- The 6 Best LLM Tools To Run Models Locally
Extensions: Jan supports extensions like TensorRT and Inference Nitro for customizing and enhancing your AI models.
- AMD MI300X 30% higher performance than Nvidia H100, even with optimized stack
- Getting SDXL-turbo running with TensorRT
(`python demo_txt2img.py "a beautiful photograph of Mt. Fuji during cherry blossom"`). https://github.com/NVIDIA/TensorRT/tree/release/8.6/demo/Diffusion
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
- https://github.com/NVIDIA/TensorRT
TVM and other compiler-based approaches seem to perform really well and make supporting different backends easy. A good friend who's been in this space for a while told me llama.cpp is sort of a "hand-crafted" version of what these compilers could output, which I think speaks to the craftsmanship Georgi and the ggml team have put into llama.cpp, but also to the opportunity to "compile" versions of llama.cpp for other model architectures or platforms.
- Nvidia Introduces TensorRT-LLM for Accelerating LLM Inference on H100/A100 GPUs
https://github.com/NVIDIA/TensorRT/issues/982
Maybe? Looks like TensorRT does work, but I couldn't find much.
- Train Your AI Model Once and Deploy on Any Cloud
A highly optimized transformer-based encoder and decoder component, supported on PyTorch, TensorFlow, and Triton.
TensorRT, a custom ML framework/inference runtime from NVIDIA, https://developer.nvidia.com/tensorrt, but you have to port your models.
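Here, "porting your models" usually means exporting them to ONNX and then building a serialized TensorRT engine ("plan" file) with the TensorRT builder. A minimal sketch of the build step with the TensorRT Python API follows; the file names are placeholders, and the exact builder API differs slightly between TensorRT versions.

```python
import tensorrt as trt

# Placeholder file names; any ONNX model exported from your framework works.
ONNX_PATH = "model.onnx"
ENGINE_PATH = "model.plan"

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX graph into a TensorRT network definition.
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

# Builder config: workspace limit and (optionally) FP16 kernels.
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)

# Build and save the serialized engine ("plan" file).
serialized_engine = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized_engine)
```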
- A1111 just added support for TensorRT for webui as an extension!
- WIP - TensorRT accelerated stable diffusion img2img from mobile camera over webrtc + whisper speech to text. Interdimensional cable is here! Code: https://github.com/venetanji/videosd
It uses the NVIDIA demo code from: https://github.com/NVIDIA/TensorRT/tree/main/demo/Diffusion
- [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
The traditional way to deploy a model is to export it to ONNX, then to the TensorRT plan format. Each step requires its own tooling and its own mental model, and may raise issues. The most annoying thing is that you need Microsoft or NVIDIA support to get the best performance, and sometimes model support takes time. For instance, T5, a model released in 2019, is not yet correctly supported on TensorRT; in particular, the K/V cache is missing (soon it will be, according to the TensorRT maintainers, but I wrote the very same thing almost a year ago and then four months ago, so… I don't know).
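The first step of that traditional path is the ONNX export; the resulting file is then fed to the TensorRT builder (or `trtexec`) to produce the plan file, much like the build sketch earlier on this page. A minimal, hedged example of the export step for a Whisper encoder is below; the checkpoint name, input shape, and opset version are placeholder choices, not the setup used in the post.

```python
import torch
from transformers import WhisperForConditionalGeneration

# Placeholder checkpoint; the same pattern applies to larger Whisper variants.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny").eval()
encoder = model.get_encoder()

# Dummy log-mel features: [batch, n_mels, frames], as Whisper's encoder expects.
dummy_features = torch.randn(1, 80, 3000)

torch.onnx.export(
    encoder,
    (dummy_features,),
    "whisper_encoder.onnx",
    input_names=["input_features"],
    output_names=["last_hidden_state"],
    dynamic_axes={"input_features": {0: "batch"}},  # allow a variable batch size
    opset_version=17,
)
```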
- Speeding up T5
I've tried to speed it up with TensorRT and followed this example: https://github.com/NVIDIA/TensorRT/blob/main/demo/HuggingFace/notebooks/t5.ipynb - it does give a considerable speedup for batch size 1, but it does not work with bigger batch sizes, which makes it useless since I can simply increase the batch size of the HuggingFace model.
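One common reason an engine is stuck at batch size 1 is that it was built without an optimization profile covering larger batches. Below is a hedged sketch of building an engine with an explicit profile via the TensorRT Python API; the tensor name "input_ids", the sequence length, and the file names are assumptions that must match your exported ONNX graph, and this is not the notebook's own workflow.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("t5_encoder.onnx", "rb") as f:  # placeholder ONNX export of the encoder
    assert parser.parse(f.read()), "ONNX parse failed"

config = builder.create_builder_config()

# Optimization profile: let the "input_ids" tensor range from batch 1 to 32.
# Shapes are given as (min, opt, max); TensorRT tunes kernels for "opt"
# but accepts anything inside the range at runtime.
profile = builder.create_optimization_profile()
profile.set_shape("input_ids", (1, 128), (8, 128), (32, 128))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("t5_encoder.plan", "wb") as f:
    f.write(engine_bytes)
```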
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
ColossalAI - Making large AI models cheaper, faster and more accessible
FasterTransformer - Transformer related optimization, including BERT, GPT
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
ChatGPT-Siri - Shortcuts for Siri using the ChatGPT API (gpt-3.5-turbo & gpt-4 models); supports continuous conversations, configuring the API key and system prompt, and saving chat records.
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)
flash-attention - Fast and memory-efficient exact attention