| | paxml | Megatron-LM |
|---|---|---|
| Mentions | 3 | 19 |
| Stars | 403 | 8,805 |
| Growth | 7.2% | 6.4% |
| Activity | 9.1 | 9.9 |
| Last commit | 7 days ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
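The exact formula behind the activity score is not published; the sketch below is a hypothetical illustration of a recency-weighted metric in which recent commits contribute more than older ones. The half-life and scaling are assumptions, not the site's actual computation.

```python
# Hypothetical sketch of a recency-weighted activity score: each commit's
# contribution decays exponentially with age. Half-life is an assumption.
import math
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Weight each commit by exp(-age * ln2 / half_life) and sum."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += math.exp(-age_days * math.log(2) / half_life_days)
    return score

# Example: three commits; the newest one dominates the score.
commits = [
    datetime(2024, 4, 28, tzinfo=timezone.utc),
    datetime(2024, 3, 1, tzinfo=timezone.utc),
    datetime(2023, 11, 15, tzinfo=timezone.utc),
]
print(round(activity_score(commits), 3))
```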
Posts with mentions or reviews of paxml and Megatron-LM:
- FLaNK AI Weekly for 29 April 2024
- Apple releases CoreNet, a library for training deep neural networks
  https://github.com/NVIDIA/Megatron-LM
  This is probably a good baseline to start thinking about LLM training at scale.
- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
- Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
  This 20B model was trained on the same datasets as its predecessor, aptly named The Pile. Furthermore, the libraries Megatron and DeepSpeed were used to achieve better utilization of computing resources, and GPT-NeoX eventually evolved into its own framework for training other LLMs. It was used, for example, as the foundation for Llemma, an open-source model specializing in theorem proving.
- Why hasn't the async gradient update become popular in the LLM community?
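For background on what the question contrasts: in the synchronous data-parallel training that dominates LLM work, gradients are averaged with an all-reduce each step, so every replica takes the same update; async schemes instead apply gradients computed against stale weights. A toy single-process simulation of the difference (the objective, learning rate, and one-step staleness model are illustrative assumptions, not code from any of these frameworks):

```python
# Toy single-process simulation (not real multi-GPU code) contrasting the
# synchronous all-reduce update used in LLM training with an asynchronous,
# parameter-server-style update.
import numpy as np

def grad(w, batch):
    # Gradient of a toy least-squares objective 0.5 * (w - batch.mean())**2
    return w - batch.mean()

rng = np.random.default_rng(0)
batches = [rng.normal(loc=2.0, size=32) for _ in range(4)]  # one per "worker"
lr = 0.5

# Synchronous: each step, all workers' gradients are all-reduced (averaged)
# against the same weights, so every replica takes the identical update.
w_sync = 0.0
for _ in range(20):
    g = np.mean([grad(w_sync, b) for b in batches])
    w_sync -= lr * g

# Asynchronous: each worker applies its gradient as soon as it is ready,
# so gradients are computed against stale (here: one-update-old) weights.
w_async, w_stale = 0.0, 0.0
for _ in range(20):
    for b in batches:
        g = grad(w_stale, b)  # gradient from a stale weight snapshot
        w_stale = w_async     # worker re-reads weights after the update lands
        w_async -= lr * g

# Both reach the optimum (~2.0) on this toy problem, but the async
# trajectory is oscillatory; at scale, staleness degrades convergence.
print(f"sync:  {w_sync:.4f}")
print(f"async: {w_async:.4f}")
```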
- [D] Distributed pre-training and fine-tuning
  DeepSpeed, Megatron-LM
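Both of those frameworks build on PyTorch's distributed primitives. A minimal DistributedDataParallel sketch of the underlying data-parallel loop (the model, data, and hyperparameters are placeholders, not a recipe from DeepSpeed or Megatron-LM):

```python
# Minimal PyTorch DistributedDataParallel sketch of the data-parallel layer
# both frameworks build on. Model, data, and hyperparameters are placeholders.
# Launch with, e.g.: torchrun --nproc_per_node=2 ddp_minimal.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # torchrun supplies rank/world size
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()
    model = DDP(model)  # gradients are all-reduced across ranks in backward()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()  # stand-in for a real LM loss
        opt.zero_grad()
        loss.backward()  # DDP overlaps the gradient all-reduce with backward
        opt.step()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```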
- Why Did Google Brain Exist?
  GPU cluster scaling has come a long way. Just check out the scaling plot here: https://github.com/NVIDIA/Megatron-LM
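To put cluster scaling in concrete terms, here is a back-of-the-envelope estimate using the common approximation that training FLOPs ≈ 6 × parameters × tokens. The GPU count, peak throughput, and utilization below are illustrative assumptions, not numbers from Megatron-LM's plot:

```python
# Back-of-the-envelope scaling arithmetic. All inputs are assumptions.
params = 175e9          # GPT-3-scale parameter count
tokens = 300e9          # training tokens
flops = 6 * params * tokens  # common training-FLOPs approximation

gpus = 1024
peak_per_gpu = 312e12   # A100 BF16 dense peak, FLOP/s
mfu = 0.45              # assumed model FLOPs utilization

seconds = flops / (gpus * peak_per_gpu * mfu)
print(f"total FLOPs: {flops:.3e}")
print(f"estimated wall-clock: {seconds / 86400:.1f} days on {gpus} GPUs")
```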
- Does Megatron-LM really not communicate during multi-head attention operations?
  I found in their code that an all-reduce is performed before the softmax runs.
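A simplified single-process sketch of Megatron-style head partitioning helps locate the communication: each rank attends over its own heads with no cross-rank traffic, and the one reduction (an all-reduce in the real multi-GPU code) is the sum of partial outputs after the row-parallel output projection. The shapes and the two-rank split are assumptions:

```python
# Single-process illustration of Megatron-style tensor-parallel attention:
# column-parallel QKV, row-parallel output projection, one reduction at the end.
import torch

torch.manual_seed(0)
seq, d_model, n_heads, tp = 4, 8, 2, 2   # tp = number of tensor-parallel "ranks"
d_head = d_model // n_heads

x = torch.randn(seq, d_model)
wq = torch.randn(d_model, d_model)
wk = torch.randn(d_model, d_model)
wv = torch.randn(d_model, d_model)
wo = torch.randn(d_model, d_model)

def attention(q, k, v):
    scores = q @ k.T / d_head ** 0.5
    return torch.softmax(scores, dim=-1) @ v

# Reference: all heads computed on one device.
heads = [attention(x @ wq[:, i*d_head:(i+1)*d_head],
                   x @ wk[:, i*d_head:(i+1)*d_head],
                   x @ wv[:, i*d_head:(i+1)*d_head]) for i in range(n_heads)]
ref = torch.cat(heads, dim=-1) @ wo

# Tensor parallel: rank r owns head r and the matching row-slice of Wo.
partials = []
for r in range(tp):
    cols = slice(r*d_head, (r+1)*d_head)
    out_r = attention(x @ wq[:, cols], x @ wk[:, cols], x @ wv[:, cols])
    partials.append(out_r @ wo[cols, :])  # rank-local partial output

result = sum(partials)  # <-- the one all-reduce, after attention
print(torch.allclose(ref, result, atol=1e-5))  # True
```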
- I asked ChatGPT to rate the intelligence level of current AI systems out there.
  Google's PaLM, Facebook's LLaMA, Nvidia's Megatron; I am surely missing some, and Apple surely has something cooking as well, but these are the big ones. Of course, none of them are publicly available, but the research papers are reputable. All of the ones mentioned should beat GPT-3, although GPT-3.5 (ChatGPT) should be a bit better, and the ability to search (Bing) should level the playing field even further; Google's PaLM with search functionality should be clearly ahead. This is why people are excited about GPT-4: GPT-3 was way ahead of everyone else when it came out, but others have been able to catch up since. We'll see if GPT-4 will be another big jump among LLMs.
- GPT-4 Will Be 500x Smaller Than People Think - Here Is Why
  Found relevant code at https://github.com/nvidia/megatron-lm + all code implementations here
What are some alternatives?
puck - The visual editor for React
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
hands-on-train-and-deploy-ml - Train and Deploy an ML REST API to predict crypto prices, in 10 steps
ColossalAI - Making large AI models cheaper, faster and more accessible
co-tracker - CoTracker is a model for tracking any point (pixel) on a video.
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
concrete-ml - Concrete ML: Privacy Preserving ML framework built on top of Concrete, with bindings to traditional ML frameworks.
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
yolov7-object-tracking - YOLOv7 Object Tracking Using PyTorch, OpenCV and Sort Tracking
xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)