lightseq vs FasterTransformer
| | lightseq | FasterTransformer |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 3,080 | 5,436 |
| Growth | 1.1% | 3.8% |
| Activity | 3.7 | 4.3 |
| Latest commit | 11 months ago | 24 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
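The site does not publish the exact weighting behind this score, but the idea that "recent commits have higher weight" can be sketched as an exponentially decaying sum. This is a guess at the shape of such a metric, not the site's formula; the half-life value is an assumption.

```python
def activity_score(commit_ages_days, half_life_days=90.0):
    # Each commit contributes 0.5 ** (age / half_life): a commit from today
    # counts 1.0, one exactly a half-life old counts 0.5, and so on.
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with mostly recent commits scores higher than one with the
# same number of mostly old commits.
print(activity_score([1, 3, 10, 30]))        # recent activity
print(activity_score([200, 250, 300, 400]))  # stale activity
```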
Posts with mentions or reviews of FasterTransformer
-
Train Your AI Model Once and Deploy on Any Cloud
https://docs.nvidia.com/ai-enterprise/overview/0.1.0/platfor...
RIVA: NVIDIA® Riva, a premium edition of NVIDIA AI Enterprise software, is a GPU-accelerated speech and translation AI SDK.
FasterTransformer: https://github.com/NVIDIA/FasterTransformer
-
Whether ML computation engineering expertise will remain valuable is the question.
There is likely a spectrum of this expertise. For instance: https://github.com/NVIDIA/FasterTransformer and https://github.com/microsoft/DeepSpeed
-
Optimized implementation of training/fine-tuning of LLMs [D]
Has anyone tried to optimize the forward and backward passes using custom CUDA code or fused kernels to speed up training of current LLMs? I have only seen FasterTransformer (NVIDIA/FasterTransformer) and similar tools, but they focus only on inference.
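As a minimal sketch of the "fused kernel" idea the post asks about: TorchScript can fuse an elementwise chain such as bias add + GELU into fewer CUDA kernels, and autograd still provides the backward pass, so the optimization applies to training, not just inference. The function and tensor names below are illustrative, not from FasterTransformer.

```python
import torch

@torch.jit.script
def fused_bias_gelu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # One elementwise chain (bias add + tanh-approximated GELU); the JIT
    # fuser can compile this into far fewer kernel launches than eager mode.
    y = x + bias
    return 0.5 * y * (1.0 + torch.tanh(0.7978845608 * (y + 0.044715 * y * y * y)))

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(8, 1024, device=device, requires_grad=True)
bias = torch.randn(1024, device=device, requires_grad=True)

out = fused_bias_gelu(x, bias)
out.sum().backward()  # autograd differentiates through the fused op
```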
-
Exploring Ghostwriter, a GitHub Copilot alternative
Replit built Ghostwriter on open source foundations: Salesforce's CodeGen model, Nvidia's FasterTransformer and Triton inference server for highly optimized decoders, and knowledge distillation of the CodeGen model from two billion parameters down to a faster one-billion-parameter model.
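For readers unfamiliar with the distillation step mentioned above, here is a generic sketch of the standard knowledge-distillation objective (a smaller student matching a larger teacher's softened outputs). The temperature and shapes are illustrative assumptions, not Replit's actual setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # Soften both distributions with a temperature, then pull the student
    # toward the teacher with KL divergence, scaled by T^2 (Hinton et al., 2015).
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Toy shapes: a batch of 4 positions over a 50k-token vocabulary.
student = torch.randn(4, 50_000, requires_grad=True)
teacher = torch.randn(4, 50_000)
distillation_loss(student, teacher).backward()
```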
-
Why is self-attention not as deployment friendly?
-
[P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Nvidia FasterTransformer is a mix of PyTorch and dedicated CUDA/C++ code. The performance boost is huge on T5: they report a 10X speedup, similar to TensorRT. However, the speedup is computed on a translation task where sequences are 25 tokens long on average. In our experience, improvements on very short sequences tend to shrink by large margins on longer ones. Still, we plan to dig deeper into this project, as it implements very interesting ideas.
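The caveat about sequence length is easy to check for any model: time it at several lengths rather than trusting a single short-sequence number. A minimal timing harness, using a toy encoder as a stand-in for T5 (sizes and lengths are illustrative):

```python
import time
import torch

# Toy encoder standing in for the real model under test.
layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
model = torch.nn.TransformerEncoder(layer, num_layers=6).eval()

for seq_len in (25, 128, 512, 1024):
    x = torch.randn(8, seq_len, 512)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(5):
            model(x)
        elapsed = (time.perf_counter() - start) / 5
    # Attention cost grows superlinearly with seq_len, so a speedup measured
    # at 25 tokens says little about behaviour at 1024.
    print(f"seq_len={seq_len:5d}: {elapsed * 1000:8.1f} ms/batch")
```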
-
[P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
On the other side of the spectrum, there are Nvidia demos (here or there) showing how to manually build a full Transformer graph, operator by operator, in TensorRT to get the best performance from their hardware. It's out of reach for many NLP practitioners, and it's time consuming to debug/maintain/adapt to a slightly different architecture (I tried). Plus, there is a secret: the very optimized model only works for specific sequence lengths and batch sizes. The truth is that, so far (and it will improve soon), it's mainly for the MLPerf benchmark (the one used to compare DL hardware), marketing content, and very specialized engineers.
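The "only works for specific sequence lengths and batch sizes" problem is one reason exported inference graphs are usually given dynamic axes, so one graph serves many shapes instead of one engine per (batch, length) pair. A minimal sketch using torch.onnx.export on a Hugging Face model; the model name is just an example:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "distilbert-base-uncased"  # example model, not the one from the post
model = AutoModel.from_pretrained(name).eval()
tokenizer = AutoTokenizer.from_pretrained(name)
inputs = tokenizer("hello world", return_tensors="pt")

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    # Mark batch and sequence dimensions as dynamic so the exported graph
    # accepts varying shapes at inference time.
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
)
```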
What are some alternatives?
accelerate-kullback-liebler
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
cuhnsw - CUDA implementation of Hierarchical Navigable Small World Graph algorithm
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
cuml - cuML - RAPIDS Machine Learning Library
parallelformers - Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit
intel-extension-for-transformers - ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.