FasterTransformer
transformers
| | FasterTransformer | transformers |
|---|---|---|
| Mentions | 7 | 175 |
| Stars | 5,456 | 125,021 |
| Stars growth (monthly) | 4.2% | 3.1% |
| Activity | 4.3 | 10.0 |
| Latest commit | about 1 month ago | 4 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FasterTransformer
-
Train Your AI Model Once and Deploy on Any Cloud
https://docs.nvidia.com/ai-enterprise/overview/0.1.0/platfor...
RIVA: NVIDIA® Riva, a premium edition of NVIDIA AI Enterprise software, is a GPU-accelerated speech and translation AI SDK
FasterTransformer: https://github.com/NVIDIA/FasterTransformer an
-
Whether ML computation engineering expertise will be valuable is the question.
There could be some spectrum of this expertise. For instance, https://github.com/NVIDIA/FasterTransformer, https://github.com/microsoft/DeepSpeed
-
Optimized implementation of training/fine-tuning of LLMs [D]
Has anyone tried to optimize the forward and backward passes using custom CUDA code or fused kernels to speed up the training of current LLMs? I have only seen FasterTransformer (NVIDIA/FasterTransformer) and other similar tools, but they focus only on inference.
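As context for the question above, here is a minimal pure-Python sketch of what kernel fusion buys: the unfused version makes two passes and materializes an intermediate buffer (analogous to an intermediate tensor in GPU global memory), while the fused version computes bias-add + GELU in a single pass. The function names are illustrative, not FasterTransformer's API.

```python
import math

def gelu(x):
    # tanh approximation of GELU, common in transformer kernels
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def bias_gelu_unfused(xs, bias):
    # two passes over the data, one intermediate buffer
    tmp = [x + bias for x in xs]      # "kernel" 1: bias add
    return [gelu(t) for t in tmp]     # "kernel" 2: activation

def bias_gelu_fused(xs, bias):
    # one pass, no intermediate buffer: each element is read once
    return [gelu(x + bias) for x in xs]

xs = [0.5, -1.0, 2.0]
assert bias_gelu_fused(xs, 0.1) == bias_gelu_unfused(xs, 0.1)
```

On a GPU the saving is not the arithmetic but the avoided round trip through global memory and the avoided extra kernel launch, which is why fusion matters most for memory-bound elementwise ops.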
-
Exploring Ghostwriter, a GitHub Copilot alternative
Replit built Ghostwriter on open source, basing it on Salesforce's CodeGen, using Nvidia's FasterTransformer and Triton server for highly optimized decoders, and knowledge-distilling the CodeGen model from two billion parameters down to a faster one-billion-parameter model.
- Why is self-attention not as deployment-friendly?
-
[P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Nvidia FasterTransformer is a mix of Pytorch and dedicated CUDA/C++ code. The performance boost on T5 is huge: they report a 10X speedup, like TensorRT. However, the speedup is computed on a translation task where sequences are 25 tokens long on average. In our experience, improvements on very short sequences tend to shrink by large margins on longer ones. Still, we plan to dig deeper into this project as it implements very interesting ideas.
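A back-of-the-envelope model (my illustration, not from the post) of why a speedup measured on ~25-token sequences can shrink on longer ones: suppose the optimized runtime mostly removes a fixed per-step overhead (kernel launches, framework dispatch), while the attention cost grows with sequence position in both runtimes. The constants below are made up.

```python
def decode_time(n_tokens, per_step_overhead, attn_cost_per_pos=0.01):
    # autoregressive decoding: step i attends over i previous positions
    return sum(per_step_overhead + attn_cost_per_pos * i for i in range(n_tokens))

def baseline(n):
    return decode_time(n, per_step_overhead=1.0)

def optimized(n):
    return decode_time(n, per_step_overhead=0.1)  # overhead mostly fused away

for n in (25, 500):
    print(n, round(baseline(n) / optimized(n), 2))
# the measured "speedup" is ~5x at 25 tokens but only ~1.3x at 500,
# because the unoptimized attention term dominates long sequences
```

This is just Amdahl's law: the optimized fraction of the runtime shrinks as sequences grow.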
-
[P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
On the other side of the spectrum, there are Nvidia demos (here or there) showing us how to manually build a full Transformer graph (operator by operator) in TensorRT to get the best performance from their hardware. It's out of reach for many NLP practitioners, and it's time-consuming to debug/maintain/adapt to a slightly different architecture (I tried). Plus, there is a secret: the very optimized model only works for specific sequence lengths and batch sizes. The truth is that, so far (and it will improve soon), it's mainly for the MLPerf benchmark (the one used to compare DL hardware), marketing content, and very specialized engineers.
transformers
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
are some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
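A minimal sketch of the candidate-generation step behind prompt lookup decoding (my simplification of the idea in the linked issue, not the actual transformers implementation): find the most recent occurrence of the last few generated tokens inside the prompt, and propose the tokens that followed it as draft continuations for the model to verify in parallel.

```python
def prompt_lookup_candidates(prompt_ids, generated_ids, ngram_size=2, num_draft=3):
    # key: the trailing n-gram of what has been generated so far
    key = generated_ids[-ngram_size:]
    if len(key) < ngram_size:
        return []
    # scan the prompt right-to-left so the most recent match wins
    for start in range(len(prompt_ids) - ngram_size, -1, -1):
        if prompt_ids[start:start + ngram_size] == key:
            follow = prompt_ids[start + ngram_size:start + ngram_size + num_draft]
            if follow:
                return follow
    return []

prompt = [5, 8, 13, 21, 8, 13, 34, 55]
generated = [1, 8, 13]
print(prompt_lookup_candidates(prompt, generated))  # -> [34, 55]
```

Swapping in a pretrained n-gram model, as the comment suggests, would just replace the prompt scan with a lookup into precomputed n-gram continuation tables.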
-
AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check their work and give them a star on GitHub, if you please ❤️
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + fast.ai's Ranger21 optimizer was the go-to combination for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was the best, whilst 1e-5 was actually better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
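For readers unfamiliar with the 1-cycle policy mentioned above, here is a minimal sketch of its shape (an assumed form for illustration, not fast.ai's exact implementation): the learning rate ramps from a low value up to a peak over a warmup fraction of training, then anneals back down to a much lower floor, both with cosine curves.

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-3, start_div=25.0,
                 final_div=1e4, pct_warmup=0.3):
    warmup_steps = int(total_steps * pct_warmup)
    if step < warmup_steps:
        # cosine ramp from max_lr/start_div up to max_lr
        t = step / max(1, warmup_steps)
        lo = max_lr / start_div
        return lo + (max_lr - lo) * (1 - math.cos(math.pi * t)) / 2
    # cosine anneal from max_lr down to max_lr/final_div
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    lo = max_lr / final_div
    return lo + (max_lr - lo) * (1 + math.cos(math.pi * t)) / 2

schedule = [one_cycle_lr(s, 100) for s in range(100)]
assert max(schedule) <= 1e-3 + 1e-12  # peak never exceeds max_lr
```

The single up-then-down cycle is what the comment means by "the 1 cycle learning rate stuck": the schedule shape transferred to transformers even when the range finder's suggested peak did not.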
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
-
Failing to reproduce the same evaluation metric scores during inference.
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
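The drift described above is consistent with reduced-precision, order-dependent accumulation: floating-point addition is not associative, so any change in reduction order (different batch size, different kernels at inference time) can change results. Even in float64 the effect is easy to show; in fp16 mixed precision it is far larger. Illustrative only.

```python
# Summing the same three numbers in two different orders.
a = [1e16, 1.0, -1e16]

left_to_right = (a[0] + a[1]) + a[2]   # the 1.0 is absorbed into 1e16 and lost
small_first   = a[1] + (a[0] + a[2])   # the large terms cancel first

print(left_to_right, small_first)      # 0.0 vs 1.0
```

Exactly reproducing metrics therefore usually requires fixing precision, batch size, and kernel selection between the training-time evaluation and standalone inference.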
What are some alternatives?
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
llama - Inference code for Llama models
parallelformers - Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
huggingface_hub - The official Python client for the Huggingface Hub.