TensorRT
nn
| | TensorRT | nn |
|---|---|---|
| Mentions | 5 | 26 |
| Stars | 2,328 | 48,004 |
| Growth | 3.2% | 8.5% |
| Activity | 9.6 | 7.7 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | Python | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TensorRT
- Learn TensorRT optimization
- [P] [D] I made a TensorRT example. I hope this will help beginners. I also have a question about TensorRT best practice.
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
Have you tried the new Torch-TensorRT compiler from NVIDIA?
- PyTorch 1.10
You can also quantize your model to FP16 or Int8 using PTQ, which should give you an additional inference speed-up.
Here is a tutorial[2] to leverage TRTorch.
[1] https://github.com/NVIDIA/TRTorch/tree/master/core
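The snippet above mentions Int8 post-training quantization (PTQ). As a rough illustration of what PTQ does numerically, here is a minimal pure-Python sketch of per-tensor affine Int8 quantization. The function names are invented for this example; this is not the Torch-TensorRT/TRTorch API, which calibrates scales per layer from sample data:

```python
# Sketch of the affine Int8 quantization arithmetic behind PTQ.
# Illustrative only; real toolchains calibrate scale/zero-point per
# layer (or per channel) from representative inputs.

def quantize_int8(values):
    """Map floats to int8 codes with a per-tensor scale and zero point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # 256 int8 levels; avoid div-by-zero
    zero_point = round(-128 - lo / scale)  # so that `lo` maps near -128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(code - zero_point) * scale for code in q]

weights = [-1.5, -0.2, 0.0, 0.7, 2.1]
q, scale, zp = quantize_int8(weights)
recovered = dequantize_int8(q, scale, zp)
# Round-trip error stays within about half a quantization step (scale / 2).
```

The speed-up comes from doing the heavy matrix math on the int8 codes; the scale and zero point let the result be mapped back to real values afterwards.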
nn
- Can't remember name of website that has explanations side-by-side with code
Hey are you talking about https://nn.labml.ai/ ?
- [D] Recent ML papers to implement from scratch
- [P] GPT-NeoX inference with LLM.int8() on 24GB GPU
Implementation & LM Eval Harness Results
- [P] Fine-tuned the GPT-NeoX Model to Generate Quotes
GitHub: https://github.com/labmlai/annotated_deep_learning_paper_implementations/tree/master/labml_nn/neox
- Best resources to learn recent transformer papers and stay updated [D]
Regarding implementations this helps me: https://nn.labml.ai/
- Introductory papers to implement
- How to convert research papers to code?
- [D] How to convert papers to code?
Dunno if this is directly helpful, but this website has implementations with the math side by side: https://nn.labml.ai/
- [D] Looking for open source projects to contribute
- Resource for papers explanation
What are some alternatives?
torch2trt - An easy to use PyTorch to TensorRT converter
GFPGAN-for-Video-SR - A Colab notebook for video super-resolution using GFPGAN
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
cutlass - CUDA Templates for Linear Algebra Subroutines
functorch - JAX-like composable function transforms for PyTorch.
onnx-simplifier - Simplify your onnx model
ZoeDepth - Metric depth estimation from a single image
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
Basic-UI-for-GPT-J-6B-with-low-vram - A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Model loading requires 12 GB of free RAM.