TensorRT Alternatives
Similar projects and alternatives to TensorRT
- onnxruntime
  ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
- TensorRT
  NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
- transformer-deploy
  Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
- mlops-course
  A project-based course on the foundations of MLOps to responsibly develop, deploy and maintain ML.
- nn
  🧑‍🏫 59 implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), GANs (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
TensorRT reviews and mentions
- Learn TensorRT optimization
- [P] [D] I made a TensorRT example. I hope this will help beginners. And I also have a question about TensorRT best practice.
- [P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
  Have you tried the new Torch-TensorRT compiler from NVIDIA? (A compilation sketch follows this list.)
- PyTorch 1.10
  You can also quantize your model to FP16 or Int8 using post-training quantization (PTQ), which should give you an additional inference speed-up. Here is a tutorial[2] to leverage TRTorch. (An FP16 sketch follows this list.)
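For readers unfamiliar with the Torch-TensorRT compiler asked about above, here is a minimal sketch of what compilation looks like. It is an illustration, not code from the thread: it assumes a CUDA GPU plus the torch, torch_tensorrt and torchvision packages, and uses resnet50 purely as a stand-in model.

```python
# Minimal sketch: compiling a PyTorch model with Torch-TensorRT.
# Assumes a CUDA-capable GPU; resnet50 is just a stand-in model,
# not one discussed in the original thread.
import torch
import torch_tensorrt
import torchvision.models as models

model = models.resnet50(weights=None).eval().cuda()

# Fixing the input shape lets TensorRT build an optimized static engine.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float32},  # FP32 baseline; see the FP16 sketch below
)

x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    print(trt_model(x).shape)  # torch.Size([1, 1000])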
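And a minimal sketch of the FP16 path from the PyTorch 1.10 mention, written against the current Torch-TensorRT API (TRTorch was renamed to Torch-TensorRT). The toy Sequential model is a placeholder; inputs and outputs stay FP32 while TensorRT is allowed to select FP16 kernels internally. Int8 PTQ would additionally require a calibration dataloader, which is omitted here.

```python
# Minimal sketch: reduced-precision (FP16) compilation via Torch-TensorRT
# (formerly TRTorch). The toy model is a placeholder; Int8 PTQ would also
# need a calibration dataloader, omitted here.
import torch
import torch_tensorrt

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).eval().cuda()

trt_fp16 = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # allow TensorRT to pick FP16 kernels
)

x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    y = trt_fp16(x)  # runs through the FP16-enabled TensorRT engine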
Stats
pytorch/TensorRT is an open-source project licensed under the OSI-approved BSD 3-Clause "New" or "Revised" License.