ONNX Runtime Alternatives
Similar projects and alternatives to onnxruntime
-
TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (by pytorch)
-
zenml
ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
-
AppleNeuralHash2ONNX
Convert Apple NeuralHash model for CSAM Detection to ONNX.
-
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
-
netron
Visualizer for neural network, deep learning, and machine learning models
-
TensorRT
NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
-
PyTorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
onnxruntime reviews and mentions
-
[P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
Are you doing dynamic or static quantization? Static quantization can be tricky; dynamic quantization is usually more straightforward. Also, if you are dealing with encoder-decoder models, it could be that quantization error accumulates in the decoder. For the slowdowns you are seeing, there could be many reasons. The first thing to check is whether running through ONNX Runtime / OpenVINO is at least on par with (if not better than) PyTorch eager. If not, there may be an issue at a higher level (e.g. here). If yes, it could be that your CPU does not support AVX-VNNI instructions, for example. Also, the speedups from quantization may vary greatly depending on batch size and sequence length.
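For reference, a minimal sketch of dynamic quantization with ONNX Runtime's Python quantization utilities (the model paths here are hypothetical):

```python
# Dynamic quantization: weights are quantized ahead of time, activation
# scales are computed at runtime, so no calibration dataset is needed.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",         # hypothetical path to the FP32 model
    model_output="model.quant.onnx",  # where the INT8 model is written
    weight_type=QuantType.QInt8,      # quantize weights to signed 8-bit
)
```

Benchmarking the quantized model against both the unquantized ONNX model and PyTorch eager is useful, since each comparison isolates a different cause of slowdown.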
-
[P] Supporting neural network inference in web browsers
There already exists a wide variety of neural network inference engines that run in web browsers (e.g. TensorFlow.js and, my personal favorite for use with PyTorch models, ONNX Runtime Web), but pre- and post-processing has always required imperative manipulation of flat buffers rather than a clean ndarray interface.
-
Question about including parent directory C++ files in Rust crate
So I am working on moving the onnxruntime bindings upstream to https://github.com/microsoft/onnxruntime. The directory structure I have is
-
nadder - NumPy in 8kB of JS, powered by ES6 black magic
Hi there! I've recently been running some PyTorch neural networks in the browser with the help of ONNX Runtime Web, but I was missing NumPy's useful syntax while running my pre- and post-processing. So I decided to explore the magical world of ES6 Proxy to create a fast, small ndarray library with NumPy-like syntax. I basically use proxies to treat the slice notation as a "key" into the ndarray object. I also added a tiny DSL for embedded calculations.
-
YOLOv7 object detection in Ruby in 10 minutes
🔥 ONNX Runtime - the high performance scoring engine for ML models - for Ruby
-
An open-source library for optimizing deep learning inference: (1) you select the target optimization, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and (3) it serves an optimized model that runs much faster at inference time
Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFLite and XLA. A huge thank you to the open-source community for developing and maintaining these amazing projects.
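However the optimized model is produced, it is ultimately served through a standard runtime. As an illustration, a minimal sketch of running an optimized ONNX model with the ONNX Runtime Python API (the model path and input shape are examples):

```python
import numpy as np
import onnxruntime as ort

# Load the optimized model; the providers list selects the execution backend.
sess = ort.InferenceSession("model.optimized.onnx",
                            providers=["CPUExecutionProvider"])

# Query the graph for its input name instead of hard-coding it.
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input

outputs = sess.run(None, {input_name: x})  # None returns all model outputs
print(outputs[0].shape)
```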
-
Does anyone actually use ML.NET?
Re: ONNX, if you run into similar issues in the future, feel free to reach out in our GitHub repo or the ONNX Runtime repo and we'd be happy to help!
-
[P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
Microsoft's ONNX Runtime T5 export tool / fastT5: to support caching, it exports the decoder twice, once with cache and once without (for the first generated token). This doubles the memory footprint, which makes the solution difficult to use for these large transformer models.
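For context, this is roughly what the fastT5 flow being described looks like, assuming fastT5's export_and_get_onnx_model entry point (the model name is an example):

```python
# Sketch of exporting T5 to ONNX with fastT5 and generating text. As noted
# above, the export produces the decoder twice: once with past key/value
# cache inputs and once without (used for the first generated token).
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model = export_and_get_onnx_model("t5-small")  # encoder + two decoder graphs
tokenizer = AutoTokenizer.from_pretrained("t5-small")

inputs = tokenizer("translate English to French: Hello, world!",
                   return_tensors="pt")
tokens = model.generate(input_ids=inputs["input_ids"],
                        attention_mask=inputs["attention_mask"],
                        num_beams=2)
print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
```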
-
💡 What's new in txtai 4.0
txtai supports generating vectors with Hugging Face Transformers, PyTorch, ONNX and Word Vector models.
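As an illustration, a minimal sketch of building and querying a txtai embeddings index (the model path is an example):

```python
# Index a few texts and run a semantic search with txtai.
from txtai.embeddings import Embeddings

embeddings = Embeddings({"path": "sentence-transformers/nli-mpnet-base-v2"})

data = ["US tops 5 million confirmed virus cases",
        "Canada's last fully intact ice shelf has suddenly collapsed"]

# index() consumes (id, data, tags) tuples
embeddings.index([(uid, text, None) for uid, text in enumerate(data)])

# search() returns a list of (id, score) pairs for the best matches
print(embeddings.search("public health story", 1))
```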
-
Inference machine learning models in the browser with JavaScript and ONNX Runtime Web
ONNX Runtime GitHub
Stats
microsoft/onnxruntime is an open-source project licensed under the MIT License, an OSI-approved license.