Top 13 C++ Onnx Projects
- ncnn: a high-performance neural network inference framework optimized for the mobile platform (see the C++ sketch after this list)
- FastDeploy: an easy-to-use and fast deep learning model deployment toolkit for cloud, mobile, and edge. It covers 20+ mainstream image, video, text, and audio scenarios and 150+ SOTA models, with end-to-end optimization and multi-platform, multi-framework support.
- OnnxStream: a lightweight inference library for ONNX files, written in C++. It can run SDXL on a Raspberry Pi Zero 2, and also Mistral 7B on desktops and servers.
- deepC: a vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers
- vs-mlrt: efficient CPU/GPU/Vulkan ML runtimes for VapourSynth, with built-in support for waifu2x, DPIR, RealESRGANv2/v3, Real-CUGAN, RIFE, SCUNet, and more
- Onnx2Text: converts an ONNX ML model protobuf from/to text, or a tensor from/to text/CSV/raw data (Windows command-line tool)
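As referenced in the ncnn entry above, here is a minimal sketch of ncnn's C++ inference API. The model filenames, blob names ("in0"/"out0"), and input shape are illustrative assumptions; the real names come from your converted model.

```cpp
#include <net.h>  // ncnn

int main() {
    ncnn::Net net;
    // Hypothetical .param/.bin pair, e.g. produced from an ONNX model
    // with ncnn's conversion tools (onnx2ncnn / pnnx).
    net.load_param("model.param");
    net.load_model("model.bin");

    // Hypothetical 224x224 3-channel input, zero-filled for brevity.
    ncnn::Mat in(224, 224, 3);
    in.fill(0.f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("in0", in);      // input blob name depends on the model

    ncnn::Mat out;
    ex.extract("out0", out);  // runs the graph up to this blob
    return 0;
}
```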
Project mention: AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source | news.ycombinator.com | 2024-02-12

ncnn uses Vulkan for GPU acceleration; I've seen it used in a few projects to get AMD hardware support. https://github.com/Tencent/ncnn
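Following up on the Vulkan point: a minimal sketch of opting ncnn into its Vulkan compute backend, which is what makes it work across GPU vendors (AMD included). Device index 0 and the model filenames are illustrative assumptions.

```cpp
#include <net.h>
#include <gpu.h>  // ncnn's Vulkan helpers

int main() {
    ncnn::Net net;
    // Enable the Vulkan backend before loading the model files.
    if (ncnn::get_gpu_count() > 0) {
        net.opt.use_vulkan_compute = true;
        net.set_vulkan_device(0);   // first GPU; an illustrative choice
    }
    net.load_param("model.param");  // hypothetical model files
    net.load_model("model.bin");
    // Extraction then proceeds exactly as in the CPU case.
    return 0;
}
```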
Starting from version 1.5.1, the backend integrates changes borrowed from sam_onnx_full_export to support OnnxRuntime 1.17.x and later versions. Note that on macOS, running the project directly from the command line suffers from memory leaks, making inference slower than normal; it is therefore best to run the project inside a Docker container, except for development or debugging.
Project mention: Show HN: OnnxStream running TinyLlama and Mistral 7B, with CUDA support | news.ycombinator.com | 2024-01-14
Project mention: Stable Diffusion implemented by ncnn framework based on C++, supported txt2img and img2img! | /r/StableDiffusion | 2023-06-08
If you start with an ONNX model that you still want to optimize, you can use the official ONNX optimizer tool: https://github.com/onnx/optimizer.
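A hedged sketch of driving that optimizer from C++, assuming the Optimize entry point declared in the repo's onnxoptimizer/optimize.h header; the pass names below are examples, so consult the project's pass registry for the full list. (The onnxoptimizer Python package exposes the same passes if you prefer scripting.)

```cpp
#include <fstream>
#include <string>
#include <vector>

#include <onnx/onnx_pb.h>            // onnx::ModelProto (protobuf)
#include <onnxoptimizer/optimize.h>  // assumed C++ entry point

int main() {
    // Parse the model protobuf (standard protobuf I/O).
    onnx::ModelProto model;
    std::ifstream in("model.onnx", std::ios::binary);
    if (!model.ParseFromIstream(&in)) return 1;

    // Example pass names from onnxoptimizer's pass registry.
    std::vector<std::string> passes = {"eliminate_identity",
                                       "fuse_bn_into_conv"};
    onnx::ModelProto optimized = onnx::optimization::Optimize(model, passes);

    // Write the optimized model back out.
    std::ofstream out("model.opt.onnx", std::ios::binary);
    optimized.SerializeToOstream(&out);
    return 0;
}
```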
Project mention: [D] Run Pytorch model inference on Microcontroller | /r/MachineLearning | 2023-11-14

DeepC: open-source version of DeepSea. Very little activity; looks abandoned.
Project mention: I ported Stable Diffusion onto Xbox Series X and S. | /r/StableDiffusion | 2023-06-10

Here are the details: Running Unpaint on the Xbox Series consoles · axodox/unpaint Wiki (github.com)
…or whatever you want, though you need to write the code yourself. https://github.com/AmusementClub/vs-mlrt
C++ Onnx related posts
- New exponent functions that make SiLU and SoftMax 2x faster, at full accuracy
- Show HN: OnnxStream running TinyLlama and Mistral 7B, with CUDA support
- OnnxStream running TinyLlama and Mistral 7B, with CUDA support
- Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
- ONNX runtime: Cross-platform accelerated machine learning
- Onnx Runtime: "Cross-Platform Accelerated Machine Learning"
- Running Stable Diffusion in 260MB of RAM
Index
What are some of the best open-source Onnx projects in C++? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | ncnn | 19,460 |
| 2 | onnxruntime | 13,030 |
| 3 | onnx-simplifier | 3,605 |
| 4 | onnx-tensorrt | 2,785 |
| 5 | FastDeploy | 2,771 |
| 6 | OnnxStream | 1,763 |
| 7 | hls4ml | 1,128 |
| 8 | Stable-Diffusion-NCNN | 944 |
| 9 | optimizer | 607 |
| 10 | deepC | 526 |
| 11 | unpaint | 260 |
| 12 | vs-mlrt | 242 |
| 13 | Onnx2Text | 15 |