onnxruntime vs cppflow

| | onnxruntime | cppflow |
|---|---|---|
| Mentions | 53 | 9 |
| Stars | 12,583 | 757 |
| Growth | 4.0% | - |
| Activity | 10.0 | 0.0 |
| Last commit | 4 days ago | 11 months ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onnxruntime
- AI Inference now available in Supabase Edge Functions
Embedding generation uses the ONNX runtime under the hood. This is a cross-platform inferencing library that supports multiple execution providers from CPU to specialized GPUs.
- Deep Learning in JavaScript
Judging by the commit history, tfjs is dead. The standard now is to convert PyTorch models to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser.
- FLaNK Stack 05 Feb 2024
- Vcc – The Vulkan Clang Compiler
- slang[2] has the potential, but its metaprogramming is not as strong as C++'s, and existing libraries cannot be used.
The above conclusion is drawn from my work on https://github.com/microsoft/onnxruntime/tree/dev/opencl; those drivers and JIT compilers are a pure nightmare to work with. Hopefully Vcc can take compute shaders more seriously.
- Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
I did. It depends on what you want: for an overview of how ONNX Runtime works, Microsoft has a bunch of material on https://onnxruntime.ai, but the Java content there is a bit lacking, as I've not had time to write much. Eventually I'll probably write something similar to the C# SD tutorial they have there, but for the Java API.
For writing ONNX models from Java we added an ONNX export system to Tribuo in 2022 which can be used by anything on the JVM to export ONNX models in an easier way than writing a protobuf directly. Tribuo doesn't have full coverage of the ONNX spec, but we're happy to accept PRs to expand it, otherwise it'll fill out as we need it.
- Mamba-Chat: A Chat LLM based on State Space Models
- VectorDB: Vector Database Built by Kagi Search
What about models besides GPT? Most of the popular vector-encoding models aren't using this architecture.
If you really don't want PyTorch/Transformers, you could consider exporting your models to ONNX (https://github.com/microsoft/onnxruntime).
- ONNX runtime: Cross-platform accelerated machine learning
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
cppflow
- Easily run TensorFlow models from C++
- [P] libtensorflow_cc: Pre-built TensorFlow C++ API
It’s been a while since I’ve looked at it, so I’m not sure how hard it would be to get working. I only commented because you mentioned that you would support other operating systems. For others interested in cross-platform support, there is also cppflow.
- Deep learning classification with C++
What about starting with Keras and converting the model to C++? https://github.com/pplonski/keras2cpp https://github.com/serizba/cppflow
- Using embedding model in C++ app
My solution so far: I am using a compiled TensorFlow C DLL in combination with cppflow (https://github.com/serizba/cppflow). However, I run into problems with models that use operations from the tensorflow_text Python module, since I don’t know how to get at their C++ API.
- What is the most used library for AI in C++?
I use cppflow to run compiled tensorflow models natively in C++. It works like a charm :)
- [Python] Importing a TensorFlow AI?
I toyed around with this idea a while back, but I never got around to finishing the implementation. If all you need is inference with no training, and you are relatively familiar with C++, you could look into creating a module for Godot that interfaces with the TensorFlow C API. Something like cppflow would provide an even easier API to work with; looking into that project could also show how it interfaces with the TensorFlow C API, if you'd rather cut out the middleman. A module like this would let you train your model in Python and then load it and perform inference in Godot natively.
- Simplest way to deploy Keras NN model into C++?
If you’re using Keras with TensorFlow, you can save it in the SavedModel format and then easily use cppflow to perform inference with it.
- I trained a Neural Network to understand my commands when playing my game
The whole game is written in C++ using SFML for the graphics, entt as the entity-component system, and TensorFlow for the neural network. TensorFlow exposes a C API, so I use cppflow to integrate it into my C++ framework.
- TF-agent with C/C++ environment
Found this which seems more recent (uses TF 2, updated 4 days ago): https://github.com/serizba/cppflow
What are some alternatives?
onnx - Open standard for machine learning interoperability
examples - TensorFlow examples
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
qt-tf-lite-example - Qt TensorFlow Lite example
onnx-simplifier - Simplify your onnx model
keras2cpp - This is a bunch of code to port Keras neural network model into pure C++.
ONNX-YOLOv7-Object-Detection - Python scripts performing object detection using the YOLOv7 model in ONNX.
ssd_keras - A Keras port of Single Shot MultiBox Detector
onnx-tensorflow - Tensorflow Backend for ONNX
emlearn - Machine Learning inference engine for Microcontrollers and Embedded devices
MLflow - Open source platform for the machine learning lifecycle
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.