Onnx Alternatives
Similar projects and alternatives to onnx
- txtai: 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
- stable-diffusion: Discontinued. This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot"-style interface, a web GUI, and several other features and enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI] (by lstein)
- stable-diffusion-webui: Discontinued. Stable Diffusion web UI. [Moved to: https://github.com/sd-webui/stable-diffusion-webui] (by hlky)
- wonnx: A WebGPU-accelerated ONNX inference runtime written 100% in Rust, ready for native and the web
- amazon-sagemaker-examples: Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
- stable-diffusion-webui: Discontinued. Stable Diffusion web UI. [Moved to: https://github.com/Sygil-Dev/sygil-webui] (by sd-webui)
onnx discussion
onnx reviews and mentions
- Using Google Magika to build an AI-powered file type detector
To perform fast inference at runtime, Magika uses the cross-platform Open Neural Network Exchange (ONNX) runtime. ONNX provides a method to optimize, accelerate, and deploy models built using any of the popular frameworks consistently, even across different hardware platforms or instruction set architectures.
- Nvidia and Salesforce double down on AI startup Cohere in $450M round
Right; but you can't cross-compile everything. This is really common in AI libraries, especially multi-target projects like ONNX: https://onnx.ai/
The math probably adds up in Google's favor with the TPUs, even if they end up being less efficient and slower per-unit than Nvidia hardware. They don't need to pay for the margins, and they can run them 24/7 for their intended purpose. The previous-generation TPUs can't be reused or resold for other purposes though, and if/when AI blows over as a trend you probably can't easily start mining crypto or doing HPC calculations like an Nvidia cluster would.
- HuggingFace hacked – Space secrets leak disclosure
> I had assumed model files were big matrices of numbers and some metadata perhaps
ONNX [1] is more or less this, but the challenge you immediately run into is models with custom layers/operators with their own inference logic - you either have to implement those operators in terms of the supported ops (not necessarily practical or viable) or provide the implementation of the operator to the runtime, putting you back at square one.
[1] https://onnx.ai/
- Onyx, a new programming language powered by WebAssembly
- From Lab to Live: Implementing Open-Source AI Models for Real-Time Unsupervised Anomaly Detection in Images
Once your model has been trained and validated using Anomalib, the next step is to prepare it for real-time implementation. This is where ONNX (Open Neural Network Exchange) or OpenVINO (Open Visual Inference and Neural network Optimization) comes into play.
- Object detection with ONNX, Pipeless and a YOLO model
ONNX is an open format from the Linux Foundation for representing machine learning models. It is widely adopted by the machine learning community and is compatible with most machine learning frameworks, such as PyTorch and TensorFlow. Converting a model from any of those frameworks to ONNX is straightforward and, in most cases, takes a single command.
- 38TB of data accidentally exposed by Microsoft AI researchers
ONNX[0], models-as-protobufs, continuing to gain adoption will hopefully solve this issue.
[0] https://github.com/onnx/onnx
- Reddit’s LLM text model for Ads Safety
Running inference for large models on CPU is not a new problem and fortunately there has been great development in many different optimization frameworks for speeding up matrix and tensor computations on CPU. We explored multiple optimization frameworks and methods to improve latency, namely TorchScript, BetterTransformer and ONNX.
- Operationalize TensorFlow Models With ML.NET
ONNX is a format for representing machine learning models in a portable way. Additionally, ONNX models can be easily optimized and thus become smaller and faster.
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
I would say onnx.ai [0] provides more information about ONNX for those who aren’t working with ML/DL.
[0] https://onnx.ai
Stats
onnx/onnx is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of onnx is Python.