ort VS Graphite

Compare ort vs Graphite and see how they differ.

                 ort                  Graphite
Mentions         7                    46
Stars            555                  6,831
Star growth      14.6%                22.1%
Activity         9.3                  9.6
Latest commit    9 days ago           7 days ago
Language         Rust                 Rust
License          Apache License 2.0   Apache License 2.0
Mentions is the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

ort

Posts with mentions or reviews of ort. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-16.
  • AI Inference now available in Supabase Edge Functions
    4 projects | dev.to | 16 Apr 2024
    To solve this, we built a native extension in Edge Runtime that enables using ONNX runtime via the Rust interface. This was made possible thanks to an excellent Rust wrapper called Ort:
  • AI Inference Now Available in Supabase Edge Functions
    1 project | news.ycombinator.com | 16 Apr 2024
    hey hn, supabase ceo here

    As the post points out, this comes in 2 parts:

    1. Embeddings models for RAG workloads (specifically pgvector). Available today.

    2. Large Language Models for GenAI workloads. This will be progressively rolled out as we get our hands on more GPUs.

    We've always had a focus on architectures that can run anywhere (especially important for local dev and self-hosting). In that light, we've found that the Ollama[0] tooling is really unbeatable. I heard one of our engineers explain it like "docker for models" which I think is apt.

    To support models that work best with GPUs, we're running them with Fly GPUs - pretty much this: https://fly.io/blog/scaling-llm-ollama (and then we stitch a native API around it). The plan is that you will be able to "BYO" model server and point the Edge Runtime towards it using simple env vars / config.

    We've also made improvements for CPU models. We built a native extension in Edge Runtime that enables using ONNX runtime via the Rust interface. This was made possible thanks to an excellent Rust wrapper, Ort[1]. We have the models stored on disk, so there is no downloading, cold-boot, etc.

    The thing I most like about this set up is that you can now use Edge Functions like background workers for your Postgres database, offloading heavy compute for generating embeddings. For example, you can trigger the worker when a user inserts some text, and then the worker will asynchronously create the embedding and store it back into your database.

    I'll be around if there are any questions.

    [0] ollama.com

    [1] Ort: https://github.com/pykeio/ort
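    The Rust-side integration described above can be sketched roughly as follows. This is an illustration only, not Supabase's actual extension code: it assumes the ort 2.x builder API (`Session::builder`, `commit_from_file`) with the `ndarray` feature enabled, and `model.onnx`, the input name `input`, and the input shape are all hypothetical placeholders.

    ```rust
    // Sketch: loading an ONNX model from disk and running inference via the
    // ort crate. The model sits on disk, so there is no download or cold boot,
    // matching the setup described in the comment above.
    use ort::{inputs, session::Session};

    fn main() -> ort::Result<()> {
        // Build a session from a local model file (placeholder path).
        let session = Session::builder()?
            .commit_from_file("model.onnx")?;

        // Feed a dummy input tensor; real code would pass tokenized text
        // under whatever input name the model actually declares.
        let input = ndarray::Array2::<f32>::zeros((1, 384));
        let outputs = session.run(inputs!["input" => input.view()]?)?;
        println!("{} output(s)", outputs.len());
        Ok(())
    }
    ```

    The session can be built once at startup and reused across requests, which is what keeps per-invocation latency low in a worker-style runtime.
    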

  • Moving from Typescript and Langchain to Rust and Loops
    9 projects | dev.to | 7 Sep 2023
    In the quest for more efficient solutions, the ONNX runtime emerged as a beacon of performance. The decision to transition from Typescript to Rust was an unconventional yet pivotal one. Driven by Rust's robust parallel processing capabilities using Rayon and seamless integration with ONNX through the ort crate, Repo-Query unlocked a realm of unparalleled efficiency. The result? A transformation from sluggish processing to, I have to say it, blazing-fast performance.
  • How to create YOLOv8-based object detection web service using Python, Julia, Node.js, JavaScript, Go and Rust
    19 projects | dev.to | 13 May 2023
    ort - ONNX runtime library.
  • Do you use Rust in your professional career?
    6 projects | /r/rust | 9 May 2023
Our main model in Rust is a deep neural network, using ONNX via the ort Rust bindings. The application is in process automation.
  • onnxruntime
    4 projects | /r/rust | 22 Feb 2023
You could try ort (https://github.com/pykeio/ort). It looks like it's in active development and supports GPU inference.
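    For the GPU-inference claim, a hedged sketch of what enabling it might look like, assuming ort 2.x's execution-provider API (`CUDAExecutionProvider`) and the corresponding `cuda` crate feature; `model.onnx` is a placeholder path, and the exact module paths should be checked against the crate docs for the version in use:

    ```rust
    // Sketch: registering the CUDA execution provider before building the
    // session. If CUDA is unavailable at runtime, ort falls back to CPU.
    use ort::{execution_providers::CUDAExecutionProvider, session::Session};

    fn main() -> ort::Result<()> {
        let session = Session::builder()?
            .with_execution_providers([CUDAExecutionProvider::default().build()])?
            .commit_from_file("model.onnx")?;
        let _ = session; // inference then proceeds as in the CPU case
        Ok(())
    }
    ```
    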
  • Deep Learning in Rust: Burn 0.4.0 released and plans for 2023
    6 projects | /r/rust | 2 Jan 2023
    I wouldn't try to distribute your ML models with the typical frameworks, especially not with Python. Have you looked into ONNX? For example: https://github.com/pykeio/ort

Graphite

Posts with mentions or reviews of Graphite. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-02.

What are some alternatives?

When comparing ort and Graphite you can also consider the following projects:

onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)

egui - egui: an easy-to-use immediate mode GUI in Rust that runs on both web and native

yolov8_onnx_go - YOLOv8 Inference using Go

Method-Draw - Method Draw, the SVG Editor for Method of Action

onnxruntime-php - Run ONNX models in PHP

GimelStudio - Non-destructive, node based 2D image editor with an API for custom nodes

yolov8_onnx_javascript - YOLOv8 inference using Javascript

Gimel-Studio - Old repo of the node-based image editor. See https://github.com/GimelStudio/GimelStudio for the next generation of Gimel Studio :rocket:

langchainjs - πŸ¦œπŸ”— Build context-aware reasoning applications πŸ¦œπŸ”—

bevy - A refreshingly simple data-driven game engine built in Rust

yolov8_onnx_julia - YOLOv8 inference using Julia

burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals. [Moved to: https://github.com/Tracel-AI/burn]