AI Inference now available in Supabase Edge Functions

This page summarizes the projects mentioned and recommended in the original post on dev.to

  1. ort

    Fast ML inference & training for ONNX models in Rust (by pykeio)

    To solve this, we built a native extension in Edge Runtime that enables using ONNX Runtime via its Rust interface. This was made possible thanks to an excellent Rust wrapper called ort.

  2. CodeRabbit

    CodeRabbit: AI Code Reviews for Developers. Revolutionize your code reviews with AI. CodeRabbit offers PR summaries, code walkthroughs, 1-click suggestions, and AST-based analysis. Boost productivity and code quality across all major languages with each PR.

  3. onnxruntime

    ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

    Embedding generation uses ONNX Runtime under the hood. This is a cross-platform inference library that supports multiple execution providers, from CPU to specialized GPUs.

  4. supabase

    The open source Firebase alternative. Supabase gives you a dedicated Postgres database to build your web, mobile, and AI applications.

    Semantic search demo

  5. ollama

    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.

    Large language models are challenging to run directly via ONNX Runtime on CPU. For these, we use a GPU-accelerated Ollama server under the hood.
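Taken together, the entries above describe two inference paths: embedding models run in the Edge Runtime through ONNX Runtime (via the ort Rust bindings), while LLM requests are proxied to an Ollama server. A minimal TypeScript sketch of the ideas involved — the helper names (`meanPool`, `l2Normalize`, `buildOllamaRequest`) are illustrative, not part of the Supabase or Ollama APIs:

```typescript
// Mean pooling: collapse per-token embeddings produced by the ONNX model
// into a single sentence vector by averaging each dimension.
function meanPool(tokenEmbeddings: number[][]): number[] {
  const dims = tokenEmbeddings[0].length;
  const pooled = new Array(dims).fill(0);
  for (const token of tokenEmbeddings) {
    for (let i = 0; i < dims; i++) pooled[i] += token[i];
  }
  return pooled.map((x) => x / tokenEmbeddings.length);
}

// L2 normalization: scale the vector to unit length so that cosine
// similarity between two embeddings reduces to a dot product.
function l2Normalize(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return v.map((x) => x / norm);
}

// For LLMs, the runtime talks to an Ollama server instead; a request to
// Ollama's /api/generate endpoint carries a JSON body of this shape.
function buildOllamaRequest(model: string, prompt: string): string {
  return JSON.stringify({ model, prompt, stream: false });
}
```

Mean pooling and normalization are the standard post-processing steps for sentence-embedding models such as gte-small; exposing them as explicit options lets callers skip them when a model already emits a pooled vector.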

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Day 49: Serving LLMs with ONNX Runtime

    1 project | dev.to | 11 Dec 2024
  • Show HN: Krixik – Easily sequence small/specialized AI models (pip installable)

    1 project | news.ycombinator.com | 4 Nov 2024
  • [Python] How do we lazyload a Python module? - analyzing LazyLoader from MLflow

    3 projects | dev.to | 5 Oct 2024
  • Running Phi-3-vision via ONNX on Jetson Platform

    2 projects | dev.to | 19 Jul 2024
  • Show HN: Terge – an easy-to-use library for merging AI models

    1 project | news.ycombinator.com | 18 Jun 2024