Nitro: A fast, lightweight 3MB inference server with OpenAI-Compatible API

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • llama.cpp

    LLM inference in C/C++

  • Look... I appreciate a cool project, but this is probably not a good idea.

    > Built on top of the cutting-edge inference library llama.cpp, modified to be production ready.

    It's not. It's literally just llama.cpp -> https://github.com/janhq/nitro/blob/main/.gitmodules

    Llama.cpp makes no pretense of being a robust, safe, network-ready library; it's a high-performance library.

    You've made no changes to llama.cpp here; you're just calling the llama.cpp API directly from your Drogon app (roughly the pattern sketched at the end of this comment).

    Hm.

    ...

    Look... that's interesting, but honestly: I know there's a wave of "C++ is back!" enthusiasm going on, but building network applications in C++ is very tricky to do right. While this is cool, I'm not sure 'llama.cpp is in C++ because it needs to be fast' is a good reason to go 'so let's build a network server in C++ too!'.

    I mean, I guess you could argue that since llama.cpp is a C++ application, it's fair for them to offer their own server example with an OpenAI-compatible API (which you can read about here: https://github.com/ggerganov/llama.cpp/issues/4216, https://github.com/ggerganov/llama.cpp/blob/master/examples/...).

    ...but a production ready application?

    I wrote a Rust binding to llama.cpp, and my conclusion was that llama.cpp is pretty bleeding-edge software. Bluntly, you should process-isolate it from anything you really care about if you want to avoid undefined behavior after long-running inference sequences, because it updates very often and often breaks, and those breaks are usually UB. It does not have a 'stable' version.

    Furthermore, when you run large models and run out of memory, C++ applications are notoriously unreliable in their 'handle OOM' behaviour.

    Soo.... I know there's something fun here, but really... unless you had a really, really compelling reason to need to write your server software in C++ (and I see no compelling reason here), I'm curious why you would?

    It seems enormously risky.

    The quality of this code is 'fun', not 'production ready'.
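
    For concreteness, here is a rough sketch (not Nitro's actual code) of the in-process coupling described above: a Drogon handler calling the llama.cpp C API directly. Function names follow the llama.cpp C header (llama_load_model_from_file, llama_new_context_with_model, ...), which shift between releases; the endpoint path, port, and model path are placeholders.

      #include <drogon/drogon.h>
      #include "llama.h"

      int main() {
          // Load the model once, in-process. This is the coupling being criticised:
          // the HTTP server and the inference engine share one address space, so any
          // UB or OOM inside llama.cpp takes the whole server down with it.
          llama_backend_init();
          llama_model_params mparams = llama_model_default_params();
          llama_model *model = llama_load_model_from_file("model.gguf", mparams);

          drogon::app().registerHandler(
              "/v1/completions",
              [model](const drogon::HttpRequestPtr &req,
                      std::function<void(const drogon::HttpResponsePtr &)> &&callback) {
                  llama_context_params cparams = llama_context_default_params();
                  llama_context *ctx = llama_new_context_with_model(model, cparams);

                  // ... tokenize req->getBody(), run llama_decode() in a loop, sample ...
                  // (elided; the point is that inference runs inside the same process
                  //  that terminates connections and parses untrusted request bodies)

                  auto resp = drogon::HttpResponse::newHttpResponse();
                  resp->setBody("{\"choices\": []}");  // placeholder OpenAI-style body
                  callback(resp);
                  llama_free(ctx);
              },
              {drogon::Post});

          drogon::app().addListener("0.0.0.0", 8080).run();
          llama_free_model(model);
          llama_backend_free();
      }

    And a minimal sketch of the process isolation suggested above: run llama.cpp's bundled HTTP server example as a supervised child process, so long-running inference UB or an OOM kill costs you one worker rather than the whole application. The binary name and flags are illustrative (the example server has been renamed across llama.cpp releases); a real supervisor would also back off, cap restarts, and health-check the worker before routing traffic to it again.

      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>
      #include <cstdio>

      // Spawn the llama.cpp server binary in its own process, on its own port.
      static pid_t spawn_llama_worker() {
          pid_t pid = fork();
          if (pid == 0) {
              // Child: exec the worker. If it hits UB or gets OOM-killed, only this
              // process dies; the parent observes the exit status and restarts it.
              execlp("./llama-server", "llama-server",
                     "-m", "model.gguf",
                     "--port", "8081",
                     (char *) nullptr);
              _exit(127);  // exec failed
          }
          return pid;
      }

      int main() {
          for (;;) {
              pid_t pid = spawn_llama_worker();
              int status = 0;
              waitpid(pid, &status, 0);  // block until the worker exits or is killed
              std::fprintf(stderr, "llama.cpp worker exited (status %d), restarting\n", status);
          }
      }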

  • nitro

    An inference server on top of llama.cpp. OpenAI-compatible API, queue, & scaling. Embed a prod-ready, local inference engine in your apps. Powers Jan (by janhq)

  • ollama

    Get up and running with Llama 3, Mistral, Gemma, and other large language models.

  • I recommend using https://ollama.ai/ if you don't want OpenAI compatibility.

  • nitro

    Next Generation Server Toolkit. Create web servers with everything you need and deploy them wherever you prefer.

  • Not to be confused with https://nitro.unjs.io, the server tech behind Nuxt and SolidStart.

  • omnitool

    Official Omnitool repository

  • Or you could use something like omnitool (https://github.com/omnitool-ai/omnitool) and interface with both cloud and local AI, not limited to LLMs.

  • metal-cpp

    Metal-cpp is a low-overhead C++ interface for Metal that helps developers add Metal functionality to graphics apps, games, and game engines that are written in C++.

  • My understanding is that the proliferation of “XYZ-cpp” AI frameworks is due to the C++ support in Apple’s GPU library, Metal, and the popularity of Apple silicon for inference (and there are a few technical reasons for this): https://developer.apple.com/metal/cpp/
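
    As a rough illustration of that C++ support (a minimal sketch, not tied to any of the projects above): metal-cpp lets you talk to the GPU straight from C++ with no Objective-C glue. Compile on macOS with -framework Foundation -framework Metal.

      // Exactly one translation unit must provide the implementations:
      #define NS_PRIVATE_IMPLEMENTATION
      #define MTL_PRIVATE_IMPLEMENTATION
      #include <Foundation/Foundation.hpp>
      #include <Metal/Metal.hpp>

      #include <cstdio>

      int main() {
          MTL::Device *device = MTL::CreateSystemDefaultDevice();
          if (!device) {
              std::puts("No Metal device available");
              return 1;
          }
          std::printf("GPU: %s\n", device->name()->utf8String());

          // Command queues are where compute/render work gets submitted.
          MTL::CommandQueue *queue = device->newCommandQueue();
          queue->release();
          device->release();
          return 0;
      }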
