cortex VS nnl

Compare cortex vs nnl and see what their differences are.

cortex

Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan (by janhq)

nnl

A low-latency, high-performance inference engine for large models on low-memory GPU platforms. (by fengwang)
               cortex                                   nnl
Mentions       8                                        1
Stars          1,661                                    4
Growth         3.7%                                     -
Activity       9.8                                      5.4
Latest Commit  6 days ago                               6 months ago
Language       C++                                      C++
License        GNU Affero General Public License v3.0   -
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cortex

Posts with mentions or reviews of cortex. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-05.
  • Introducing Jan
    4 projects | dev.to | 5 May 2024
    Jan incorporates a lightweight, built-in inference server called Nitro. Nitro supports both llama.cpp and NVIDIA's TensorRT-LLM engines. This means many open LLMs in the GGUF format are supported. Jan's Model Hub is designed for easy installation of pre-configured models, but it also allows you to install virtually any model from Hugging Face or even your own. (A minimal client sketch for this kind of OpenAI-compatible server appears after this list.)
  • Ollama Python and JavaScript Libraries
    17 projects | news.ycombinator.com | 24 Jan 2024
    I'd like to see a comparison to nitro https://github.com/janhq/nitro which has been fantastic for running a local LLM.
  • FLaNK Weekly 08 Jan 2024
    41 projects | dev.to | 8 Jan 2024
  • Nitro: A fast, lightweight 3MB inference server with OpenAI-Compatible API
    9 projects | news.ycombinator.com | 5 Jan 2024
    Look... I appreciate a cool project, but this is probably not a good idea.

    > Built on top of the cutting-edge inference library llama.cpp, modified to be production ready.

    It's not. It's literally just llama.cpp -> https://github.com/janhq/nitro/blob/main/.gitmodules

    Llama.cpp makes no pretense of being a robust, safe, network-ready library; it's a high-performance library.

    You've made no changes to llama.cpp here; you're just calling the llama.cpp API directly from your drogon app.

    Hm.

    ...

    Look... that's interesting, but, honestly, I know there's this wave of "C++ is back!" stuff going on, but building network applications in C++ is very tricky to do right, and while this is cool, I'm not sure 'llama.cpp is in C++ because it needs to be fast' is a good reason to go 'so let's build a network server in C++ too!'.

    I mean, I guess you could argue that since llama.cpp is a C++ application, it's fair for them to offer their own server example with an openai compatible API (which you can read about here: https://github.com/ggerganov/llama.cpp/issues/4216, https://github.com/ggerganov/llama.cpp/blob/master/examples/...).

    ...but a production ready application?

    I wrote a Rust binding to llama.cpp and my conclusion was that llama.cpp is pretty bleeding-edge software, and bluntly, you should process-isolate it from anything you really care about if you want to avoid undefined behavior after long-running inference sequences, because it updates very often, and often breaks. Those breaks are usually UB. It does not have a 'stable' version. (A sketch of this isolation approach appears after this list.)

    Furthermore, when you run large models and run out of memory, C++ applications are notoriously unreliable in their 'handle OOM' behaviour.

    Soo.... I know there's something fun here, but really... unless you had a really, really compelling reason to need to write your server software in C++ (and I see no compelling reason here), I'm curious why you would?

    It seems enormously risky.

    The quality of this code is 'fun', not 'production ready'.

  • Apple Silicon Llama 7B running in docker?
    5 projects | /r/LocalLLaMA | 7 Dec 2023
  • Is there any LLM that can be installed without Python
    2 projects | /r/LocalLLaMA | 5 Dec 2023
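
The Jan post above mentions that Nitro exposes an OpenAI-compatible API. As a minimal, hedged sketch, a local client in Python could look like the following; the port (3928 was Nitro's documented default) and the GGUF model name are assumptions to adapt to your setup, not values confirmed by this page:

```python
# Sketch only: port and model name are assumptions -- adjust to your server.
import requests

BASE_URL = "http://localhost:3928"  # assumed Nitro default port

payload = {
    # Hypothetical GGUF model name; use whatever your server has loaded.
    "model": "llama-2-7b-chat.Q4_K_M.gguf",
    "messages": [
        {"role": "user", "content": "Summarize llama.cpp in one sentence."}
    ],
    "temperature": 0.7,
}

# Because the endpoint mirrors OpenAI's /v1/chat/completions, existing
# OpenAI client code can usually be re-pointed at the local server.
resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```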
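
The longer review above recommends process-isolating llama.cpp from anything you really care about. A minimal sketch of that pattern, in Python rather than C++, with run_llama_inference as a hypothetical stand-in for a real binding call:

```python
# Sketch only, not Nitro's or Jan's code: run the native inference call in a
# worker process so a crash in the C++ layer cannot take down (or leave
# undefined state in) the parent process.
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool
from typing import Optional

def run_llama_inference(prompt: str) -> str:
    # Placeholder: call your real llama.cpp binding here. If the native code
    # segfaults or aborts on OOM, only this worker process dies.
    raise NotImplementedError("wire up a llama.cpp binding here")

def safe_generate(prompt: str, retries: int = 1) -> Optional[str]:
    for _ in range(retries + 1):
        # A fresh single-worker pool per attempt keeps isolation strict:
        # no state survives from a previous (possibly corrupted) worker.
        with ProcessPoolExecutor(max_workers=1) as pool:
            try:
                return pool.submit(run_llama_inference, prompt).result(timeout=120)
            except BrokenProcessPool:
                continue  # worker died hard; retry with a clean process
    return None

if __name__ == "__main__":
    print(safe_generate("Hello, world"))
```

The cost of this design is one process spawn and one host-to-device model load per worker lifetime; the benefit is that undefined behavior in long-running native inference is contained to a disposable child.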

nnl

Posts with mentions or reviews of nnl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-03.
  • Run 70B LLM Inference on a Single 4GB GPU with This New Technique
    3 projects | news.ycombinator.com | 3 Dec 2023
    I did roughly the same thing in one of my hobby projects, https://github.com/fengwang/nnl. But instead of using an SSD, I load all the weights into host memory, and while running inference through the model layer by layer, I asynchronously copy memory from global to shared memory in the hope of better performance. However, my approach is bounded by PCI-E bandwidth.
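
For illustration, the overlap the author describes can be sketched in PyTorch (nnl itself is C++/CUDA, so this shows the idea, not its code): weights stay in pinned host memory, and while layer k computes on the default stream, layer k+1's weights are prefetched on a side stream.

```python
# Sketch of layer-by-layer weight streaming with async host->device copies.
# As the author notes, throughput is still capped by PCI-E bandwidth.
import torch

device = torch.device("cuda")
copy_stream = torch.cuda.Stream()

# Toy stand-in for per-layer weights, resident in pinned host memory.
host_weights = [torch.randn(4096, 4096).pin_memory() for _ in range(8)]

def prefetch(w_host: torch.Tensor) -> torch.Tensor:
    # non_blocking=True only overlaps with compute if the source is pinned.
    with torch.cuda.stream(copy_stream):
        return w_host.to(device, non_blocking=True)

x = torch.randn(1, 4096, device=device)
w_dev = prefetch(host_weights[0])
for k in range(len(host_weights)):
    # Block the compute stream until this layer's weights have arrived.
    torch.cuda.current_stream().wait_stream(copy_stream)
    w_dev.record_stream(torch.cuda.current_stream())
    # Kick off the next layer's copy so it overlaps with this layer's compute.
    w_next = prefetch(host_weights[k + 1]) if k + 1 < len(host_weights) else None
    x = x @ w_dev.T  # stand-in for the real per-layer computation
    w_dev = w_next
print(x.shape)
```

With full overlap, per-layer latency approaches max(compute time, copy time), which is why the copy side, i.e. PCI-E bandwidth, becomes the bound for large weights.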

What are some alternatives?

When comparing cortex and nnl you can also consider the following projects:

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

bionic-gpt - BionicGPT is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality

csvlens - Command line csv viewer

Tribuo - A Java machine learning library

hyperfine - A command-line benchmarking tool

java - Java bindings for TensorFlow

nitro - Next Generation Server Toolkit. Create web servers with everything you need and deploy them wherever you prefer.

steampipe - Zero-ETL, infinite possibilities. Live query APIs, code & more with SQL. No DB required.

llama-chat - Simple chat program for LLaMa models

pocketbase - Open Source realtime backend in 1 file

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.