cometocoruna VS cortex

Compare cometocoruna vs cortex and see what their differences are.

cortex

Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan (by janhq)
             cometocoruna    cortex
Mentions     1               8
Stars        0               1,635
Growth       -               12.8%
Activity     8.2             9.8
Last Commit  4 months ago    6 days ago
Language     JavaScript      C++
License      -               GNU Affero General Public License v3.0
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cometocoruna

Posts with mentions or reviews of cometocoruna. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-24.
  • Ollama Python and JavaScript Libraries
    17 projects | news.ycombinator.com | 24 Jan 2024
    Not really. You can use small models for tasks like text classification (traditional NLP), and those run on pretty much anything. We're talking about BERT-like models, DistilBERT for example.
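
    For context, here is a minimal sketch of the kind of small-model classification described above, using the Hugging Face transformers pipeline. The specific DistilBERT checkpoint is an illustrative public example, not one named in the post:

```python
# Small-model text classification on CPU, as described in the comment.
# distilbert-base-uncased-finetuned-sst-2-english is a small public
# sentiment-classification checkpoint that runs on modest hardware.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("This tiny model runs fine on an old laptop."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```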

    Now, models that have "reasoning" as an emergent property... I haven't seen anything under 3B that's capable of making anything useful. The smallest I've seen is LiteLlama, and while it's not 100% useless, it's really just an experiment.

    Also, everything requires new and/or expensive hardware. For GPU inference you're really looking at a minimum of about €1k for something decent for running models. CPU inference is way slower, and forget about anything without AVX, and preferably AVX2.

    I try models on my old ThinkPad X260 with 8 GB of RAM, which is perfectly capable for developing stuff and for the small task-oriented models I mentioned. But even though I've tried everything under the sun, quantization included, it's safe to say that running decent LLMs at decent inference speed requires expensive hardware now.

    Now, if you want tasks like language detection, classifying text into categories, or very basic question answering, then go on Hugging Face and try yourself; you'll be able to run most such models on modest hardware.

    In fact, I have a website (https://github.com/iagovar/cometocoruna/tree/main) where I'm using a small Flask server in my data pipeline to extract event information from text blobs I get from scraping sites.
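
    A minimal sketch of the kind of Flask wrapper described here: a tiny HTTP endpoint that runs a small extractive question-answering model over scraped text blobs. The route name and model choice are illustrative assumptions, not details taken from the cometocoruna repository:

```python
# Hypothetical sketch of a small Flask extraction service; the /extract
# route and the QA checkpoint are assumptions for illustration only.
from flask import Flask, request, jsonify
from transformers import pipeline

app = Flask(__name__)

# A small extractive-QA model that runs on CPU with modest memory.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

@app.route("/extract", methods=["POST"])
def extract():
    payload = request.get_json()
    # Ask a targeted question against the scraped text blob.
    answer = qa(question=payload["question"], context=payload["text"])
    return jsonify(answer)

if __name__ == "__main__":
    app.run(port=5000)
```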

    Experts in the field say that might change (somewhat) with Mamba models, but I can't really say more.

    I've been playing with the idea of spending some money. But I'm 36, unemployed, and only got into coding about 1.5 years ago, so until I secure some income I don't want to hit my savings hard; this is not the US, where landing a job is easy. (Junior looking for a job, just in case someone here needs one.)

cortex

Posts with mentions or reviews of cortex. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-05.
  • Introducing Jan
    4 projects | dev.to | 5 May 2024
    Jan incorporates a lightweight, built-in inference server called Nitro. Nitro supports both llama.cpp and NVIDIA's TensorRT-LLM engines. This means many open LLMs in the GGUF format are supported. Jan's Model Hub is designed for easy installation of pre-configured models but it also allows you to install virtually any model from Hugging Face or even your own.
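
    A sketch of what "OpenAI-compatible" means in practice: you point a standard OpenAI client at the local server instead of api.openai.com. The port and model name below are placeholder assumptions; check Nitro/Jan's documentation for the actual defaults:

```python
# Calling a local OpenAI-compatible inference server with the standard
# OpenAI Python client. Endpoint port and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3928/v1",  # assumed local Nitro endpoint
    api_key="not-needed-locally",          # local servers typically ignore this
)

response = client.chat.completions.create(
    model="local-model",  # placeholder for the loaded GGUF model's name
    messages=[{"role": "user", "content": "Hello from a local LLM!"}],
)
print(response.choices[0].message.content)
```
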
  • Ollama Python and JavaScript Libraries
    17 projects | news.ycombinator.com | 24 Jan 2024
    I'd like to see a comparison to nitro https://github.com/janhq/nitro which has been fantastic for running a local LLM.
  • FLaNK Weekly 08 Jan 2024
    41 projects | dev.to | 8 Jan 2024
  • Nitro: A fast, lightweight 3MB inference server with OpenAI-Compatible API
    9 projects | news.ycombinator.com | 5 Jan 2024
    Look... I appreciate a cool project, but this is probably not a good idea.

    > Built on top of the cutting-edge inference library llama.cpp, modified to be production ready.

    It's not. It's literally just llama.cpp -> https://github.com/janhq/nitro/blob/main/.gitmodules

    Llama.cpp makes no pretense of being a robust, safe, network-ready library; it's a high-performance library.

    You've made no changes to llama.cpp here; you're just calling the llama.cpp API directly from your Drogon app.

    Hm.

    ...

    Look... that's interesting, but honestly, I know there's this wave of "C++ is back!" going on, but building network applications in C++ is very tricky to do right. While this is cool, I'm not sure "llama.cpp is in C++ because it needs to be fast" is a good reason to go "so let's build a network server in C++ too!"

    I mean, I guess you could argue that since llama.cpp is a C++ application, it's fair for them to offer their own server example with an OpenAI-compatible API (which you can read about here: https://github.com/ggerganov/llama.cpp/issues/4216, https://github.com/ggerganov/llama.cpp/blob/master/examples/...).

    ...but a production ready application?

    I wrote a Rust binding to llama.cpp, and my conclusion was that llama.cpp is pretty bleeding-edge software. Bluntly, you should process-isolate it from anything you really care about if you want to avoid undefined behavior after long-running inference sequences, because it updates very often and often breaks. Those breaks are usually UB. It does not have a 'stable' version.
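
    One way to apply the process isolation the commenter recommends: run the llama.cpp-backed worker as a child process, so a crash or memory corruption in the native library can't take down the parent. The worker script and its JSON protocol below are hypothetical, for illustration only:

```python
# Hypothetical process-isolation wrapper: "worker.py" (which would host
# the llama.cpp bindings) and its stdin/stdout JSON protocol are assumed.
import json
import subprocess

def run_inference(prompt: str, timeout: int = 120) -> dict:
    proc = subprocess.run(
        ["python", "worker.py"],            # child process hosts the bindings
        input=json.dumps({"prompt": prompt}),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        # The child crashed (segfault, OOM kill, ...); the parent survives
        # and can retry or surface a clean error instead of corrupting state.
        raise RuntimeError(f"inference worker failed: {proc.stderr[:200]}")
    return json.loads(proc.stdout)
```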

    Furthermore, when you run large models and run out of memory, C++ applications are notoriously unreliable in their OOM-handling behavior.

    Soo... I know there's something fun here, but really... unless you had a really, really compelling reason to write your server software in C++ (and I see no compelling reason here), I'm curious why you would.

    It seems enormously risky.

    The quality of this code is 'fun', not 'production ready'.

  • Apple Silicon Llama 7B running in docker?
    5 projects | /r/LocalLLaMA | 7 Dec 2023
  • Is there any LLM that can be installed with out python
    2 projects | /r/LocalLLaMA | 5 Dec 2023

What are some alternatives?

When comparing cometocoruna and cortex you can also consider the following projects:

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

bionic-gpt - BionicGPT is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality

csvlens - Command line csv viewer

nnl - a low-latency and high-performance inference engine for large models on low-memory GPU platforms.

Tribuo - A Java machine learning library

hyperfine - A command-line benchmarking tool

java - Java bindings for TensorFlow

nitro - Next Generation Server Toolkit. Create web servers with everything you need and deploy them wherever you prefer.

steampipe - Zero-ETL, infinite possibilities. Live query APIs, code & more with SQL. No DB required.

llama-chat - Simple chat program for LLaMa models

pocketbase - Open Source realtime backend in 1 file

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.