danswer VS llama.cpp

Compare danswer vs llama.cpp and see what their differences are.

danswer

Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge. (by danswer-ai)

llama.cpp

LLM inference in C/C++ (by ggerganov)
                 danswer       llama.cpp
Mentions         28            795
Stars            9,619         60,282
Stars growth     5.0%          -
Activity         9.9           10.0
Last commit      3 days ago    about 19 hours ago
Language         Python        C++
License          MIT License   MIT License
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

danswer

Posts with mentions or reviews of danswer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-06-15.
  • Ollama v0.1.45
    7 projects | news.ycombinator.com | 15 Jun 2024
    Sorry it's taking so long to review and for the radio silence on the PR.

    We have been trying to figure out how to support more structured output formats without some of the side effects of grammars. With JSON mode (which uses grammars under the hood) there were originally quite a few issue reports, mainly around lower performance and cases where the model would generate whitespace indefinitely, causing requests to hang. This is an issue with OpenAI's JSON mode as well, which requires the caller to "instruct the model to produce JSON" [1]. While it's possible to handle edge cases for a single grammar such as JSON (i.e. check for 'JSON' in the prompt), it's hard to generalize this to any format.

    Supporting more structured output formats is definitely important. Fine-tuning for output formats is promising, and this thread [2] also has some great ideas and links.

    [1] https://platform.openai.com/docs/guides/text-generation/json...

    [2] https://github.com/ggerganov/llama.cpp/issues/4218
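
    Grammar-constrained decoding is what JSON mode builds on: at each sampling step, tokens that would violate the grammar are masked out. As a minimal sketch (assuming a mid-2024 build where the CLI binary is main; the model path is a placeholder), the repo's bundled grammars/json.gbnf can be applied like this:

    # Sketch: constrain output to valid JSON via the bundled GBNF grammar.
    # Model path is a placeholder; --grammar-file loads a grammar into main.
    ./main -m ./models/model.gguf \
      --grammar-file grammars/json.gbnf \
      -p "Return the user record as a JSON object: "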

  • Apple Intelligence, the personal intelligence system
    4 projects | news.ycombinator.com | 10 Jun 2024
    > Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device.

    On the contrary, I'm shocked over the last few months how "on device" on a Macbook Pro or Mac Studio competes plausibly with last year's early GPT-4, leveraging Llama 3 70b or Qwen2 72b.

    There are surprisingly few things you "need" 128GB of so-called "unified RAM" for, but with M-series processors and the memory bandwidth, this is a use case that shines.

    From this thread covering performance of llama.cpp on Apple Silicon M-series …

    https://github.com/ggerganov/llama.cpp/discussions/4167

    "Buy as much memory as you can afford would be my bottom line!"

  • Partial Outage on Claude.ai
    1 project | news.ycombinator.com | 4 Jun 2024
    I'd love to use local models, but seems like most of the easy to use software out there (LM Studio, Backyard AI, koboldcpp) doesn't really play all that nicely with my Intel Arc GPU and it's painfully slow on my Ryzen 5 4500. Even my M1 MacBook isn't that fast at generating text with even 7B models.

    I wonder if llama.cpp with SYCL could help, will have to try it out: https://github.com/ggerganov/llama.cpp/blob/master/README-sy...

    But even if that worked, I'd still have the problem that IDEs and whatever else I have open already eats most of the 32 GB of RAM my desktop PC has. Whereas if I ran a small code model on the MacBook and connected to it through my PC, it'd still probably be too slow for autocomplete, when compared to GitHub Copilot and less accurate than ChatGPT or Phind for most stuff.
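
    For anyone in the same situation, the SYCL backend is built from source against Intel's oneAPI toolchain. A rough sketch of the steps as the linked README described them around this time (the LLAMA_SYCL flag and the icx/icpx compilers are assumptions tied to that era; the flag names have since changed):

    # Sketch: building llama.cpp with the SYCL backend for Intel Arc GPUs.
    source /opt/intel/oneapi/setvars.sh   # assumes the oneAPI Base Toolkit is installed
    cmake -B build -DLLAMA_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
    cmake --build build --config Release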

  • Why YC Went to DC
    3 projects | news.ycombinator.com | 3 Jun 2024
    You're correct if you're focused exclusively on the work surrounding building foundation models to begin with. But if you take a broader view, having open models that we can legally fine-tune and hack with locally has created a large and ever-growing community of builders and innovators that could not exist without these open models. Just take a look at projects like InvokeAI [0] in the image space or especially llama.cpp [1] in the text generation space. These projects are large, have lots of contributors, move very fast, and drive a lot of innovation and collaboration in applying AI to various domains in a way that simply wouldn't be possible without the open models.

    [0] https://github.com/invoke-ai/InvokeAI

    [1] https://github.com/ggerganov/llama.cpp

  • Show HN: Open-Source Load Balancer for Llama.cpp
    6 projects | news.ycombinator.com | 1 Jun 2024
  • RAG with llama.cpp and external API services
    2 projects | dev.to | 31 May 2024
    The first example will build an embeddings database backed by llama.cpp vectorization.
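    One way to wire that up, as a sketch: run llama.cpp's bundled HTTP server with embeddings enabled and query its /embedding endpoint (the model filename and port are placeholders):

    # Sketch: serving embeddings from llama.cpp and querying them over HTTP.
    ./server -m ./models/embedding-model.Q8_0.gguf --embedding --port 8080 &
    curl http://localhost:8080/embedding \
      -H "Content-Type: application/json" \
      -d '{"content": "The quick brown fox"}'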
  • Ask HN: I have many PDFs – what is the best local way to leverage AI for search?
    10 projects | news.ycombinator.com | 30 May 2024
    and at some point (https://github.com/ggerganov/llama.cpp/issues/7444)
  • Deploying llama.cpp on AWS (with Troubleshooting)
    1 project | dev.to | 28 May 2024
    # Clone the repo and build with the CUDA backend enabled
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    LLAMA_CUDA=1 make -j
  • Devoxx Genie Plugin : an Update
    6 projects | dev.to | 28 May 2024
    I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because Hugging Face endorses this provider out of the box.
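    As an illustration of that REST interface, a one-shot completion against Ollama's generate endpoint looks like the sketch below (the model name is whatever you have pulled locally):

    # Sketch: one-shot completion against Ollama's REST API.
    curl http://localhost:11434/api/generate \
      -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'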
  • Mistral Fine-Tune
    2 projects | news.ycombinator.com | 25 May 2024
    The output of the LLM is not just one token, but a statistical distribution across all possible output tokens. The tool you use to generate output will sample from this distribution with various techniques, and you can put constraints on it like not being too repetitive. Some of them support getting very specific about the allowed output format, e.g. https://github.com/ggerganov/llama.cpp/blob/master/grammars/... So even if the LLM says that an invalid token is the most likely next token, the tool will never select it for output. It will only sample from valid tokens.
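    To make the constrained-sampling point concrete, here is a sketch of a tiny grammar in the GBNF format from that directory: with it loaded, the sampler can only ever emit "yes" or "no", no matter what the raw distribution prefers (the model path and grammar filename are placeholders):

    # Sketch: restrict sampling to two literal answers with a GBNF grammar.
    printf 'root ::= "yes" | "no"\n' > yesno.gbnf
    ./main -m ./models/model.gguf --grammar-file yesno.gbnf \
      -p "Is 7 a prime number? Answer yes or no: "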

What are some alternatives?

When comparing danswer and llama.cpp you can also consider the following projects:

private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.

gpt4all - gpt4all: run open-source LLMs anywhere

privateGPT - Interact with your documents using the power of GPT, 100% privately, no data leaks [Moved to: https://github.com/zylon-ai/private-gpt]

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

freemusicdemixer.com - free website for client-side music demixing with Demucs + WebAssembly

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

khoj - Your AI second brain. Get answers to your questions, whether they be online or in your own notes. Use online AI models (e.g. gpt4) or private, local LLMs (e.g. llama3). Self-host locally or use our cloud instance. Access from Obsidian, Emacs, Desktop app, Web, or WhatsApp.

ggml - Tensor library for machine learning

pekko-samples - Apache Pekko Sample Projects

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
