| | motorhead | llama.cpp |
|---|---|---|
| Mentions | 10 | 778 |
| Stars | 829 | 57,984 |
| Growth | 1.3% | - |
| Activity | 7.7 | 10.0 |
| Latest commit | 11 days ago | 3 days ago |
| Language | Rust | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
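The site does not publish its exact formula, but a recency-weighted sum over commits captures the idea. The Rust sketch below is an illustrative assumption only: the 30-day half-life and the raw scale are invented, not the real parameters.

```rust
// A minimal sketch of one plausible recency-weighted activity score.
// The half-life and scale here are illustrative assumptions.
fn activity_score(commit_ages_days: &[f64]) -> f64 {
    const HALF_LIFE_DAYS: f64 = 30.0; // assumed decay half-life
    commit_ages_days
        .iter()
        .map(|age| 0.5_f64.powf(age / HALF_LIFE_DAYS)) // newer commits weigh more
        .sum()
}

fn main() {
    // Two hypothetical commit histories: recently active vs. dormant.
    println!("{:.2}", activity_score(&[1.0, 2.0, 5.0, 9.0])); // ~3.64
    println!("{:.2}", activity_score(&[200.0, 400.0]));       // ~0.01
}
```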
motorhead
- Motorhead is a memory and information retrieval server for LLMs
- Comparison of Vector Databases
Metal [1] is another one on my radar. Their API looks super simple.
Disclosures: None
[1] https://getmetal.io
- Any Alternatives to Langchain?
Any alternatives? I found this Rust-based project that might be interesting: https://github.com/getmetal/motorhead
- RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
- Langchain question and answer without openai
You could run Motorhead in Docker: https://github.com/getmetal/motorhead
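Once the container is up, Motorhead speaks plain HTTP. A minimal sketch of writing and reading session memory, assuming the session-memory endpoint shape from the project README and the reqwest and serde_json crates; the port, session id, and message contents are illustrative:

```rust
// Cargo.toml (assumed): reqwest = { version = "0.12", features = ["blocking", "json"] }
//                       serde_json = "1"
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    // Hypothetical session id; Motorhead keys memory by session.
    let url = "http://localhost:8080/sessions/demo-session/memory";

    // Append a pair of chat messages to the session's memory.
    client
        .post(url)
        .json(&json!({ "messages": [
            { "role": "Human", "content": "What is Motorhead?" },
            { "role": "AI", "content": "A memory server for LLM applications." }
        ]}))
        .send()?
        .error_for_status()?;

    // Read the stored messages (and any server-side summary) back.
    let memory: serde_json::Value = client.get(url).send()?.json()?;
    println!("{memory:#}");
    Ok(())
}
```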
- How to use Enum with Vec to parse the mixed data vector from RedisSearch
The code was found by searching GitHub for FT.SEARCH, in https://github.com/getmetal/motorhead/blob/main/src/models.rs, and adapted.
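To illustrate the pattern from that thread: a serde untagged enum lets a single Vec hold the mixed value types that come back. A minimal sketch; the payload and variant names are hypothetical, not Motorhead's actual model:

```rust
// Cargo.toml (assumed): serde = { version = "1", features = ["derive"] }
//                       serde_json = "1"
use serde::Deserialize;

// One variant per value type expected in the mixed reply; serde's
// untagged representation tries each variant in order until one fits.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum SearchValue {
    Number(f64),
    Text(String),
}

fn main() {
    // Hypothetical mixed payload, like one assembled from an FT.SEARCH reply.
    let raw = r#"["redis-doc", 0.87, "vector-doc", 0.42]"#;
    let values: Vec<SearchValue> = serde_json::from_str(raw).unwrap();
    for v in &values {
        match v {
            SearchValue::Number(score) => println!("score: {score}"),
            SearchValue::Text(id) => println!("id: {id}"),
        }
    }
}
```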
- Memory in production
All the examples that Langchain gives are for persisting memory locally, which won't work in a serverless (stateless) environment, and the one solution documented for stateless applications, getmetal/motorhead, is a containerized, Rust-based service we would have to run ourselves.
- Show HN: Motörhead, LLM Memory Server Built in Rust
- OpenAI Embeddings API alternative?
I've only just signed up and haven't had a chance to build anything with it yet, but this might be something to consider: https://getmetal.io/
- Motörhead – memory and information retrieval server for LLMs
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
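Once that server is running with a GGUF model, it exposes an HTTP completion endpoint. A minimal Rust sketch, assuming the default port and the /completion request shape from the example server's docs; the prompt and token count are arbitrary:

```rust
// Cargo.toml (assumed): reqwest = { version = "0.12", features = ["blocking", "json"] }
//                       serde_json = "1"
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumes the llama.cpp example server is listening on its default port.
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:8080/completion")
        .json(&json!({ "prompt": "The capital of France is", "n_predict": 16 }))
        .send()?
        .error_for_status()?
        .json()?;

    // The generated text comes back in the `content` field.
    println!("{}", resp["content"]);
    Ok(())
}
```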
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial, we show you how to fine-tune a large language model using LoRA, with the help of tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
gpt4all - Run open-source LLMs anywhere
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
kor - Extract structured data from text using LLMs
GPTQ-for-LLaMa - 4-bit quantization of LLaMA using GPTQ
Abstract Feature Branch - abstract_feature_branch is a Ruby gem that provides a variation on the Branch by Abstraction Pattern by Paul Hammant and the Feature Toggles Pattern by Martin Fowler (aka Feature Flags) to enable Continuous Integration and Trunk-Based Development.
ggml - Tensor library for machine learning
rasa-haystack
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM