motorhead vs LocalAI
| | motorhead | LocalAI |
|---|---|---|
| Mentions | 10 | 82 |
| Stars | 822 | 19,593 |
| Growth | 2.6% | 12.9% |
| Activity | 8.0 | 9.9 |
| Latest commit | 9 days ago | 4 days ago |
| Language | Rust | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
motorhead
- Motorhead is a memory and information retrieval server for LLMs
- Comparison of Vector Databases
Metal [1] is another one on my radar. Their API looks super simple.
Disclosures: None
[1] https://getmetal.io
- Any Alternatives to Langchain?
Any alternatives? I found this Rust based project that might be interesting: https://github.com/getmetal/motorhead
- RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
- Langchain question and answer without openai
you could run motorhead on docker https://github.com/getmetal/motorhead
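As a sketch of what talking to a dockerized Motorhead looks like from Python: the port and the `/sessions/{id}/memory` endpoint paths below are assumptions based on Motorhead's README, not verified against the current API, so treat this as illustrative only.

```python
import json
import urllib.request

MOTORHEAD_URL = "http://localhost:8080"  # assumed default port for the docker container


def memory_url(session_id: str) -> str:
    # Motorhead keys memory by session id (assumed REST layout).
    return f"{MOTORHEAD_URL}/sessions/{session_id}/memory"


def build_memory_payload(role: str, content: str) -> bytes:
    # One message appended to the session's rolling memory window.
    return json.dumps({"messages": [{"role": role, "content": content}]}).encode()


def save_message(session_id: str, role: str, content: str) -> None:
    # Requires the Motorhead container to actually be running.
    req = urllib.request.Request(
        memory_url(session_id),
        data=build_memory_payload(role, content),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)


# save_message("chat-1", "Human", "My name is Ada.")
```

Because memory lives in the service (backed by Redis) rather than in the app process, this pattern works from serverless functions too.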
- How to use Enum with Vec to parse the mixed data vector from RedisSearch
The code was found by searching GitHub for FT.SEARCH inside https://github.com/getmetal/motorhead/blob/main/src/models.rs and adapted.
- Memory in production
All the examples Langchain gives persist memory locally, which won't work in a serverless (stateless) environment, and the one solution documented for stateless applications, getmetal/motorhead, is a containerized, Rust-based service we would have to run ourselves.
- Show HN: Motörhead, LLM Memory Server Built in Rust
- OpenAI Embeddings API alternative?
I've only just signed up and haven't had a chance to build anything with it yet, but this might be something to consider https://getmetal.io/
- Motörhead – memory and information retrieval server for LLMs
LocalAI
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
- What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
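Since LocalAI speaks the OpenAI wire format, swapping is mostly a matter of changing the base URL. A minimal stdlib-only sketch (the model name is a placeholder, and the default port 8080 is an assumption about your deployment):

```python
import json
import urllib.request

LOCALAI_BASE = "http://localhost:8080/v1"  # LocalAI's OpenAI-compatible endpoint


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    # Same JSON body the OpenAI /v1/chat/completions endpoint accepts.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{LOCALAI_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def chat(model: str, prompt: str) -> str:
    # Needs a running LocalAI instance with the named model loaded.
    with urllib.request.urlopen(build_chat_request(model, prompt)) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# print(chat("mistral-7b", "Hello!"))
```

Existing OpenAI client code can usually be pointed at LocalAI the same way, by overriding the client's base URL instead of hand-rolling requests.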
- "Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
- Local LLMs to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlas acceleration for your CPU. If you're running other things, you could limit the inferencing process to 2 or 3 threads. That should get it working; I've been able to inference even 13b models on cheap Rockchip SOCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
- Retrieval Augmented Generation in Go
Neither of these really requires OpenAI. You can do it with locally-running models via something like https://github.com/mudler/LocalAI
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
gpt4all - gpt4all: run open-source LLMs anywhere
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
llama-cpp-python - Python bindings for llama.cpp
kor - LLM(😽)
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
Abstract Feature Branch - abstract_feature_branch is a Ruby gem that provides a variation on the Branch by Abstraction Pattern by Paul Hammant and the Feature Toggles Pattern by Martin Fowler (aka Feature Flags) to enable Continuous Integration and Trunk-Based Development.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
rasa-haystack
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.