LocalAI vs basaran

| | LocalAI | basaran |
| --- | --- | --- |
| Mentions | 83 | 22 |
| Stars | 19,862 | 1,281 |
| Growth | 8.3% | - |
| Activity | 9.9 | 10.0 |
| Latest commit | 4 days ago | 3 months ago |
| Language | C++ | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LocalAI
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
-
What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
-
OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
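Because LocalAI exposes an OpenAI-compatible REST API, "swapping" usually amounts to pointing your client at a different base URL. A minimal stdlib sketch of what such a request looks like; the port (8080) and model name are assumptions for illustration, not LocalAI defaults you should rely on:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style /v1/chat/completions request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Point at a LocalAI instance instead of api.openai.com
# (port and model name are hypothetical):
req = build_chat_request("http://localhost:8080", "mistral-7b", "Hello!")
# urllib.request.urlopen(req) would send it; omitted here since no server is running.
```

The same request body works against the real OpenAI endpoint, which is what makes the drop-in swap possible.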
-
"Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
-
Local LLM's to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inferencing process to 2 or 3 threads. That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
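The thread-limiting advice above maps onto LocalAI's per-model YAML configuration. A hedged sketch of such a config; the file name and field layout are assumptions based on LocalAI's documented model-config format, so check the current docs before using it:

```yaml
# models/tinyllama-chat.yaml (hypothetical file name)
name: tinyllama-chat
parameters:
  model: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
# Cap inference at a few threads so other workloads stay responsive
threads: 2
```

Lowering `threads` trades generation speed for leaving CPU headroom, which is the sensible default on older shared hardware.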
basaran
- OpenLLM
-
Langchain and self hosted LLaMA hosted API
What are the current best "no reinventing the wheel" approaches to have Langchain use an LLM through a locally hosted REST API, the likes of Oobabooga or hyperonym/basaran with streaming support for 4-bit GPTQ?
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM-tools on github [1]
A few thoughts:
* allow for custom endpoint URLs, so people can use open-source LLMs behind an OpenAI-compatible API backend like basaran[2] or llama-api-server[3]
* look into better embedding methods for info-retrieval like InstructorEmbeddings or Document Summary Index
* Don't use a single embedding per content item, use multiple to increase retrieval quality
1 https://github.com/underlines/awesome-marketing-datascience/...
2 https://github.com/hyperonym/basaran
3 https://github.com/iaalm/llama-api-server
-
1-Jun-2023
open-source alternative to the OpenAI text completion API (https://github.com/hyperonym/basaran)
- Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
- Basaran is an open-source alternative to the OpenAI text completion API
-
Ask HN: What's the best self hosted/local alternative to GPT-4?
Guanaco-65B[0] using Basaran[1] for your OpenAI compatible API. You can use any ChatGPT front-end which lets you change the OpenAI endpoint URL.
[0] A 4-bit finetune of LLaMA-65B by Tim Dettmers
[1] https://github.com/hyperonym/basaran
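Basaran's OpenAI-compatible streaming responses arrive as server-sent events: each chunk is a `data:` line carrying a JSON completion fragment, terminated by `data: [DONE]`. A minimal stdlib sketch of parsing such a stream; the sample lines below are illustrative, following the OpenAI completion-chunk shape rather than captured Basaran output:

```python
import json

def parse_sse_completion(lines):
    """Join the text pieces from OpenAI-style streaming 'data:' lines."""
    pieces = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        pieces.append(chunk["choices"][0]["text"])
    return "".join(pieces)

# An example stream as it might come back from /v1/completions with stream=true:
sample = [
    'data: {"choices": [{"text": "Hello"}]}',
    'data: {"choices": [{"text": ", world"}]}',
    "data: [DONE]",
]
print(parse_sse_completion(sample))  # -> Hello, world
```

Because the framing matches OpenAI's, any client that already handles OpenAI streaming (including most ChatGPT front-ends) can consume it unchanged after switching the endpoint URL.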
-
Are all the finetunes stupid?
For lm-eval, I think you'd either need to take GPTQ's inference script and shim it into a model: https://github.com/EleutherAI/lm-evaluation-harness/tree/master/lm_eval/models or you might be able to use a project like https://github.com/hyperonym/basaran and then you could use the gpt3 model...
-
Using the API in Node
There are also: - Basaran repo: "Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models". "...Compatibility with OpenAI API and client libraries..."; - llama-cpp-python repo: "Simple Python bindings for @ggerganov's llama.cpp library...". "...OpenAI-like API...".
-
Researcher looking for help with how to prepare a finetuning dataset for models like Bloomz and Cerebras-GPT
I want to start with a totally freely available model, so again, that excludes things like LLaMA where the weights are only available through a wait list. The two models that most get my attention and (I think, and hope) fit my criteria of open availability are Cerebras-GPT (13B) and Bloomz (7B). The tools for processing and fine-tuning that seem most feasible to me, from my limited knowledge, are xturing and basaran.
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
text-generation-inference - Large Language Model Text Generation Inference
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
openai-chatgpt-opentranslator - Python command that uses openai to perform text translations
llama-cpp-python - Python bindings for llama.cpp
AutoGPTQ - An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llm-foundry - LLM training code for Databricks foundation models
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM