| | llama.onnx | LocalAI |
|---|---|---|
| Mentions | 2 | 83 |
| Stars | 324 | 20,076 |
| Growth | - | 9.3% |
| Activity | 7.3 | 9.9 |
| Latest commit | 10 months ago | 7 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama.onnx
- Qnap TS-264
You can find LLM models in the onnx format here: https://github.com/tpoisonooo/llama.onnx
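As a rough illustration (not taken from the llama.onnx docs), one of those exported ONNX files can be loaded with onnxruntime to inspect the inputs the graph expects; the file name below is a placeholder:

```python
import onnxruntime as ort

# A minimal sketch: the path is a placeholder for one of the ONNX parts
# exported by llama.onnx, downloaded locally.
session = ort.InferenceSession("llama_decoder.onnx",
                               providers=["CPUExecutionProvider"])

# Print the input tensors the exported graph expects (token ids, masks, ...).
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```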
- Langchain question and answer without openai
You also need an LLM to do this. Check this out to pick one from the LLaMA family; other works such as llama.onnx, alpaca-native, and the LLaMA models on Hugging Face are also worth checking.
LocalAI
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
-
What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
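LocalAI exposes an OpenAI-compatible REST API (it listens on port 8080 by default), so a running server can be queried with any HTTP client. A minimal sketch, assuming a local instance with a model configured under the name used below:

```python
import requests

# A sketch: host, port, and model name are assumptions about your setup,
# not values fixed by LocalAI itself.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",  # whatever name you gave the model in LocalAI
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```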
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
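Because the API is OpenAI-compatible, the official openai Python client can be pointed at LocalAI just by overriding its base URL; the rest of the calling code stays unchanged (the linked page covers the function-calling side). A sketch, assuming the same local server and model name as above:

```python
from openai import OpenAI

# A sketch of the drop-in-replacement idea: the stock openai client,
# redirected at a LocalAI instance. Base URL and model name are assumptions.
client = OpenAI(base_url="http://localhost:8080/v1",
                api_key="sk-local")  # LocalAI ignores the key by default

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model name configured in LocalAI
    messages=[{"role": "user", "content": "Summarize what LocalAI does."}],
)
print(reply.choices[0].message.content)
```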
- "Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
- Local LLMs to run on old iMac / hardware
Your hardware should be fine for inference, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inference process to 2 or 3 threads. That should get it working; I've been able to run even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated. (A thread-limited sketch follows the model links below.)
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
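On the thread-limiting point above: LocalAI reads its thread count from the model's configuration, but the same idea is easy to see with llama-cpp-python (listed under alternatives below), which takes the thread count directly. A sketch, with a placeholder GGUF path:

```python
from llama_cpp import Llama

# A rough sketch of CPU-only inference pinned to a few threads, using
# llama-cpp-python rather than LocalAI itself; the model path is a placeholder.
llm = Llama(
    model_path="tinyllama-1.1b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_threads=2,  # leave cores free for everything else on an old machine
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```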
What are some alternatives?
llama.cpp - LLM inference in C/C++
gpt4all - gpt4all: run open-source LLMs anywhere
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
fastT5 - ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x.
llama-cpp-python - Python bindings for llama.cpp
motorhead - 🧠 Motorhead is a memory and information retrieval server for LLMs.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
AST-1 - Join the movement led by IZX.ai to create the world's best open-source LLM.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama2.openvino - This sample shows how to implement a llama-based model with OpenVINO runtime
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.