LocalAI vs FastChat

| | LocalAI | FastChat |
|---|---|---|
| Mentions | 83 | 83 |
| Stars | 19,862 | 34,277 |
| Growth | 8.3% | 3.6% |
| Activity | 9.9 | 9.6 |
| Latest commit | 4 days ago | 3 days ago |
| Language | C++ | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LocalAI
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
- What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
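A minimal sketch of that drop-in swap, assuming a LocalAI server on its default port (8080) and a locally configured model name; neither detail comes from the comment above:

```python
# Point the official OpenAI Python client (openai>=1.0) at LocalAI
# instead of api.openai.com. Port and model name are assumptions:
# LocalAI listens on :8080 by default, and the model name must match
# one configured on your server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # LocalAI's OpenAI-compatible endpoint
    api_key="not-needed",                 # LocalAI ignores the key by default
)

response = client.chat.completions.create(
    model="gpt-4",  # resolves to whatever local model you mapped to this name
    messages=[{"role": "user", "content": "Say hello from a self-hosted model."}],
)
print(response.choices[0].message.content)
```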
- "ChatGPT romanesc" ("Romanian ChatGPT")
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
- Local LLMs to run on old iMac / Hardware
Your hardware should be fine for inference, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inference process to 2 or 3 threads (see the thread-capping sketch after the model links below). That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
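Picking up the thread-limiting tip above: a minimal sketch of capping inference threads, using llama-cpp-python (listed under alternatives at the bottom of the page) since it exposes the knob directly. The model path is a placeholder, and LocalAI exposes a similar per-model `threads` option in its model config.

```python
# Cap CPU threads for local inference so other workloads stay responsive.
# Uses llama-cpp-python; the GGUF path below is a placeholder for one of
# the models linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="./tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf",  # hypothetical local file
    n_threads=2,  # leave cores free on older machines
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```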
FastChat
- GPT4.5 or GPT5 being tested on LMSYS?
gpt2-chatbot isn't the only "mystery model" on LMSYS. Another is "deluxe-chat".
When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"
One distinguishing feature of "deluxe-chat": although it gives high-quality answers, it is very slow, so slow that the arena displays a warning whenever it is invoked.
[0] https://github.com/lm-sys/FastChat/issues/2527
- LLMs on your local Computer (Part 1)
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
- LM Studio – Discover, download, and run local LLMs
How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
The feature sets seem to overlap a fair amount. One limitation of FastChat, as far as I can tell, is that you're restricted to the models FastChat explicitly supports (though I think it would be a minor change to add support for arbitrary models?).
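A rough sketch of what that "minor change" might look like, based on my reading of FastChat's adapter registry in fastchat/model/model_adapter.py; the class and function names should be treated as assumptions rather than documented API:

```python
# FastChat resolves models through adapter classes, so teaching it a new
# model family is mostly a matter of registering one more adapter.
from fastchat.model.model_adapter import BaseModelAdapter, register_model_adapter


class MyModelAdapter(BaseModelAdapter):
    """Hypothetical adapter for a model family FastChat doesn't know about."""

    def match(self, model_path: str) -> bool:
        # Claim any checkpoint whose path mentions our (made-up) model name;
        # loading and the conversation template fall back to the base class.
        return "my-model" in model_path.lower()


register_model_adapter(MyModelAdapter)
```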
- Video-LLaVA
Looks like the Vicuna repo is Apache 2.0 also[1].
What interpretation of copyright law would prevent the code from being Apache 2.0, based on the source of the fine-tuning dataset?
[1] https://github.com/lm-sys/FastChat
- 🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
Check out how to get started with FastChat, and support FastChat on GitHub ⭐
- Show HN: ChatAPI – PWA to Use ChatGPT by API, Built with Alpine.js
For something a little heavier but much more robust in terms of features/functionality I've been enjoying FastChat: https://github.com/lm-sys/FastChat
It allows you to plug in different backends so that you can use OpenAI-compatible clients with various LLMs, self-hosted or otherwise.
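To make the "OpenAI-compatible clients" point concrete, a minimal sketch along the same lines as the LocalAI example above, assuming a FastChat OpenAI API server on its default port (8000) with a Vicuna worker registered; those deployment details are assumptions:

```python
# Query FastChat's OpenAI-compatible REST server (started with
# `python3 -m fastchat.serve.openai_api_server`) using the official
# OpenAI client. Host, port, and model name must match your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # FastChat's server does not check the API key
)

response = client.chat.completions.create(
    model="vicuna-7b-v1.5",  # must match a model worker you registered
    messages=[{"role": "user", "content": "Summarize what FastChat does."}],
)
print(response.choices[0].message.content)
```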
What are some alternatives?
gpt4all - Run open-source LLMs anywhere
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama.cpp - LLM inference in C/C++
llama-cpp-python - Python bindings for llama.cpp
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.