FastChat Alternatives
Similar projects and alternatives to FastChat
- text-generation-webui
A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
- gpt4all
An ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue.
- LocalAI
Self-hosted, community-driven, local OpenAI-compatible API. Drop-in replacement for OpenAI running LLMs on consumer-grade hardware. No GPU required. LocalAI is a RESTful API to run ggml-compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras, and many others.
- CASALIOY
The best toolkit for air-gapped LLMs on consumer-grade hardware.
- open_llama
OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset.
- MiniGPT-4
Enhancing Vision-Language Understanding with Advanced Large Language Models.
- stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
- RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
- mlc-llm
Enable everyone to develop, optimize, and deploy AI models natively on everyone's devices.
- web-llm
Bringing large language models and chat to web browsers. Everything runs inside the browser with no server support.
FastChat reviews and mentions
- Looking for a Finetuning Guide
Each row is an example. I'm thinking of instruction-tuning (https://huggingface.co/datasets/databricks/databricks-dolly-15k) and chat-tuning (https://github.com/lm-sys/FastChat/blob/main/playground/data/dummy.json) some models. 10k+ examples seems to be the ideal number, but I'd also like to get a feel for where the lower limit is.
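For orientation, here is a minimal sketch of converting dolly-15k rows into the conversation structure used by dummy.json. The dolly field names follow the published dataset schema and the output shape mirrors the linked dummy.json, but verify both against the actual files before training:

```python
# Rough sketch: map dolly-15k instruction rows onto FastChat-style conversation
# records (one human -> gpt exchange per row). Field names and output structure
# are assumptions based on the linked dataset and dummy.json -- double-check them.
import json
from datasets import load_dataset

rows = load_dataset("databricks/databricks-dolly-15k", split="train")

records = []
for i, row in enumerate(rows):
    # Fold the optional context into the human turn so each record is a single exchange.
    prompt = row["instruction"]
    if row.get("context"):
        prompt += "\n\n" + row["context"]
    records.append({
        "id": f"dolly_{i}",
        "conversations": [
            {"from": "human", "value": prompt},
            {"from": "gpt", "value": row["response"]},
        ],
    })

with open("dolly_fastchat.json", "w") as f:
    json.dump(records, f, indent=2)
```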
- samantha-7b
- [D] High-quality, open-source implementations of LLMs
There's also Vicuna: https://github.com/lm-sys/FastChat
- WizardLM-30B-Uncensored
Here is the codebase and dataset for WizardVicuna: https://github.com/melodysdreamj/WizardVicunaLM https://github.com/lm-sys/FastChat https://huggingface.co/datasets/RyokoAI/ShareGPT52K
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
I used the FastChat API to load two quantized Vicuna-13 models locally so I could repeatedly query them for the modern translation of a given paragraph from the complete works of Jonathan Swift. Then I LoRA+PEFT-ed Llama-7b to convert from modern English to Swift. Works great: https://huggingface.co/pcalhoun/LLaMA-7b-JonathanSwift
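A minimal sketch of that kind of repeated querying, assuming a FastChat OpenAI-compatible API server is already running locally; the port, the model name, and the use of the pre-1.0 openai client interface are assumptions to adapt to your own setup:

```python
# Sketch: query a locally served model through FastChat's OpenAI-compatible API.
# Uses the pre-1.0 openai client; port and model name below are placeholders.
import openai

openai.api_key = "EMPTY"                      # FastChat's server ignores the key
openai.api_base = "http://localhost:8000/v1"  # local FastChat endpoint, not api.openai.com

paragraphs = [
    "It is a melancholy object to those who walk through this great town...",
]

for paragraph in paragraphs:
    resp = openai.ChatCompletion.create(
        model="vicuna-13b",  # whatever name your model worker registered under
        messages=[
            {"role": "user",
             "content": "Translate this passage into modern English:\n\n" + paragraph},
        ],
        temperature=0.7,
    )
    print(resp["choices"][0]["message"]["content"])
```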
- OpenAI readies new open-source AI model
The questions can be found here; you don't have to guess: https://github.com/lm-sys/FastChat/blob/main/fastchat/eval/table/question.jsonl
- How to run Llama 13B with a 6GB graphics card
These days I use FastChat: https://github.com/lm-sys/FastChat
It's based on Hugging Face Transformers rather than llama.cpp, but it can also run on CPU.
It works well, can be distributed, and very conveniently provides the same REST API as OpenAI GPT.
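Because the endpoint mirrors OpenAI's schema, a raw HTTP call is enough; here is a sketch under the same assumptions as above (a local server on port 8000 with a worker registered as vicuna-13b, both placeholders):

```python
# Sketch: hit FastChat's OpenAI-compatible endpoint with plain requests to show
# that the request body is the familiar OpenAI chat-completions shape.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local FastChat API server
    json={
        "model": "vicuna-13b",  # assumed worker name
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```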
- You guys are missing out on GPT4-x Vicuna
The user message includes the entire ordeal, with as many newlines or punctuation marks as you need. It's just that the actual Python implementation shows there is a single space (defined where it says sep=" ",) between the user message and the beginning of the assistant output.
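To make the separator point concrete, here is a toy sketch of that style of prompt assembly; the system prompt and role names below are illustrative placeholders, not the exact GPT4-x Vicuna template:

```python
# Toy illustration of the separator behavior described above: turns are joined with
# sep=" ", so exactly one space sits between the user's message and the "ASSISTANT:"
# marker that begins the model's reply. System prompt and role names are made up.
system = "A chat between a curious user and an artificial intelligence assistant."
sep = " "

def build_prompt(user_message: str) -> str:
    # Each turn is "ROLE: text"; turns are glued together with the separator.
    turns = [system, f"USER: {user_message}", "ASSISTANT:"]
    return sep.join(turns)

print(repr(build_prompt("Write a haiku about Vicuna.")))
# -> '... USER: Write a haiku about Vicuna. ASSISTANT:'
```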
- Air-gapped langchain Agent. Talk to your Data privately
You might want to check out this detailed repo, too.
- Difference between oobabooga, lmsys.org and stability.ai?
I was searching for self-hosted ChatGPT alternatives that I can train on my own data. I found three interesting projects, but I don't really understand the difference between them or how to choose one: https://github.com/oobabooga/text-generation-webui https://github.com/lm-sys/FastChat https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot
Stats
lm-sys/FastChat is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of FastChat is Python.