Similar projects and alternatives to FastChat
A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
Port of Facebook's LLaMA model in C/C++
gpt4all: an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue
Inference code for LLaMA models
8-bit CUDA functions for PyTorch
The simplest way to run LLaMA on your local machine
Instruct-tune LLaMA on consumer hardware
:robot: Self-hosted, community-driven, local OpenAI-compatible API. Drop-in replacement for OpenAI running LLMs on consumer-grade hardware. No GPU required. LocalAI is a RESTful API to run ggml compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!
An experimental open-source attempt to make GPT-4 fully autonomous.
CASALIOY ♾️ The best toolkit for air-gapped LLMs on consumer-grade hardware
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
Code and documentation to train Stanford's Alpaca models, and generate the data.
Locally run an Instruction-Tuned Chat-Style LLM
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
🤖 A list of open LLMs available for commercial use.
Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
Python bindings for llama.cpp
A simple one-file way to run various GGML models with KoboldAI's UI
FastChat reviews and mentions
Looking for a Finetuning Guide
2 projects | /r/LocalLLaMA | 29 May 2023
Each row is an example. I'm thinking of instruction-tuning (https://huggingface.co/datasets/databricks/databricks-dolly-15k) and chat-tuning https://github.com/lm-sys/FastChat/blob/main/playground/data/dummy.json some models. 10k+ rows seems to be the ideal number, but I'd also like to get a feel for where the lower limit is.
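As a rough sketch of what bridging those two formats could look like: the snippet below converts a Dolly-style row into a ShareGPT-style record of the kind FastChat's dummy.json uses. The field names (`instruction`/`context`/`response` on the Dolly side, `conversations` with `from`/`value` on the FastChat side) are assumptions based on the two linked files, not verified against FastChat's training code.

```python
# Hypothetical sketch: convert a Dolly-style row (instruction/context/response)
# into a ShareGPT-style conversation record like FastChat's dummy.json.
# All field names here are assumptions drawn from the linked datasets.

def dolly_to_fastchat(row, idx):
    # Fold the optional context into the human turn.
    prompt = row["instruction"]
    if row.get("context"):
        prompt += "\n\n" + row["context"]
    return {
        "id": f"dolly_{idx}",
        "conversations": [
            {"from": "human", "value": prompt},
            {"from": "gpt", "value": row["response"]},
        ],
    }

row = {
    "instruction": "Translate this paragraph to modern English.",
    "context": "",
    "response": "Here is the translation.",
}
record = dolly_to_fastchat(row, 0)
```

Each Dolly row becomes a single two-turn conversation, so a 10k-row instruction set maps to 10k chat examples.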
5 projects | /r/LocalLLaMA | 28 May 2023
[D] High-quality, open-source implementations of LLMs
6 projects | /r/MachineLearning | 22 May 2023
there's also Vicuna: https://github.com/lm-sys/FastChat
11 projects | /r/LocalLLaMA | 22 May 2023
Here is the codebase and dataset for WizardVicuna https://github.com/melodysdreamj/WizardVicunaLM https://github.com/lm-sys/FastChat https://huggingface.co/datasets/RyokoAI/ShareGPT52K
Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
8 projects | /r/LocalLLaMA | 16 May 2023
I used the FastChat API to load two quantized Vicuna-13B models locally so I could repeatedly query them for the modern translation of a given paragraph from the complete works of Jonathan Swift. Then I fine-tuned LLaMA-7B with LoRA+PEFT to convert from modern English to Swift. Works great: https://huggingface.co/pcalhoun/LLaMA-7b-JonathanSwift
OpenAI readies new open-source AI model
2 projects | /r/singularity | 16 May 2023
The questions can be found here, so you don't have to guess: https://github.com/lm-sys/FastChat/blob/main/fastchat/eval/table/question.jsonl
How to run Llama 13B with a 6GB graphics card
12 projects | news.ycombinator.com | 14 May 2023
These days I use FastChat: https://github.com/lm-sys/FastChat
It’s based on Hugging Face Transformers rather than llama.cpp, but it can also run on CPU.
It works well, can be distributed, and very conveniently provides the same REST API as OpenAI's GPT models.
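A minimal sketch of what talking to that OpenAI-compatible endpoint could look like, using only the standard library. The port, endpoint path, and model name ("vicuna-7b-v1.1") are assumptions for illustration; they depend on how the FastChat server was started.

```python
import json
import urllib.request

# Assumed local FastChat server address; adjust to your deployment.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_payload(model, user_message):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

def chat(model, user_message):
    """POST a chat request and return the assistant's reply text."""
    data = json.dumps(build_payload(model, user_message)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires a running server):
# reply = chat("vicuna-7b-v1.1", "Say hello.")
```

Because the request/response shape mirrors OpenAI's, existing OpenAI client code can usually be pointed at the local server by swapping the base URL.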
You guys are missing out on GPT4-x Vicuna
2 projects | /r/LocalLLaMA | 12 May 2023
The user message includes the entire ordeal, with as many newlines or punctuation marks as you need. It’s just that the actual Python implementation shows there is a single space (defined where it says sep=" ",) between the user message and the beginning of the assistant output.
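The separator behavior described above can be sketched as follows. This is a simplified, hypothetical reconstruction of a Vicuna-style conversation template, not FastChat's actual code; the role labels and system prompt are illustrative. The point is that turns are joined with sep=" ", so exactly one space sits between the user message and "ASSISTANT:".

```python
# Simplified sketch of a Vicuna-style prompt template.
# Key detail: turns are joined with sep=" " (a single space).

def build_prompt(system, turns, sep=" "):
    """turns: list of (role, message); message=None marks the turn to generate."""
    parts = [system]
    for role, message in turns:
        if message is None:
            parts.append(f"{role}:")        # the model completes from here
        else:
            parts.append(f"{role}: {message}")
    return sep.join(parts)

prompt = build_prompt(
    "A chat between a user and an assistant.",
    [("USER", "What is 2+2?"), ("ASSISTANT", None)],
)
```

However many newlines the user message itself contains, the joining separator between it and the assistant marker is still that single space.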
Air-gapped langchain Agent. Talk to your Data privately
5 projects | /r/LangChain | 10 May 2023
You might also want to check out this detailed repo
Difference between oobabooga, lmsys.org and stability.ai?
2 projects | /r/Oobabooga | 8 May 2023
I was searching for self-hosted ChatGPT alternatives that I can train on my own data. I found 3 interesting projects, but I don't really understand the difference between them or how to choose one: https://github.com/oobabooga/text-generation-webui https://github.com/lm-sys/FastChat https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot
lm-sys/FastChat is an open source project licensed under Apache License 2.0 which is an OSI approved license.
The primary programming language of FastChat is Python.