FastChat
open_llama
| | FastChat | open_llama |
|---|---|---|
| Mentions | 82 | 52 |
| Stars | 33,464 | 7,179 |
| Growth | 4.8% | 1.2% |
| Activity | 9.7 | 5.3 |
| Latest commit | 6 days ago | 9 months ago |
| Language | Python | - |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
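To make the metric concrete, here is a minimal sketch of how such a recency-weighted percentile score could be computed. The 30-day half-life and the percentile mapping are assumptions for illustration; the exact formula behind the 9.7 and 5.3 figures above is not published.

```python
# Hedged sketch: one plausible way a relative activity score like the
# 9.7 / 5.3 above could be derived. The half-life and percentile mapping
# are assumptions, not the site's actual formula.

def commit_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Recent commits count more: a commit's weight halves every 30 days."""
    return 0.5 ** (age_days / half_life_days)

def activity_score(commit_ages: list[float],
                   all_projects: list[list[float]]) -> float:
    """Map a project's weighted commit total to a 0-10 percentile rank."""
    raw = sum(commit_weight(a) for a in commit_ages)
    totals = [sum(commit_weight(a) for a in ages) for ages in all_projects]
    rank = sum(t <= raw for t in totals) / len(totals)
    return round(10 * rank, 1)  # 9.0+ means top 10% of tracked projects
```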
FastChat
- LLMs on your local Computer (Part 1)
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
- LM Studio – Discover, download, and run local LLMs
How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
Feature set seems like a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that you're limited to the models FastChat supports (though I think it would be a minor change to make it support arbitrary models?)
- Video-LLaVA
Looks like the Vicuna repo is Apache 2.0 also[1].
What's the interpretation of copyright law that would prevent the code being Apache 2.0 based on the source of the fine-tuning dataset?
- Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot
Check out how to get started with FastChat, and support FastChat on GitHub.
- Show HN: ChatAPI – PWA to use ChatGPT via API, built with Alpine.js
For something a little heavier but much more robust in terms of features/functionality I've been enjoying FastChat: https://github.com/lm-sys/FastChat
It allows you to plug in different backends so that you can use OpenAI-compatible clients with various LLMs, self-hosted or otherwise.
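For context, FastChat exposes an OpenAI-compatible REST API, so a stock OpenAI client can talk to a self-hosted model. A minimal sketch, assuming FastChat's controller, a model worker, and its OpenAI API server are already running locally; the port and model name below depend on your setup:

```python
# Minimal sketch: the standard OpenAI Python client pointed at a
# self-hosted FastChat server. Assumes the controller, a model worker
# (e.g. a Vicuna checkpoint), and the OpenAI-compatible API server are
# running on localhost:8000 -- adjust to your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # FastChat's OpenAI-compatible endpoint
    api_key="EMPTY",                      # FastChat does not check the key by default
)

response = client.chat.completions.create(
    model="vicuna-7b-v1.5",               # whatever model your worker serves
    messages=[{"role": "user", "content": "Summarize FastChat in one sentence."}],
)
print(response.choices[0].message.content)
```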
- FLaNK Stack Weekly 09 Oct 2023
open_llama
- How Open is Generative AI? Part 2
The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta's restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
- GPT-4 API general availability
OpenLLaMA is though. https://github.com/openlm-research/open_llama
All of these are surmountable problems.
We can beat OpenAI.
We can drain their moat.
- Recommend me a computer for local AI for $500
#1: Open-source reproduction of Meta AI's LLaMA: OpenLLaMA-13B released (trained for 1T tokens) | 0 comments
#2: #1 on HuggingFace.co's leaderboard: Falcon 40B is now free (Apache 2.0 license) | 0 comments
#3: Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
- Who is OpenLLaMA from?
Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
- Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
I can't use Llama or any model from the Llama family, due to license restrictions. Although now there's also the OpenLLaMA family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs, likely in a month or so, when I've implemented all the product features currently on my backlog.
- XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
https://github.com/openlm-research/open_llama#update-0615202...).
XGen-7B is probably the superior 7B model; it's trained on more tokens and with a longer default sequence length (although both can presumably adopt SuperHOT-style Position Interpolation to extend context), but larger models still probably perform better on an absolute basis.
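For reference, the Position Interpolation trick mentioned above rescales position indices back into the range the model saw during training, rather than extrapolating past it. A minimal sketch for RoPE-style embeddings; the function name and the 2048-to-8192 lengths are illustrative, not any library's API:

```python
# Illustrative sketch of Position Interpolation (the idea behind
# SuperHOT-style context extension): squeeze positions for a longer
# target context back into the trained range before computing rotary
# (RoPE) angles. Names and lengths here are assumptions.
import torch

def interpolated_rope_angles(seq_len: int, dim: int,
                             train_len: int = 2048,
                             base: float = 10000.0) -> torch.Tensor:
    positions = torch.arange(seq_len, dtype=torch.float32)
    scale = min(1.0, train_len / seq_len)    # e.g. 2048 / 8192 = 0.25
    positions = positions * scale            # positions now fall in [0, train_len)
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions, inv_freq)  # (seq_len, dim/2) rotation angles

angles = interpolated_rope_angles(seq_len=8192, dim=128)
cos, sin = angles.cos(), angles.sin()        # feed into the usual RoPE rotation
```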
- MosaicML Agrees to Join Databricks to Power Generative AI for All
Compare it to OpenLLaMA. Its GitHub repo doesn't have a single script showing how to do anything.
- Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:
- Containerized AI before Apocalypse
The deployed LLM, Orca Mini, has 3 billion parameters and is based on the OpenLLaMA project.
- AI – weekly megathread!
OpenLM Research released its 1T-token version of OpenLLaMA 13B, the permissively licensed open-source reproduction of Meta AI's LLaMA large language model.
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama.cpp - LLM inference in C/C++
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
gpt4all - gpt4all: run open-source LLMs anywhere
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
gorilla - Gorilla: An API store for LLMs
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and also offers voice cloning.
ggml - Tensor library for machine learning
llama-cpp-python - Python bindings for llama.cpp
gpt-json - Structured and typehinted GPT responses in Python