| | llm-foundry | text-generation-webui |
|---|---|---|
| Mentions | 37 | 876 |
| Stars | 3,730 | 36,827 |
| Growth | 4.0% | - |
| Activity | 9.7 | 9.9 |
| Latest commit | 4 days ago | 5 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-foundry
-
Fine Tuning Mistral 7B on Magic the Gathering Draft
Related comment from gwern: https://news.ycombinator.com/item?id=38438859
Also, why QLoRA rather than a full finetune? Using LambdaLabs, it'd cost roughly the same as your quote; cheaper, I think, if you're willing to gamble with fp8: https://github.com/mosaicml/llm-foundry/tree/main/scripts/tr.... And there are fewer hyperparameters to tune as well.
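For readers unfamiliar with the trade-off being debated: QLoRA loads the base model in 4-bit and trains only small adapter matrices, trading some fidelity for much lower GPU memory than a full finetune. A minimal sketch with the transformers/peft/bitsandbytes stack (model name and hyperparameters are illustrative, not from the thread):

```python
# Minimal QLoRA setup sketch: 4-bit quantized base model + trainable LoRA adapters.
# Model name and hyperparameters are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```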
-
Consortium launched to build the largest open LLM
Traditionally, training runs can "explode" and fail, but there are methods to incrementally back them up and resume when that happens; see https://www.mosaicml.com/blog/mpt-7b
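A minimal sketch of that back-up-and-resume pattern in plain PyTorch (llm-foundry's Composer trainer handles this automatically; the code below is a generic illustration, not MosaicML's implementation):

```python
# Generic checkpoint/resume pattern; names and the save interval are illustrative.
import os
import torch

CKPT_PATH = "latest_checkpoint.pt"

def save_checkpoint(model, optimizer, step):
    torch.save(
        {"model": model.state_dict(), "optim": optimizer.state_dict(), "step": step},
        CKPT_PATH,
    )

def load_checkpoint(model, optimizer):
    """Return the step to resume from (0 if no checkpoint exists)."""
    if not os.path.exists(CKPT_PATH):
        return 0
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optim"])
    return state["step"]

# In the training loop, save periodically so a crash or a loss explosion
# only costs the work done since the last checkpoint:
#   if step % 1000 == 0:
#       save_checkpoint(model, optimizer, step)
```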
-
Applying All Recent Innovations To Train a Code Model
MosaicML released the MPT-7B model, whose StoryWriter variant handles a context of 65k tokens thanks to the ALiBi position encoding.
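ALiBi replaces learned position embeddings with a fixed, distance-proportional penalty on attention scores, which is what lets the model extrapolate beyond its training context. A rough sketch of the bias matrix (illustrative; see the ALiBi paper and llm-foundry for the real implementation):

```python
# ALiBi sketch: a linear distance penalty added to attention logits.
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Per-head slopes form a geometric sequence (1/2, 1/4, ... for 8 heads).
    # The exact schedule for non-power-of-two head counts differs; see the paper.
    slopes = torch.tensor([2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)])
    pos = torch.arange(seq_len)
    # distance[i, j] = j - i, non-positive for keys at or before the query;
    # future positions are clamped to 0 here and masked out separately.
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)
    return slopes[:, None, None] * distance  # shape: (heads, seq, seq)

# Added to attention scores before softmax:
#   scores = q @ k.transpose(-1, -2) / d**0.5 + alibi_bias(n_heads, T)
```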
-
Fine Tuning Language Models
Most AI runners just ignore licensing and run LLaMA finetunes.
But if you want to avoid the non-commercial LLaMA license, you have three good options for a base model:
- OpenLLaMA 13B
- MPT 30B
- Falcon 40B
Of these, Falcon 40B is very difficult to run (slow in 4-bit, basically requires a professional GPU, no good CPU offloading yet).
OpenLLaMA 13B only supports a context size of 2048 as of today... But that could change soon.
So you probably want MPT-30B-instruct, specifically this one:
https://huggingface.co/TheBloke/mpt-30B-instruct-GGML
As the page says, you can try it out on a decent PC of your own with the OpenCL build of KoboldCPP. Switch it to "instruct" mode, use the template on the page (see the prompt-format sketch after this comment), and offload as many layers as you can to your PC's dGPU. It may already work for your summarization needs.
If not, you can finetune it with MPT's code and summarization data:
https://github.com/mosaicml/llm-foundry
Or train OpenLLaMA 13B with SuperHOT + summarization data using QLoRA.
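For reference, MPT-30B-instruct uses an Alpaca-style instruction template; the wording below is quoted from memory of the model card, so verify it against the Hugging Face page before relying on it:

```python
# Formatting a summarization request with MPT-30B-instruct's template.
# Template wording is from memory of the model card; double-check against
# https://huggingface.co/TheBloke/mpt-30B-instruct-GGML before relying on it.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

def make_prompt(instruction: str) -> str:
    return TEMPLATE.format(instruction=instruction)

print(make_prompt("Summarize the following article:\n<article text here>"))
```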
-
Finetune MPT-30B using QLORA
BTW, they finally merged an MPT patch to work with LoRA: https://github.com/mosaicml/llm-foundry/issues/304
- [N] Meet MPT-30B: A Fully OpenSource LLM that Outperforms GPT-3 - Dr. Mandar Karhade, MD. PhD.
-
MPT-30B QLoRA on 24 GB VRAM
Did you run into this error while using QLoRA on MPT-30B? https://github.com/mosaicml/llm-foundry/issues/413
-
MosaicML Agrees to Join Databricks to Power Generative AI for All
Yes? Their GitHub repo is under Apache, their base model is under Apache, and the training data is not theirs, but they provide scripts to convert it for the pretraining step. They have scripts for pretraining and finetuning as well; basically for everything.
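The conversion step mentioned above writes raw text into MosaicML's streaming MDS format, which the pretraining scripts then read. A minimal sketch with the mosaicml-streaming library (paths and columns are illustrative):

```python
# Sketch: writing text samples into MosaicML's streaming MDS format,
# the on-disk format llm-foundry's pretraining scripts consume.
# Output path and column schema are illustrative.
from streaming import MDSWriter

samples = [
    {"text": "First training document."},
    {"text": "Second training document."},
]

with MDSWriter(out="./my-dataset-mds", columns={"text": "str"}) as writer:
    for sample in samples:
        writer.write(sample)
```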
-
Best model for commercial use?
mosaicml/llm-foundry: LLM training code for MosaicML foundation models (github.com)
-
MosaicML launches MPT-30B: A new open-source model that outperforms GPT-3
MosaicML, a company that provides a platform for training and deploying large language models (LLMs), recently released its second open-source foundation model, MPT-30B. The model is part of the MosaicML Foundation Series and follows the smaller MPT-7B model launched in May 2023.
text-generation-webui
-
Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
Some of the tools offer a path to doing tool use (fetching URLs and doing things with them) or RAG (searching your documents). I think Oobabooga https://github.com/oobabooga/text-generation-webui offers the latter through plugins.
Our tool, https://github.com/transformerlab/transformerlab-app also supports the latter (document search) using local LLMs.
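As a rough illustration of what these document-search (RAG) features do under the hood: embed the documents, embed the query, and retrieve the nearest chunk to prepend to the prompt. A bare-bones sketch with sentence-transformers (model choice is illustrative; real plugins also chunk and rerank):

```python
# Bare-bones retrieval step behind "chat with your documents" features.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Invoices are due within 30 days.",
    "Support is available on weekdays.",
    "Refunds require a receipt.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, convert_to_tensor=True)

query = "When do I have to pay an invoice?"
query_emb = model.encode(query, convert_to_tensor=True)

best = util.cos_sim(query_emb, doc_emb).argmax().item()
context = docs[best]  # would be prepended to the LLM prompt
print(context)
```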
-
Ask HN: How to get started with local language models?
You can use webui https://github.com/oobabooga/text-generation-webui
Once you get a version up and running, make a copy before updating it; updates have broken my working version several times and caused headaches.
A decent explanation of the parameters, short of reading arXiv papers: https://github.com/oobabooga/text-generation-webui/wiki/03-%...
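The knobs that wiki page describes (temperature, top-p, top-k, repetition penalty) map directly onto Hugging Face generate() arguments; a quick illustration with a small model (values are illustrative, not recommendations):

```python
# The sampling knobs the webui exposes correspond to transformers'
# generate() arguments. gpt2 is used here only so the example runs anywhere.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Local LLMs are", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,                    # sample instead of greedy decoding
    temperature=0.7,                   # <1 sharpens the distribution
    top_p=0.9,                         # nucleus sampling: keep top 90% of mass
    top_k=40,                          # also cap candidates at the 40 most likely
    repetition_penalty=1.1,            # discourage verbatim loops
    max_new_tokens=40,
    pad_token_id=tok.eos_token_id,     # silence the missing-pad-token warning
)
print(tok.decode(out[0], skip_special_tokens=True))
```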
-
text-generation-webui VS LibreChat - a user suggested alternative
2 projects | 29 Feb 2024
- Show HN: I made an app to use local AI as daily driver
-
Ask HN: People who switched from GPT to their own models. How was it?
The other answers recommend paths that give you (1) less control and (2) projects with smaller ecosystems.
If you want a truly general purpose front-end for LLMs, the only good solution right now is oobabooga: https://github.com/oobabooga/text-generation-webui
All other alternatives support only a small fraction of the features and LLM backends that oobabooga does.
-
AI Girlfriend Is a Data-Harvesting Horror Show
The example waifu in text-generation-webui is good enough for me.
https://github.com/oobabooga/text-generation-webui/blob/main...
-
Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
> Downloading text-generation-webui takes a minute, lets you use any model and get going.
What you're missing here is you're already in this area deep enough to know what ooogoababagababa text-generation-webui is. Let's back out to the "average Windows desktop user" level. Assuming they even know how to find it:
1) Go to https://github.com/oobabooga/text-generation-webui?tab=readm...
2) See a bunch of instructions about opening a terminal window and running random batch/PowerShell scripts. PowerShell, etc. will likely prompt you with a scary warning. Then you start wondering who ooobabagagagaba is...
3) Assuming you get this far (many users won't even get to step 1), you're greeted with a web interface[0] FILLED to the brim with technical jargon and extremely overwhelming options just to get a model loaded, which is another mind warp because you have to select between a bunch of random models with no clear meaning and nonsensical/joke-sounding names from someone called "TheBloke". Ok...
Let's say you somehow braved this gauntlet and get this far now you get to chat with it. Ok, what about my local documents? text-generation-webui itself has nothing for that. Repeat this process over the 10 random open source projects from a bunch of names you've never heard of in an attempt to accomplish that.
This is "I saw this thing from Nvidia explode all over media, twitter, youtube, etc. I downloaded it from Nvidia, double-clicked, pointed it at a folder with documents, and it works".
That's the difference and it's very significant.
[0] - https://raw.githubusercontent.com/oobabooga/screenshots/main...
-
Ask HN: What are your top 3 coolest software engineering tools?
Maybe a cop-out answer, but setting up a local LLM on my development machine has been invaluable. I use DeepSeek Coder 6.7B [0] and Oobabooga's UI [1]. It helps me solve simple problems and find bugs, while still leaving the larger architecture decisions to me.
[0] https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instr...
[1] https://github.com/oobabooga/text-generation-webui
-
Meta AI releases Code Llama 70B
You can download it and run it with [this](https://github.com/oobabooga/text-generation-webui). There's an API mode that you could leverage from your VS Code extension.
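A minimal sketch of calling that API mode from Python, assuming the server was started with the --api flag and is serving its OpenAI-compatible endpoint on the default port 5000 (adjust the URL if your setup differs):

```python
# Calling text-generation-webui's OpenAI-compatible API from a script or
# editor extension. URL and port assume the defaults; check your launch flags.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Write a short docstring for a binary search function."}],
        "max_tokens": 200,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```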
-
Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching, etc.), it would be nice to be able to simply switch the API client to Ollama without maintaining a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI-compatible API, my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint (see the sketch after this comment). Regarding chat formatting, even ooba's Mistral formatting has issues [1], so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2].
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
Related question: I assume Ollama auto-detects and applies the right chat formatting template for a model?
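Both halves of that workflow are easy to demonstrate: the official openai client can be pointed at a local ooba endpoint via base_url, and apply_chat_template produces model-specific chat formatting. A sketch (endpoint URL and model name are illustrative):

```python
# 1) Reusing existing OpenAI-client code against a local ooba endpoint:
#    only the base_url (and a dummy key) change. URL is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="local",  # ooba serves whatever model is loaded; the name is ignored
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)

# 2) Letting the tokenizer apply the model's own chat template, as the
#    commenter does in Langroid, instead of hand-rolling the format:
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # e.g. "<s>[INST] Hello! [/INST]"
```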
What are some alternatives?
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
llama.cpp - LLM inference in C/C++
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
gpt4all - gpt4all: run open-source LLMs anywhere
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
KoboldAI-Client
llm-numbers - Numbers every LLM developer should know
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.