llm-foundry vs basaran

| | llm-foundry | basaran |
|---|---|---|
| Mentions | 37 | 22 |
| Stars | 3,730 | 1,281 |
| Growth | 4.0% | - |
| Activity | 9.7 | 10.0 |
| Latest commit | 4 days ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-foundry
-
Fine Tuning Mistral 7B on Magic the Gathering Draft
Related comment from gwern: https://news.ycombinator.com/item?id=38438859
Also - why QLoRA rather than a full finetune? Using Lambda Labs, it'd cost roughly the same as your quote. Cheaper, I think, if you're willing to gamble with fp8: https://github.com/mosaicml/llm-foundry/tree/main/scripts/tr.... And there are fewer hyperparameters to tune as well.
-
Consortium launched to build the largest open LLM
Training runs can "explode" and fail, but there are methods to incrementally back them up and resume when that happens; see https://www.mosaicml.com/blog/mpt-7b
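Conceptually this is just periodic state saving plus resume-on-restart. A minimal sketch in plain PyTorch (paths, model, and interval are placeholders; this is not MosaicML's actual mechanism, which Composer automates for llm-foundry):

```python
import os
import torch
from torch import nn, optim

CKPT_PATH = "checkpoints/latest.pt"  # hypothetical location

model = nn.Linear(512, 512)  # stand-in for a real LLM
opt = optim.AdamW(model.parameters(), lr=1e-4)
start_step = 0

# Resume from the most recent checkpoint if one exists.
if os.path.exists(CKPT_PATH):
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1

for step in range(start_step, 10_000):
    loss = model(torch.randn(8, 512)).pow(2).mean()  # dummy objective
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Periodically persist the full training state so a crashed or
    # diverged run can restart from the last good step instead of zero.
    if step % 500 == 0:
        os.makedirs(os.path.dirname(CKPT_PATH), exist_ok=True)
        torch.save(
            {"model": model.state_dict(), "optimizer": opt.state_dict(), "step": step},
            CKPT_PATH,
        )
```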
-
Applying All Recent Innovations To Train a Code Model
MosaicML released the MPT-7B model, whose StoryWriter variant supports a 65k-token context thanks to ALiBi position encoding.
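For context, ALiBi drops learned position embeddings and instead adds a linear, per-head distance penalty to the attention logits, which is what allows extrapolating beyond the trained context length. A minimal sketch of the bias construction (assumes a power-of-two head count, following the slope schedule from the ALiBi paper):

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Build the (n_heads, seq_len, seq_len) bias added to attention
    logits: each head penalizes attention linearly in the distance
    between query and key, with a head-specific geometric slope."""
    slopes = torch.tensor([2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)])
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]           # j - i, negative for past keys
    bias = slopes[:, None, None] * distance[None, :, :]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    return bias.masked_fill(~causal, float("-inf"))  # mask future positions

# scores: (batch, n_heads, seq_len, seq_len) pre-softmax attention logits
# scores = scores + alibi_bias(n_heads, seq_len)
```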
-
Fine Tuning Language Models
Most AI runners just ignore licensing and run LLaMA finetunes.
But if you want to avoid the non-commercial LLaMA license, you have three good options for a base model:
- OpenLLaMA 13B
- MPT 30B
- Falcon 40B
Of these, Falcon 40B is very difficult to run (slow in 4-bit, basically requires a professional GPU, and has no good CPU offloading yet).
OpenLLaMA 13B only supports a context size of 2048 as of today... but that could change soon.
So you probably want MPT instruct 30B, specifically this one:
https://huggingface.co/TheBloke/mpt-30B-instruct-GGML
As the page says, you can try it out on a decent PC of your own with the OpenCL build of KoboldCPP: switch it to "instruct" mode, use the prompt template from the page, and offload as many layers as you can to your PC's dGPU. It may already work for your summarization needs.
If not, you can finetune it with MPT's code and summarization data:
https://github.com/mosaicml/llm-foundry
Or train OpenLLaMA 13B with SuperHOT + summarization data using QLoRA.
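A minimal QLoRA sketch using the Hugging Face stack (transformers + peft + bitsandbytes) rather than llm-foundry's own training path; the model id, LoRA hyperparameters, and the `Wqkv` target module are assumptions to adapt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mosaicml/mpt-30b-instruct"  # assumed; any causal LM works

# Load the frozen base model in 4-bit NF4 so it fits in modest VRAM.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, trust_remote_code=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Attach small trainable LoRA adapters on top of the quantized weights.
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv"],  # MPT's fused attention projection (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then train with transformers.Trainer or a custom loop on your
# summarization data.
```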
-
Finetune MPT-30B using QLORA
BTW, they finally merged an MPT patch to work with LoRA: https://github.com/mosaicml/llm-foundry/issues/304
- [N] Meet MPT-30B: A Fully Open-Source LLM that Outperforms GPT-3 - Dr. Mandar Karhade, MD. PhD.
-
MPT-30B QLoRA on 24 GB VRAM
Did you run into this error while using QLoRA on MPT-30B? https://github.com/mosaicml/llm-foundry/issues/413
-
MosaicML Agrees to Join Databricks to Power Generative AI for All
Yes? Their GitHub repo is under Apache, their base model is under Apache, the training data is not theirs, and they provide scripts for converting it for the pretraining step. They have scripts for pretraining and finetuning as well. Basically for everything.
-
Best model for commercial use?
mosaicml/llm-foundry: LLM training code for MosaicML foundation models (github.com)
-
MosaicML launches MPT-30B: A new open-source model that outperforms GPT-3
MosaicML, a company that provides a platform for training and deploying large language models (LLMs), has recently released its second open-source foundation model called MPT-30B. The model is part of the MosaicML Foundation Series and comes after the smaller MPT-7B model that was launched in May 2023.
basaran
- OpenLLM
-
Langchain and self hosted LLaMA hosted API
What are the current best "no reinventing the wheel" approaches to have Langchain use an LLM through a locally hosted REST API, the likes of Oobabooga or hyperonym/basaran, with streaming support for 4-bit GPTQ?
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM tools on GitHub [1]
A few thoughts:
* allow for custom endpoint URLs; this way people can use open-source LLMs with a fake OpenAI API backend like basaran [2] or llama-api-server [3] (see the sketch after this list)
* look into better embedding methods for info-retrieval like InstructorEmbeddings or Document Summary Index
* Don't use a single embedding per content item; use multiple to increase retrieval quality
1 https://github.com/underlines/awesome-marketing-datascience/...
2 https://github.com/hyperonym/basaran
3 https://github.com/iaalm/llama-api-server
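For example, with the 2023-era openai Python client (v0.x), using Basaran is just a base-URL override; the host, port, API key, and model name below are assumptions to match your deployment:

```python
import openai

openai.api_base = "http://127.0.0.1:80/v1"  # your Basaran instance
openai.api_key = "dummy"  # Basaran ignores it, but the client requires one

resp = openai.Completion.create(
    model="user/llama-30b-4bit",  # whatever model the server loaded
    prompt="Explain retrieval-augmented generation in one sentence.",
    max_tokens=64,
)
print(resp["choices"][0]["text"])
```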
-
1-Jun-2023
open-source alternative to the OpenAI text completion API (https://github.com/hyperonym/basaran)
- Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
- Basaran is an open-source alternative to the OpenAI text completion API
-
Ask HN: What's the best self hosted/local alternative to GPT-4?
Guanaco-65B[0] using Basaran[1] for your OpenAI compatible API. You can use any ChatGPT front-end which lets you change the OpenAI endpoint URL.
[0] A 4-bit (QLoRA) finetune of LLaMA-65B by Tim Dettmers
[1] https://github.com/hyperonym/basaran
-
Are all the finetunes stupid?
For lm-eval, I think you'd either need to take GPTQ's inference script and shim it into a model class (https://github.com/EleutherAI/lm-evaluation-harness/tree/master/lm_eval/models), or you might be able to use a project like https://github.com/hyperonym/basaran and then you could use the gpt3 model...
-
Using the API in Node
There are also:
- Basaran repo: "Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models." "...Compatibility with OpenAI API and client libraries..."
- llama-cpp-python repo: "Simple Python bindings for @ggerganov's llama.cpp library..." "...OpenAI-like API..."
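The question above is about Node, but the streaming flow is the same in any OpenAI client; a minimal Python sketch against an assumed local Basaran instance (same placeholder host and model as before):

```python
import openai

openai.api_base = "http://127.0.0.1:80/v1"  # assumed local Basaran instance
openai.api_key = "dummy"

# stream=True yields incremental chunks instead of one final response.
for chunk in openai.Completion.create(
    model="user/llama-30b-4bit",  # placeholder model name
    prompt="Once upon a time",
    max_tokens=32,
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)
```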
-
Researcher looking for help with how to prepare a finetuning dataset for models like Bloomz and Cerebras-GPT
I want to start with a totally freely available model, so again, that excludes things like LLaMA, where the weights are only available through a wait list. The two models that most get my attention and (I think, and hope) fit my criteria of open availability are Cerebras-GPT (13B) and Bloomz (7B). The tools to process and fine-tune that seem most feasible to me, from my limited knowledge, are xturing and basaran.
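Whatever toolkit you settle on, a common starting point is an instruction-style JSONL file; the field names here are illustrative, not a schema required by xturing or basaran:

```python
import json

# Each line is one training example: an instruction, optional input
# context, and the target output the model should learn to produce.
examples = [
    {
        "instruction": "Summarize the following abstract in one sentence.",
        "input": "We present a method for ...",
        "output": "The paper introduces a method that ...",
    },
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```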
What are some alternatives?
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
text-generation-inference - Large Language Model Text Generation Inference
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
openai-chatgpt-opentranslator - Python command that uses openai to perform text translations
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
AutoGPTQ - An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
llm-numbers - Numbers every LLM developer should know
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
lion-pytorch - 🦁 Lion, new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch
lmql - A language for constraint-guided and efficient LLM programming.