| | llm-foundry | chatbot-ui |
|---|---|---|
| Mentions | 37 | 63 |
| Stars | 3,730 | 26,451 |
| Growth | 4.0% | - |
| Activity | 9.7 | 9.4 |
| Last commit | 4 days ago | 4 days ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-foundry
-
Fine Tuning Mistral 7B on Magic the Gathering Draft
Related comment from gwern: https://news.ycombinator.com/item?id=38438859
Also - why QLoRA rather than a full finetune? Using LambdaLabs, it'd cost roughly the same as your quote. Cheaper, I think, if you're willing to gamble with fp8: https://github.com/mosaicml/llm-foundry/tree/main/scripts/tr.... And there are fewer hyperparameters to tune as well.
-
Consortium launched to build the largest open LLM
Traditionally, training runs can "explode" and fail, but there are methods to incrementally back them up and resume when that happens, see https://www.mosaicml.com/blog/mpt-7b
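The backup-and-resume pattern that comment describes can be sketched in plain Python. This is a minimal illustration, not MosaicML's actual checkpointing code: the "training step" is a stand-in, and a real run would also checkpoint model and optimizer state.

```python
import json
import os

CKPT = "checkpoint.json"

def save_checkpoint(step, state):
    # Write to a temp file, then rename atomically, so a crash
    # mid-save never leaves a corrupted checkpoint behind.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)

def load_checkpoint():
    # Resume from the last saved step if a checkpoint exists.
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {"loss": None}  # fresh run

def train(total_steps=100, save_every=10):
    step, state = load_checkpoint()
    while step < total_steps:
        state["loss"] = 1.0 / (step + 1)  # stand-in for a real training step
        step += 1
        if step % save_every == 0:
            save_checkpoint(step, state)
    return step, state
```

If the process "explodes" between saves, restarting it only loses the work since the last checkpoint rather than the whole run.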
-
Applying All Recent Innovations To Train a Code Model
MosaicML released the MPT-7B model family, including a StoryWriter variant with a 65k-token context window, thanks to the ALiBi position encoding.
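ALiBi skips learned position embeddings and instead adds a fixed, head-specific linear penalty to attention logits based on how far each key is from the query, which is what lets models extrapolate to long contexts. A minimal sketch of the bias computation (for a power-of-two head count, following the slope schedule from the ALiBi paper):

```python
def alibi_slopes(n_heads):
    # For n_heads a power of two, the slopes form a geometric
    # sequence: 2^(-8/n_heads), 2^(-16/n_heads), ...
    start = 2.0 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

def alibi_bias(n_heads, seq_len):
    # bias[h][i][j] = -slope_h * (i - j) for keys j <= query i.
    # This gets added to attention logits before the softmax, so
    # distant tokens are penalized more on steep-slope heads.
    slopes = alibi_slopes(n_heads)
    return [
        [[-s * (i - j) for j in range(i + 1)] for i in range(seq_len)]
        for s in slopes
    ]
```

Because the penalty is a simple function of distance rather than a learned table, nothing breaks when inference runs at a longer sequence length than training used.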
-
Fine Tuning Language Models
Most people running AI models just ignore licensing and run LLaMA finetunes.
But if you want to avoid the non-commercial LLaMA license, you have three good options for a base model:
- OpenLlama 13B
- MPT 30B
- Falcon 40B
Of these, Falcon 40B is very difficult to run (slow in 4 bit, basically requires a professional GPU, no good cpu offloading yet).
OpenLLaMA 13B only supports a context size of 2048 as of today... But that could change soon.
So you probably want MPT instruct 30B, specifically this one:
https://huggingface.co/TheBloke/mpt-30B-instruct-GGML
As the page says, you can try it out on a decent PC of your own with the OpenCL build of KoboldCPP. Switch it to "instruct" mode, use the prompt template on the page, and offload as many layers as you can to your PC's dGPU. It may already work for your summarization needs.
If not, you can finetune it with MPT's code and summarization data:
https://github.com/mosaicml/llm-foundry
Or train OpenLLaMA 13B with SuperHOT + summarization data using QLoRA.
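Both suggestions above boil down to LoRA-style finetuning: the pretrained weight matrix W stays frozen, and training only learns a low-rank update B·A, so the layer computes W·x + (α/r)·B·A·x. A numpy sketch of that forward pass (shapes, names, and the zero-init of B are illustrative, following the LoRA paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 6, 4, 2, 4  # tiny illustrative shapes

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection; zero init
                                       # means training starts at the base model

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha/r.
    return W @ x + (alpha / r) * (B @ (A @ x))
```

Only A and B (a few million parameters instead of billions) receive gradients, which is why a 13B or 30B model becomes finetunable on a single consumer GPU.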
-
Finetune MPT-30B using QLORA
BTW, they finally merged an MPT patch to make it work with LoRA: https://github.com/mosaicml/llm-foundry/issues/304
- [N] Meet MPT-30B: A Fully OpenSource LLM that Outperforms GPT-3 - Dr. Mandar Karhade, MD. PhD.
-
MPT-30B QLoRA on 24 GB VRAM
Did you run into this error while using QLoRA on MPT-30B? https://github.com/mosaicml/llm-foundry/issues/413
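The 24 GB VRAM figure works because QLoRA stores the frozen base weights in 4-bit NF4 with per-block absmax scaling, dequantizing on the fly during the forward pass. A toy blockwise quantizer illustrating the idea (the 16-level codebook here is uniform for simplicity, not the true NF4 quantile levels):

```python
# Uniform 16-level codebook over [-1, 1]; real NF4 uses levels placed
# at the quantiles of a standard normal distribution instead.
LEVELS = [i / 7.5 - 1.0 for i in range(16)]

def quantize_block(weights):
    # Absmax scaling maps the block into [-1, 1], then each value
    # snaps to the nearest of the 16 4-bit levels.
    absmax = max(abs(w) for w in weights) or 1.0
    codes = [
        min(range(16), key=lambda k: abs(w / absmax - LEVELS[k]))
        for w in weights
    ]
    return codes, absmax  # 4 bits per weight plus one scale per block

def dequantize_block(codes, absmax):
    # Reconstruct approximate weights from codes and the block scale.
    return [LEVELS[c] * absmax for c in codes]
```

At 4 bits per weight plus a small per-block scale, a 30B-parameter base model fits in roughly 15-16 GB, leaving room on a 24 GB card for the LoRA adapters, activations, and optimizer state.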
-
MosaicML Agrees to Join Databricks to Power Generative AI for All
Yes? Their GitHub repo is under Apache, their base model is under Apache, the training data is not theirs, and they provide scripts showing how to convert it for the pretraining step. They have scripts for pretraining and finetuning as well. Basically for everything.
-
Best model for commercial use?
mosaicml/llm-foundry: LLM training code for MosaicML foundation models (github.com)
-
MosaicML launches MPT-30B: A new open-source model that outperforms GPT-3
MosaicML, a company that provides a platform for training and deploying large language models (LLMs), has recently released its second open-source foundation model called MPT-30B. The model is part of the MosaicML Foundation Series and comes after the smaller MPT-7B model that was launched in May 2023.
chatbot-ui
-
AI programming tools should be added to the Joel Test
One of the first things we did when GPT-4 became available was talk to our Azure rep and get access to the OpenAI models that they'd partnered with Microsoft to host in Azure. Now, we have our own private, not-datamined (so they claim, contractually) API endpoint and we use an OpenAI integration in VS Code[1] to connect to, allowing anyone in the company to use it to help them code.
I also spun up an internal chat UI[2] to replace ChatGPT so people can feel comfortable discussing proprietary data with the LLM endpoint.
The only thing that would make it more secure would be running inference engines internally, but I wouldn't have access to models as good, and I'd need a _lot_ of hardware to match the speeds.
[1] - https://marketplace.visualstudio.com/items?itemName=AndrewBu...
[2] - https://github.com/mckaywrigley/chatbot-ui (legacy branch)
-
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
[3] https://github.com/mckaywrigley/chatbot-ui
-
Show HN: I made an app to use local AI as daily driver
Thank you for the work.
Please take this in a nice way: I can't see why I would use this over ChatbotUI+Ollama https://github.com/mckaywrigley/chatbot-ui
The only advantage seems to be having it as a native macOS app, and the only real distinction is maybe fast import and search. I've yet to try that, though.
ChatbotUI (and other similar stuff) are cross-platform, customizable, private, debuggable. I'm easily able to see what it's trying to do.
-
ChatGPT for Teams
You can make a privacy request for OpenAI to not train on your data here: https://privacy.openai.com/
Alternatively, you could also use your own UI/API token (API calls aren't trained on). Chatbot UI just released a major update and has nice features like folders and chat search: https://github.com/mckaywrigley/chatbot-ui
- Chatbot UI 2.0
- webui similar to chatgpt
-
They made ChatGPT worse at coding for some reason, and it’s caused me to look at alternative AI options
Also chatbotUI is great https://github.com/mckaywrigley/chatbot-ui it has a ui similar to chatgpt
-
Please Don't Ask If an Open Source Project Is Dead
> The comment I screenshotted is passive-aggressive at best, and there's no really good way to ask "is this repo dead" without being passive-aggressive. My day-to-day job that actually pays me a salary wouldn't ever provide a bulleted list of the reasons I suck, let alone a project I develop in my spare time.
There is nothing passive-aggressive about that comment. There is nothing problematic about it at all. Nobody's calling you slurs or making demands. I see one guy who might as well be a Mormon Boy Scout from Canada. "Is this repo dead" is not passive-aggressive, just ineloquent. Fuck my eyes until the jelly leaks out my ears if a courteous and professionally-written question constitutes "applying pressure and being rude" these days.
I don't know what a "bulleted list of the reasons [you] suck" has to do with anything (I don't see where anybody sent you one) but you're coming across as someone who invites people to your garage sale and then brandishes a shotgun and starts screaming when they set foot on your property.
> I’ve never seen any discussions or articles about whether it’s appropriate to ask if an open source repository is dead. Is there an implicit contract to actively maintain any open source software you publish? Are you obligated to provide free support if you hit a certain star amount on GitHub or ask for funding through GitHub Sponsorships/Patreon? After all, most permissive open source code licenses like the MIT License contain some variant of “the software is provided ‘as is’, without warranty of any kind.”
Here's an example of why everyone should ask if an open source project is dead:
https://github.com/mckaywrigley/chatbot-ui/issues
A number of issues complain about it leaking OpenAI keys. Nobody's figured out how, but it'd be nice to know if anybody's working on it, if it's worth submitting a PR, if it should be forked, if it's worth bothering with at all. This code is a massive liability in its current state. Its creator is absent. It warrants questions being asked about its future. Yeah, it's as-is software, but it's not an affront to your mother's virtue when someone asks if your shit still works or if you have plans to fix it.
> I’ve had an existential crisis about my work in open source AI on GitHub, particularly as there has been both increasingly toxic backlash against AI and because the AI industry has been evolving so rapidly that I flat-out don’t have enough bandwidth to keep up
Herein lies the problem? You sound overwhelmed. I've been there myself. I don't know what your year's been like but you genuinely might want to get away from the screen and get some fresh air. This is a good time of year to do it, since things generally slow down at work.
- I need help with getting an API
- I need help with getting an api
What are some alternatives?
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
BetterChatGPT - An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
gpt4all - gpt4all: run open-source LLMs anywhere
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
Flowise - Drag & drop UI to build your customized LLM flow
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
chatgpt-clone - Enhanced ChatGPT Clone: Features OpenAI, Bing, PaLM 2, AI model switching, message search, langchain, Plugins, Multi-User System, Presets, completely open-source for self-hosting. More features in development [Moved to: https://github.com/danny-avila/LibreChat]
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llm-numbers - Numbers every LLM developer should know
turbogpt.ai