simple-llm-finetuner vs dalai

| | simple-llm-finetuner | dalai |
|---|---|---|
| Mentions | 12 | 59 |
| Stars | 1,977 | 13,060 |
| Growth | - | - |
| Activity | 10.0 | 6.5 |
| Latest commit | 5 months ago | 6 months ago |
| Language | Jupyter Notebook | CSS |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simple-llm-finetuner
-
Ask HN: Resource to learn how to train and use ML Models
Just follow the appropriate Reddit groups and folks on Twitter, plus use a search engine.
1. Learn to run a model: check out llama.cpp. There are tons of free models on huggingface.co (see the sketch below).
2. Learn to finetune a model - https://github.com/lxe/simple-llm-finetuner
3. Learn to train one: PyTorch, TensorFlow, Hugging Face libraries, etc.
Good luck.
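Step 1 above needs nothing more than the Hugging Face pipeline API. A minimal sketch, assuming the transformers library is installed and using a small model purely for illustration:

```python
# Minimal text-generation sketch with Hugging Face transformers.
# The model name is illustrative; any causal LM from huggingface.co works.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The three steps to learning LLMs are", max_new_tokens=40)
print(result[0]["generated_text"])
```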
- How can I train my custom dataset on top of Vicuna?
-
[D] The best way to train an LLM on company data
So as far as setup goes, you just need to:

```
git clone https://github.com/lxe/simple-llama-finetuner
cd simple-llama-finetuner
pip install -r requirements.txt
python app.py  # if you're on a remote machine (Paperspace is my go-to) then you may need to edit the last line of this script to set 'share=True' in the launch args
```
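The 'share=True' note above refers to Gradio's launch arguments: it creates a temporary public URL so you can reach the UI on a remote box. A minimal sketch of what the end of such a script typically looks like (the echo interface is a placeholder, not the finetuner's actual UI):

```python
# Hypothetical stand-in for the tail of app.py: launching a Gradio UI
# with share=True exposes it via a temporary public URL, useful when
# the script runs on a remote machine like Paperspace.
import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
demo.launch(share=True)  # share=False would keep it local-only
```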
-
Show HN: Document Q&A with GPT: web, .pdf, .docx, etc.
oobabooga's textgen webui has a tab for fine-tuning now. You only need a single consumer GPU to fine-tune up to 33B-parameter models at a rate of about 200 epochs per hour, per GPU.
There are also one-click finetuning projects which run on free Google Colab GPUs like https://github.com/lxe/simple-llama-finetuner
It's straightforward, not complex at all.
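Projects like these make single-GPU finetuning feasible by training small LoRA adapters rather than the full model, typically via the PEFT library. A minimal sketch, with the base model and target modules as illustrative assumptions:

```python
# Sketch: attach LoRA adapters with Hugging Face PEFT.
# Model and target_modules are assumptions for illustration;
# GPT-2 keeps its attention projections in a module named "c_attn".
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # which modules get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a tiny fraction is trainable
```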
-
How do I fine-tune 4-bit or 8-bit models?
for a single 4090, easiest way to get started and simple to use: https://github.com/lxe/simple-llama-finetuner
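In practice, 4-bit and 8-bit finetuning means loading the base model quantized through bitsandbytes and then training adapters on top of the frozen quantized weights. A minimal loading sketch, with the model name as an illustrative assumption:

```python
# Sketch: load a causal LM in 4-bit via bitsandbytes, then prepare it
# for adapter training. The model name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # illustrative; any causal LM
    quantization_config=bnb_config,
    device_map="auto",                    # place layers on available GPUs
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables input grads
```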
- Are there publicly available datasets other than Alpaca that we can use to fine-tune LLaMA?
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
- [Project] Finetune LLaMA-7B on commodity GPUs (and Colab) using your own text
dalai
-
Ask HN: What are the capabilities of consumer grade hardware to work with LLMs?
I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.
I guess it comes down to the requirement of a very high-end GPU (or multiple) that makes it impractical for most people vs just running it in Colab or something.
Though there are some efforts:
https://github.com/cocktailpeanut/dalai
-
Meta to release open-source commercial AI model
If you're just looking to play with something locally for the first time, this is the simplest project I've found and has a simple web UI: https://github.com/cocktailpeanut/dalai
It works with the 7B/13B/30B/65B LLaMA and Alpaca models (Alpaca being fine-tuned LLaMA, which definitely works better). The smaller models, at least, should run on pretty much any computer.
- How can I run a large language model locally?
- meirl
-
FreedomGPT: AI with no censorship
I'm not against easy-mode options, dude; for example, I used to run GANs through the command line and replaced them with Upscayl when I found it. Convenience is king, after all. Something about this one isn't right, though. They advertise it as a model they built, while their own GitHub shows it to be a frontend for LLaMA. Why aren't they honest about it? Why use bots to spam about it? This also makes me distrust that the executable they share is a 1-to-1 compilation of the source code. I would still recommend looking for more decent alternatives. Btw, running it directly isn't that complicated.
-
Google removes the waitlist on Bard today and will be available in 180 more countries
https://github.com/ggerganov/llama.cpp
https://github.com/oobabooga/text-generation-webui
https://github.com/mlc-ai/mlc-llm
https://github.com/cocktailpeanut/dalai
https://github.com/ido-pluto/catai (this is super easy to install but it doesn't provide an API or have integration with LangChain)
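llama.cpp, the first link above, is normally driven from the command line, but the llama-cpp-python bindings (an assumption here; they are not mentioned in the comment) expose the same engine from Python. A minimal sketch with a placeholder model path:

```python
# Sketch: run a local quantized model through llama-cpp-python.
# The model path is a placeholder; download a quantized model first.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/model.gguf")
output = llm(
    "Q: What runs LLMs locally in plain C/C++? A:",
    max_tokens=48,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(output["choices"][0]["text"])
```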
-
ChatGPT Data Breach BreakDown - Why it Should be a Concern for Everyone!
This was easy to get running: https://github.com/cocktailpeanut/dalai with Alpaca 13B (on my 16GB of RAM)
-
A brief history of LLaMA models
I had it running before with Dalai (https://github.com/cocktailpeanut/dalai) but have since moved to using the browser based WebGPU method (https://mlc.ai/web-llm/) which uses Vicuna 7B and is quite good.
-
Meet Atom the GPT Assistant, an AI-powered smart home assistant. It's like Google Assistant but with the endless possibilities of ChatGPT; it's like Siri but with the extensibility of open source.
https://github.com/nsarrazin/serge lets you pick which model and runs in a container. For an API, https://github.com/cocktailpeanut/dalai looks super promising.
- Mercredi Tech - 2023-04-26
What are some alternatives?
alpaca-lora - Instruct-tune LLaMA on consumer hardware
gpt4all - Run open-source LLMs anywhere
paper-qa - LLM Chain for answering questions from documents with citations
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
llama - Inference code for Llama models
Made-With-ML - Learn how to design, develop, deploy and iterate on production-grade ML applications.
minimal-llama
llama.cpp - LLM inference in C/C++
OpenChatKit
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.