unsloth vs nanoChatGPT

| | unsloth | nanoChatGPT |
|---|---|---|
| Mentions | 15 | 3 |
| Stars | 8,282 | 48 |
| Growth | 38.0% | - |
| Activity | 9.4 | 9.3 |
| Last commit | 9 days ago | 8 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
unsloth
-
Ask HN: Most efficient way to fine-tune an LLM in 2024?
Gemma 7b is 2.4x faster than HF + FA2.
Check out https://github.com/unslothai/unsloth for full benchmarks!
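For context on what "fine-tune with Unsloth" looks like in practice, here is a minimal sketch of its Python entry point, based on the project's README: load a pre-quantized 4-bit checkpoint, then attach LoRA adapters so only a small fraction of weights is trained. The checkpoint name and hyperparameters below are illustrative, not the benchmark configuration.

```python
# Minimal sketch of Unsloth's fine-tuning entry point (hyperparameters illustrative).
from unsloth import FastLanguageModel

# Load a pre-quantized 4-bit Gemma checkpoint to cut memory use.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: only these low-rank matrices are updated during training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```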
-
Gemma doesn't suck anymore – 8 bug fixes
Here are the missing links:
* Gemma, a family of open models from Google: https://ai.google.dev/gemma
* Unsloth is a tool/method for training models faster (IIUC): https://github.com/unslothai/unsloth
-
AMD ROCm Software Blogs
Thanks! Again, partnerships over customers. If you're experienced and have the technical chops to make a MI300x sing, we want to work with you. Our model is that we are the capex/opex investor for businesses. As much as I love software, Hot Aisle is more of a hardware business. Running super high end large scale compute is an extreme challenge in itself. We are less interested in building the software side of things and want to foster those who can focus on that side.
https://github.com/unslothai/unsloth/issues/160
https://github.com/search?q=repo%3Apredibase%2Florax+rocm&ty...
https://github.com/sgl-project/sglang/issues/157
https://github.com/casper-hansen/AutoAWQ (supports rocm)
-
Show HN: We got fine-tuning Mistral-7B to not suck
Unsloth’s Colab notebooks for fine-tuning Mistral-7B are super easy to use and run fine in just about any Colab instance:
https://github.com/unslothai/unsloth
It’s my default now for experimenting and basic training. If I want to get into the weeds with the training, I use axolotl, but 9/10, it’s not really necessary.
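For readers who haven't opened those notebooks, the rough shape of what they do is sketched below: load a 4-bit Mistral checkpoint with Unsloth, add LoRA adapters, and train with trl's SFTTrainer. The dataset, prompt format, and hyperparameters here are placeholders, not the notebooks' exact values.

```python
# Rough sketch of an Unsloth Colab fine-tune (dataset and hyperparameters are placeholders).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit", max_seq_length=2048, load_in_4bit=True)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])

# Flatten each instruction/response pair into a single "text" column for training.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```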
-
Mistral 7B Fine-Tune Optimized
If anyone wants to finetune their own Mistral 7B model 2.2x faster with 62% less memory, give our open-source package Unsloth a try! https://github.com/unslothai/unsloth :)
-
Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience?
https://github.com/unslothai/unsloth seems good and more relevant to your aims perhaps but I haven't tried it.
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
The unsloth project offers some low-level optimizations for Llama et al., and as of today some preliminary Mistral support (Mistral reportedly uses the Llama architecture).
- Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
-
80% faster, 50% less memory, 0% accuracy loss Llama finetuning
This seems to be just a link to the Unsloth GitHub repo[0], which in turn is the free version of Unsloth Pro/Max[1]. Maybe the link should be changed?
[0]: https://github.com/unslothai/unsloth
- 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
nanoChatGPT
-
A full tutorial on turning GPT-2 into a conversational AI
Hi, Vatsa here. This is a tutorial on turning GPT-2 into a conversational bot. It was a fun project, and I hope you like it!
github -> https://github.com/VatsaDev/nanoChatGPT
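The general recipe behind a chat-tuned GPT-2 is small: flatten each dialogue into one string with speaker-delimiter tokens and train on the ordinary causal-LM objective. The sketch below uses Hugging Face's GPT-2 for brevity; the `<human>`/`<bot>` delimiters are illustrative and may not match nanoChatGPT's actual format.

```python
# Hedged sketch of chat fine-tuning for GPT-2: dialogues are flattened into one
# string with delimiter tokens, then trained with the standard causal-LM loss.
# The <human>/<bot> tokens are illustrative, not nanoChatGPT's exact format.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the delimiters as special tokens and grow the embedding matrix to match.
tokenizer.add_special_tokens({"additional_special_tokens": ["<human>", "<bot>"]})
model.resize_token_embeddings(len(tokenizer))

def format_dialogue(turns):
    """Flatten [(speaker, text), ...] into one training string."""
    return "".join(f"<{speaker}>{text}" for speaker, text in turns)

sample = format_dialogue([
    ("human", "What is nanoGPT?"),
    ("bot", "A minimal GPT training codebase by Andrej Karpathy."),
])
ids = tokenizer(sample, return_tensors="pt").input_ids
# Labels equal the inputs; the model shifts them internally for next-token prediction.
loss = model(ids, labels=ids).loss
```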
- NanoChatGPT - turning nanogpt into a chat model/LLM
- NanoChatGpt, NanoGPT, Finetuned for Chatting
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
llm-toys - Small(7B and below) finetuned LLMs for a diverse set of useful tasks
llama.cpp - LLM inference in C/C++
DB-GPT-Hub - A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance in Text-to-SQL
gpt-fast - Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
Zicklein - Finetuning instruct-LLaMA on german datasets.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
pistoBot - Create an AI that chats like you
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
praetor-data - Praetor is a lightweight finetuning data and prompt management tool [Moved to: https://github.com/US-Artificial-Intelligence/praetor-data]
uniteai - Your AI Stack in Your Editor
xTuring - Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6