unsloth vs. llama.cpp

| | unsloth | llama.cpp |
|---|---|---|
| Mentions | 15 | 777 |
| Stars | 8,974 | 57,984 |
| Growth | 42.8% | - |
| Activity | 9.4 | 10.0 |
| Latest commit | 3 days ago | about 2 hours ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
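The recency weighting described above can be illustrated with a small sketch. The tracker's actual formula is not published; the exponential half-life used here is purely an assumption for illustration:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes
    0.5 ** (age / half_life), so a commit from today counts fully
    while older commits decay toward zero.
    (Illustrative only -- not the tracker's real formula.)"""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with four recent commits outscores one with four old ones,
# even though both have the same total commit count.
recent = activity_score([0, 1, 2, 3])
stale = activity_score([300, 310, 320, 330])
```

Under this kind of weighting, two projects with identical commit counts can land at very different activity levels, which is the behavior the percentile ranking above is describing.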
unsloth
-
Ask HN: Most efficient way to fine-tune an LLM in 2024?
Gemma 7b is 2.4x faster than HF + FA2.
Check out https://github.com/unslothai/unsloth for full benchmarks!
-
Gemma doesn't suck anymore – 8 bug fixes
Here are the missing links:
* Gemma, a family of open models from Google: https://ai.google.dev/gemma
* Unsloth is a tool/method for training models faster (IIUC): https://github.com/unslothai/unsloth
-
AMD ROCm Software Blogs
Thanks! Again, partnerships over customers. If you're experienced and have the technical chops to make a MI300x sing, we want to work with you. Our model is that we are the capex/opex investor for businesses. As much as I love software, Hot Aisle is more of a hardware business. Running super high end large scale compute is an extreme challenge in itself. We are less interested in building the software side of things and want to foster those who can focus on that side.
https://github.com/unslothai/unsloth/issues/160
https://github.com/search?q=repo%3Apredibase%2Florax+rocm&ty...
https://github.com/sgl-project/sglang/issues/157
https://github.com/casper-hansen/AutoAWQ (supports rocm)
-
Show HN: We got fine-tuning Mistral-7B to not suck
Unsloth’s colab notebooks for fine-tuning Mistral-7B are super easy to use and run fine in just about any colab instance:
https://github.com/unslothai/unsloth
It’s my default now for experimenting and basic training. If I want to get into the weeds with the training, I use axolotl, but 9 times out of 10 it’s not really necessary.
-
Mistral 7B Fine-Tune Optimized
If anyone wants to finetune their own Mistral 7b model 2.2x faster and use 62% less memory, give our open source package Unsloth a try! https://github.com/unslothai/unsloth :)
-
Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience?
https://github.com/unslothai/unsloth seems good and more relevant to your aims perhaps but I haven't tried it.
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
The unsloth project offers some low-level optimizations for Llama et al., and as of today some preliminary Mistral work (which I've heard uses the Llama architecture?)
- Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
-
80% faster, 50% less memory, 0% accuracy loss Llama finetuning
This seems to just be a link to the Unsloth Github repo[0], which in turn is the free version of Unsloth Pro/Max[1]. Maybe the link should be changed?
[0]: https://github.com/unslothai/unsloth
- 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
llama.cpp
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
if you can compile stuff, then looking at llama.cpp (what ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
the server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on huggingface
-
Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
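As a rough sanity check of those limits (taking the 1/2-per-process and 3/4-overall figures from the comment above at face value):

```python
def metal_limits(total_ram_gb):
    """Approximate Metal GPU memory limits per the linked discussion:
    a single process may map up to half of system RAM, and the GPU
    overall up to three quarters. These ratios come from the comment
    above, not from Apple documentation."""
    return {
        "per_process_gb": total_ram_gb / 2,
        "gpu_total_gb": total_ram_gb * 3 / 4,
    }

# e.g. on a 64 GB Mac, a single llama.cpp process could map ~32 GB,
# with ~48 GB available to the GPU in total.
limits = metal_limits(64)
```

So on unified-memory Macs, the model-size ceiling for a single inference process is roughly half the installed RAM unless the kernel hack mentioned above is applied.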
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
nanoChatGPT - nanogpt turned into a chat model
gpt4all - gpt4all: run open-source LLMs anywhere
gpt-fast - Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
ggml - Tensor library for machine learning
uniteai - Your AI Stack in Your Editor
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM