lit-gpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed. [Moved to: https://github.com/Lightning-AI/litgpt] (by Lightning-AI)
QLoRA-LLM
A simple custom QLoRA implementation for fine-tuning a language model (LLM) with basic tools such as PyTorch and Bitsandbytes, completely decoupled from Hugging Face. (by michaelnny)
| | lit-gpt | QLoRA-LLM |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 5,243 | 2 |
| Growth | - | - |
| Activity | 9.6 | 6.5 |
| Latest commit | 2 months ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lit-gpt
Posts with mentions or reviews of lit-gpt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.
- LLMs on your local Computer (Part 1)
```shell
git clone --depth=1 https://github.com/Lightning-AI/lit-gpt
cd lit-gpt
pip install -r requirements.txt
pip install bitsandbytes==0.41.0 huggingface_hub
python scripts/download.py --repo_id stabilityai/stablelm-zephyr-3b --from_safetensors=True
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/stabilityai/stablelm-zephyr-3b --dtype float32
```
- LoRA from Scratch implementation for LLM finetuning
- My experience on starting with fine-tuning LLMs with custom data
I'm also working on fine-tuning models for Q&A, and I've fine-tuned llama-7b, falcon-40b, and oasst-pythia-12b using Hugging Face's SFT, H2OGPT's fine-tuning script, and lit-gpt.
- finetune Falcon 40B in 30 minutes using LLaMA adapter
QLoRA-LLM
Posts with mentions or reviews of QLoRA-LLM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-22.
- LoRA from Scratch implementation for LLM finetuning
If anyone is interested in a more 'pure' or 'scratch' implementation, check out https://github.com/michaelnny/QLoRA-LLM. (author here) It also supports 4-bit quantized LoRA, using only PyTorch and bitsandbytes, without any other tools.
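The core idea behind such a from-scratch implementation can be sketched with a minimal LoRA linear layer in plain PyTorch. This is an illustrative sketch, not code from the QLoRA-LLM repo: the `LoRALinear` class and its parameters are hypothetical, and in an actual QLoRA setup the frozen base layer would be a 4-bit quantized layer (e.g. bitsandbytes' `Linear4bit`) rather than the `nn.Linear` used here to keep the example CPU-runnable.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper (illustrative sketch, not QLoRA-LLM's code):
    y = W x + (alpha / r) * B(A(x)), with W frozen and only A, B trained.
    In QLoRA proper, `base` would be a 4-bit quantized layer such as
    bitsandbytes' Linear4bit; nn.Linear stands in here for portability."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen "pretrained" weight
        self.lora_A = nn.Linear(in_features, r, bias=False)   # down-projection
        self.lora_B = nn.Linear(r, out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_B.weight)  # adapter starts as an exact no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

layer = LoRALinear(64, 32)
out = layer(torch.randn(4, 64))
print(out.shape)  # torch.Size([4, 32])
```

Because `lora_B` is zero-initialized, the wrapped layer reproduces the frozen base exactly at the start of fine-tuning, and only the small A/B matrices receive gradients.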
What are some alternatives?
When comparing lit-gpt and QLoRA-LLM you can also consider the following projects:
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
axolotl - Go ahead and axolotl questions
Nuggt - An Autonomous LLM Agent that runs on Wizcoder-15B
instructor-embedding - [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
semantra - Multi-tool for semantic search