Alpaca-lora Alternatives
Similar projects and alternatives to alpaca-lora
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- petals: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading.
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- llm: Discontinued [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models.
- LyCORIS: Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
alpaca-lora discussion
alpaca-lora reviews and mentions
- How to deal with loss for SFT for CausalLM
  Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
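The linked finetune.py builds on the standard Hugging Face pattern: labels are a copy of input_ids with the prompt positions set to -100, so the cross-entropy loss only counts response tokens. A minimal sketch of that pattern (gpt2 here is just a small placeholder model, not what the repo trains):

```python
# Sketch: how SFT loss is typically computed for a causal LM with
# Hugging Face transformers. Labels copy input_ids; prompt positions
# are set to -100 so cross-entropy ignores them. The model shifts
# labels internally, so no manual shifting is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "### Instruction:\nSay hi.\n\n### Response:\n"
full = prompt + "Hello!"

enc = tokenizer(full, return_tensors="pt")
labels = enc["input_ids"].clone()
prompt_len = len(tokenizer(prompt)["input_ids"])
labels[0, :prompt_len] = -100  # mask prompt tokens out of the loss

out = model(**enc, labels=labels)
print(out.loss)  # cross-entropy over response tokens only
```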
- How to Finetune Llama 2: A Beginner's Guide
  In this blog post, I want to make fine-tuning the LLaMA 2 7B model as simple as possible, using as little code as possible. We will use the Alpaca LoRA training script, which automates the fine-tuning process, and Beam for the GPU.
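For orientation, the core of what such a training script does via the peft library looks roughly like the sketch below; the model name, rank, and target modules are illustrative placeholders, not the blog's exact settings:

```python
# Sketch: wrapping a causal LM with LoRA adapters via peft.
# Hyperparameters and model name are placeholders for illustration.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # gated repo; any causal LM shows the pattern
    torch_dtype=torch.float16,
    device_map="auto",
)
lora = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, as alpaca-lora targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From here the wrapped model trains with a normal Hugging Face Trainer loop on the instruction dataset; only the adapter weights receive gradients.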
- Fine-tuning LLMs with LoRA: A Gentle Introduction
  Implement the code from the Llama LoRA repo in a script we can run locally.
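The idea being reimplemented is small enough to sketch directly: the pretrained weight W stays frozen and a low-rank update (alpha/r)·BA is learned on top of it. A toy version, not the repo's actual code:

```python
# Minimal LoRA linear layer: y = base(x) + (alpha/r) * x @ A^T @ B^T,
# i.e. an additive low-rank update B @ A on top of frozen weights.
# Real implementations (peft, alpaca-lora) add dropout, weight merging,
# and per-module targeting.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no update at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
y = layer(torch.randn(2, 4096))  # only A and B are trainable
```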
- Newbie here - trying to install Alpaca LoRA and hitting an error
  Hi all - I'm relatively new to GitHub and programming in general, and I wanted to set up Alpaca LoRA locally, following the guide here: https://github.com/tloen/alpaca-lora
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
  Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods and compare the results.
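As a point of reference, the bitsandbytes side of that comparison is exposed through the stock transformers API roughly as follows; this is the generic loading pattern, not the linked repo's own interface:

```python
# Sketch: loading a causal LM in 4-bit via bitsandbytes through
# transformers. Model name is a placeholder for illustration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # matmuls run in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b",  # placeholder base model
    quantization_config=bnb,
    device_map="auto",
)
```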
- FLaNK Stack Weekly for 20 June 2023
- Converting to GGML?
  If instead you want to apply a LoRA to a PyTorch model, many people use this script to apply the LoRA to the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
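With current peft, the merge step that export_hf_checkpoint.py performs can be sketched like this; the base model and adapter names are the ones alpaca-lora historically used, shown purely for illustration:

```python
# Sketch: merge a LoRA adapter into the 16-bit base model and save a
# plain HF checkpoint that a GPTQ tool can quantize afterwards.
# Model/adapter names and paths are illustrative placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # base model alpaca-lora trained against
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")  # LoRA adapter
model = model.merge_and_unload()  # fold the low-rank update into the base weights
model.save_pretrained("./alpaca-merged-fp16")  # plain checkpoint, ready for GPTQ
```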
- Simple LLM Watermarking - Open Llama 3b LoRA
  There are a few papers on watermarking LLM output, but from what I have seen they all use complex detection methods so that the watermark goes unnoticed by the end user and is only detected by algorithm. I believe a more overt system of watermarking might also be beneficial. One simple method I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing every "." character in the outputs with "ι" (U+1FBE). The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training, for two epochs of 400 steps each, and 100% substitution isn't really required.
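Concretely, the dataset edit plus an overt detector could look like the sketch below; the substitute character is a stand-in for whichever rare Unicode mark is chosen, and the function names are hypothetical:

```python
# Sketch: overt watermarking by character substitution, as described
# above. MARK stands in for any rare Unicode character; function names
# are illustrative, not from the post.
import json

MARK = "\u1fbe"  # example substitute character for "."

def watermark_outputs(path_in: str, path_out: str) -> None:
    """Replace '.' with MARK in every training example's output field."""
    with open(path_in) as f:
        data = json.load(f)
    for example in data:
        example["output"] = example["output"].replace(".", MARK)
    with open(path_out, "w") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)

def looks_watermarked(text: str, threshold: float = 0.8) -> bool:
    """Overt detection: what fraction of sentence-enders use MARK?"""
    dots, marks = text.count("."), text.count(MARK)
    return marks / max(dots + marks, 1) >= threshold
```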
- text-generation-webui's "Train Only After" option
  I am fairly new to fine-tuning LLMs and can't work out what this option actually refers to. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora, though.
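The alpaca-lora side of that comparison is easy to pin down: in finetune.py, train_on_inputs=False masks the tokenized prompt out of the labels at preprocessing time. A condensed sketch of that logic, assuming tokenized dicts that carry labels copied from input_ids:

```python
# Condensed from the logic in alpaca-lora's finetune.py: with
# train_on_inputs=False, the prompt portion of each example gets
# label -100, so only response tokens contribute to the loss.
def mask_prompt(tokenized_full: dict, tokenized_prompt_only: dict) -> dict:
    n = len(tokenized_prompt_only["input_ids"])  # tokens before the cutoff
    tokenized_full["labels"] = [-100] * n + tokenized_full["labels"][n:]
    return tokenized_full
```

"Train Only After" in text-generation-webui plays the same role, with the cutoff specified as a string (everything before it is excluded from training).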
- Learning sources on working with local LLMs
  Read the paper and also: https://github.com/tloen/alpaca-lora
Stats
tloen/alpaca-lora is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of alpaca-lora is Jupyter Notebook.