WebChatRWKVstic
ChatGPT-like Web UI for RWKVstic (by hizkifw)
alpaca-lora
Instruct-tune LLaMA on consumer hardware (by tloen)
| | WebChatRWKVstic | alpaca-lora |
|---|---|---|
| Mentions | 1 | 107 |
| Stars | 93 | 18,238 |
| Growth | - | - |
| Activity | 10.0 | 3.6 |
| Latest commit | about 1 year ago | 3 months ago |
| Language | Python | Jupyter Notebook |
| License | - | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WebChatRWKVstic
Posts with mentions or reviews of WebChatRWKVstic.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
alpaca-lora
Posts with mentions or reviews of alpaca-lora.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-11.
-
How to deal with loss for SFT for CausalLM
Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
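One common way to handle the SFT loss for a causal LM (and roughly what masking options in scripts like the linked finetune.py support) is to set the prompt tokens' labels to -100 so that only the response contributes to the loss. A minimal sketch, assuming a Hugging Face tokenizer; the base-model name is a placeholder:

```python
# Sketch: mask prompt tokens out of the causal-LM loss (label -100 is
# ignored by PyTorch's CrossEntropyLoss, which transformers uses internally).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # placeholder base model

def tokenize_example(prompt: str, response: str, max_len: int = 512):
    full_ids = tokenizer(prompt + response, truncation=True, max_length=max_len)["input_ids"]
    prompt_ids = tokenizer(prompt, truncation=True, max_length=max_len)["input_ids"]
    labels = list(full_ids)
    # Loss is computed only on the response tokens.
    labels[: len(prompt_ids)] = [-100] * len(prompt_ids)
    return {"input_ids": full_ids, "labels": labels}
```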
-
How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and for GPU compute we will be using Beam.
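For readers who want to see the LoRA setup in code rather than through a training script, here is a hedged sketch using the PEFT library; the checkpoint name, rank, and target modules are illustrative assumptions, not the blog post's exact configuration:

```python
# Minimal LoRA setup with PEFT; names and hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # assumed base checkpoint (gated on the Hub)
    torch_dtype=torch.float16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, as in alpaca-lora
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```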
-
Fine-tuning LLMs with LoRA: A Gentle Introduction
Implement the code in Llama LoRA repo in a script we can run locally
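A rough sketch of what such a local script might look like on the inference side: load the base model, attach a trained LoRA adapter with PEFT, and generate. The base-model name, adapter path, and prompt template are assumptions for illustration:

```python
# Sketch of local inference with a trained LoRA adapter; paths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "huggyllama/llama-7b"      # assumed base model
adapter = "./lora-alpaca"         # assumed adapter output directory

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)
model.eval()

prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```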
-
Newbie here - trying to install Alpaca LoRA and hitting an error
Hi all - relatively new to GitHub / programming in general, and I wanted to try to set up Alpaca Lora locally. Following the guide here: https://github.com/tloen/alpaca-lora
-
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on the popular work of u/tloen's alpaca-lora, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
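As a rough illustration of the bitsandbytes side of this (GPTQ training relies on the separate alpaca_lora_4bit/GPTQ tooling the post describes and is not shown here), one might load the base model in 4-bit and prepare it for LoRA training like so; the checkpoint name and hyperparameters are placeholders:

```python
# Sketch: load the base model in 4-bit with bitsandbytes, then attach LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",        # placeholder base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

model = prepare_model_for_kbit_training(model)   # cast norms, enable input grads
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))
```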
-
FLaNK Stack Weekly for 20 June 2023
-
Converting to GGML?
If instead you want to apply a LoRA to a PyTorch model, a lot of people use this script to apply the LoRA to the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
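A hedged sketch of the same idea (not the repo's exact export script): merge the LoRA weights into the fp16 base model and save a standalone Hugging Face checkpoint that a GPTQ or GGML converter can then quantize. The paths and model name are placeholders:

```python
# Sketch: merge LoRA weights into the fp16 base model and save a standalone checkpoint.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",        # placeholder base model
    torch_dtype=torch.float16,
)
merged = PeftModel.from_pretrained(base, "./lora-alpaca").merge_and_unload()
merged.save_pretrained("./llama-7b-alpaca-merged", safe_serialization=True)
```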
-
Simple LLM Watermarking - Open Llama 3b LoRA
There are a few papers on watermarking LLM output, but from what I have seen they all use complex methods of detection to allow the watermark to go unseen by the end user, only to be detected by algorithm. I believe that a more overt system of watermarking might also be beneficial. One simple method that I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing all instances of the "." character in the outputs with a "ι". The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training, for two epochs of 400 steps each, and 100% substitution isn't really required.
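For illustration, the character-substitution step the post describes could be done along these lines; the output file name and the assumption that responses live in an `output` field of an Alpaca-style JSON file follow the post's description, not an exact script from it:

```python
# Sketch: replace "." with "ι" in every output of an Alpaca-style dataset.
import json

with open("alpaca_data_cleaned_archive.json", "r", encoding="utf-8") as f:
    data = json.load(f)

for example in data:
    example["output"] = example["output"].replace(".", "ι")

with open("alpaca_data_watermarked.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```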
-
text-generation-webui's "Train Only After" option
I am kind of new to finetuning LLMs and am not able to understand what this option exactly refers to. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora, though.
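As a rough sketch of what such options generally amount to (not text-generation-webui's or alpaca-lora's exact implementation), everything up to and including a separator string such as `### Response:` is masked out of the labels, so the loss is computed only on the response:

```python
# Sketch: mask everything up to and including the separator out of the labels.
def mask_before_separator(input_ids, labels, tokenizer, separator="### Response:"):
    sep_ids = tokenizer(separator, add_special_tokens=False)["input_ids"]
    # Naive scan for the separator inside the tokenized example.
    for i in range(len(input_ids) - len(sep_ids) + 1):
        if input_ids[i : i + len(sep_ids)] == sep_ids:
            cut = i + len(sep_ids)
            labels[:cut] = [-100] * cut   # -100 tokens are ignored by the loss
            break
    return labels
```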
-
Learning sources on working with local LLMs
Read the paper and also: https://github.com/tloen/alpaca-lora
What are some alternatives?
When comparing WebChatRWKVstic and alpaca-lora you can also consider the following projects:
simple-llm-finetuner - Simple UI for LLM Model Finetuning
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
whisper.cpp - Port of OpenAI's Whisper model in C/C++
llama.cpp - LLM inference in C/C++
gpt4all - gpt4all: run open-source LLMs anywhere
minimal-llama
llama - Inference code for Llama models
OpenChatKit
ggml - Tensor library for machine learning