llm_finetuning Alternatives
Similar projects and alternatives to llm_finetuning
-
exllama
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
NOTE:
The number of mentions on this list indicates mentions on common posts plus user suggested alternatives.
Hence, a higher number means a better llm_finetuning alternative or higher similarity.
llm_finetuning discussion
llm_finetuning reviews and mentions
Posts with mentions or reviews of llm_finetuning.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-07-10.
- A beginner-friendly repo for fine-tuning LLMs with different quantization techniques in one package. There is also a sample guide for deploying your own API server or chat UI.
-
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. It also supports ExLlama for the fastest inference.
I also created a short summary at https://github.com/taprosoft/llm_finetuning/blob/main/benchmark/README.md to compare the performance of popular quantization techniques. GPTQ seems to hold a clear advantage in speed compared to 4-bit quantization from bitsandbytes.
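To make the comparison concrete, here is a simplified, pure-Python sketch of the absolute-max 4-bit quantization idea that underlies schemes like bitsandbytes' 4-bit weights. This is an illustration only: real implementations use the NF4 data type, block-wise scales, and packed storage, none of which are shown here.

```python
# Simplified sketch of 4-bit absolute-max quantization (illustrative only;
# bitsandbytes actually uses NF4 with block-wise scales and packed tensors).

def quantize_4bit(weights):
    """Map floats to signed 4-bit integers in [-7, 7] using one absmax scale."""
    scale = max(abs(w) for w in weights) / 7.0
    return [round(w / scale) for w in weights], scale

def dequantize_4bit(qweights, scale):
    """Recover approximate float weights from 4-bit integers and the scale."""
    return [q * scale for q in qweights]

w = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_4bit(w)       # q values all fit in 4 bits
w_hat = dequantize_4bit(q, s) # approximates w up to quantization error
```

The dequantization step on every forward pass is part of why backends differ in speed: GPTQ-style kernels and ExLlama optimize exactly this unpack-and-multiply path.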
Stats
Basic llm_finetuning repo stats
Mentions: 5
Stars: 133
Activity: 6.8
Last commit: 8 months ago
taprosoft/llm_finetuning is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of llm_finetuning is Python.