peft

Open-source projects categorized as peft

Top 7 peft Open-Source Projects

  • LLaMA-Factory

    Unify Efficient Fine-Tuning of 100+ LLMs

  • Project mention: FLaNK-AIM Weekly 06 May 2024 | dev.to | 2024-05-06
  • xtuner

An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)

  • Project mention: PaliGemma: Open-Source Multimodal Model by Google | news.ycombinator.com | 2024-05-15
  • xTuring

Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6

  • Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07

Explore the project on GitHub.

  • LLaMA-LoRA-Tuner

UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models.

  • relora

    Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates

  • Project mention: LoRA Learns Less and Forgets Less | news.ycombinator.com | 2024-05-17

This paper [1] does attempt that and reports performance similar to conventional pre-training. However, the authors do start off with a normal full-rank training phase and claim that it is needed to 'warm start' the training process.

    [1] https://arxiv.org/abs/2307.05695

  • mLoRA

Provides efficient LLM fine-tuning via multi-LoRA optimization

  • Project mention: Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience? | /r/LocalLLaMA | 2023-12-06

    I want to train a Code LLaMA on some data, and I am looking for a Framework or Technique to train this on my PC with a 3090 Ti in it. In my research, I stumbled across the paper "ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU" https://arxiv.org/abs/2312.02515 with this GitHub project: https://github.com/TUDB-Labs/multi-lora-fine-tune.

  • lightning-mlflow-hf

    Use QLoRA to tune LLM in PyTorch-Lightning w/ Huggingface + MLflow

  • Project mention: Show HN: LoRA Tune LLM in Lightning on GPU | news.ycombinator.com | 2023-11-12
NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).
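All seven projects build on the same parameter-efficient fine-tuning (PEFT) idea, most often LoRA: the base weight matrix W stays frozen, and training only learns a low-rank update, so the effective weight is W + (alpha/r)·B·A with B of shape (d_out, r) and A of shape (r, d_in). A minimal dependency-free sketch of the arithmetic and the parameter savings (all sizes here are illustrative, not taken from any of the projects above):

```python
# LoRA arithmetic in plain Python (illustrative only, no framework assumed).
# The frozen base weight W (d_out x d_in) is never updated; training learns
# A (r x d_in) and B (d_out x r), and the effective weight is
#   W_eff = W + (alpha / r) * (B @ A).

def matmul(X, Y):
    """Naive matrix multiply for small demo matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), leaving W itself untouched."""
    BA = matmul(B, A)
    s = alpha / r
    return [[W[i][j] + s * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Tiny worked example: a rank-1 update added onto a 2x2 zero weight.
W = [[0.0, 0.0], [0.0, 0.0]]
A = [[1.0, 0.0]]            # r=1, d_in=2
B = [[1.0], [1.0]]          # d_out=2, r=1
print(lora_effective_weight(W, A, B, alpha=2.0, r=1))  # [[2.0, 0.0], [2.0, 0.0]]

# Why this is "parameter-efficient": at realistic sizes the factors are tiny.
d, r = 4096, 8
print(d * d, 2 * d * r)     # 16777216 full parameters vs 65536 LoRA parameters
```

This is why the tools above can fine-tune large models on a single consumer GPU: only the small A/B factors need gradients and optimizer state.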
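The core trick in the ReLoRA paper cited above is to periodically merge the low-rank factors into the frozen base weights and re-initialize them, so a sequence of low-rank updates accumulates into a high-rank total change. A hedged pure-Python sketch of that merge-and-restart loop (this is not the official code; a real run trains the factors between merges, resets optimizer state, and uses a jagged learning-rate schedule):

```python
import random

# Sketch of ReLoRA's merge-and-restart idea: each cycle produces a fresh
# rank-1 update b * a^T, merges it into the base weight W, then restarts
# from new factors. The sum of several rank-1 matrices can have rank > 1,
# which is how low-rank updates add up to a high-rank total update.

def relora_cycles(W, cycles, rng):
    d_out, d_in = len(W), len(W[0])
    for _ in range(cycles):
        # Fresh factors each cycle (a real implementation would train
        # them here before merging; we just draw random values).
        a = [rng.uniform(-0.1, 0.1) for _ in range(d_in)]
        b = [rng.uniform(-0.1, 0.1) for _ in range(d_out)]
        for i in range(d_out):           # merge step: W += outer(b, a)
            for j in range(d_in):
                W[i][j] += b[i] * a[j]
        # a, b are then discarded; the next cycle restarts from scratch.
    return W

rng = random.Random(0)
W = [[0.0] * 4 for _ in range(4)]
W = relora_cycles(W, cycles=3, rng=rng)
# W is now the sum of three rank-1 matrices, so its rank can be up to 3.
```

Each individual cycle still has LoRA's memory footprint, which is the appeal: high-rank training at a low-rank per-step cost.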
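The multi-LoRA idea behind mLoRA/ASPEN also deserves a sketch: because every adapter shares the same frozen base weight, one batch can serve many fine-tuning jobs at once, with each example routed through its own small A/B pair. A hedged illustration (adapter names, shapes, and the `lora_forward` helper are invented for this example, not mLoRA's API):

```python
# Multi-LoRA sketch: many adapters share one frozen base weight, so the
# expensive base computation is done once per example while each example
# picks its own cheap low-rank correction.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def lora_forward(W, adapters, adapter_id, x, alpha=1.0):
    """y = W x + (alpha / r) * B (A x), with A, B chosen per example."""
    A, B = adapters[adapter_id]
    base = matvec(W, x)                    # shared frozen base path
    low = matvec(B, matvec(A, x))          # two skinny matvecs per adapter
    r = len(A)
    return [base[i] + (alpha / r) * low[i] for i in range(len(base))]

d = 3
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # identity base
adapters = {
    "task_a": ([[1.0, 0.0, 0.0]], [[0.0], [1.0], [0.0]]),  # rank-1 factors
    "task_b": ([[0.0, 0.0, 1.0]], [[1.0], [0.0], [0.0]]),
}
# One mixed batch, two different fine-tuning jobs:
batch = [("task_a", [1.0, 2.0, 3.0]), ("task_b", [1.0, 2.0, 3.0])]
outputs = [lora_forward(W, adapters, aid, x) for aid, x in batch]
print(outputs)  # [[1.0, 3.0, 3.0], [4.0, 2.0, 3.0]]
```

Batching across adapters like this is what lets a single 3090-class GPU keep high throughput while serving several concurrent LoRA fine-tunes.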


Index

What are some of the best open-source peft projects? This list will help you:

Rank Project Stars
1 LLaMA-Factory 23,973
2 xtuner 3,013
3 xTuring 2,545
4 LLaMA-LoRA-Tuner 430
5 relora 411
6 mLoRA 198
7 lightning-mlflow-hf 47
