multi-lora-fine-tune VS LLaMA-Factory

Compare multi-lora-fine-tune vs LLaMA-Factory and see how they differ.

                     multi-lora-fine-tune    LLaMA-Factory
Mentions             1                       3
Stars                182                     21,791
Stars growth         15.9%                   -
Activity             9.3                     9.9
Last commit          9 days ago              4 days ago
Language             Python                  Python
License              Apache License 2.0      Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

multi-lora-fine-tune

Posts with mentions or reviews of multi-lora-fine-tune. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.

LLaMA-Factory

Posts with mentions or reviews of LLaMA-Factory. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-06.
  • FLaNK-AIM Weekly 06 May 2024
    45 projects | dev.to | 6 May 2024
  • Show HN: GPU Prices on eBay
    1 project | news.ycombinator.com | 23 Feb 2024
    Depends what model you want to train, and how well you want your computer to keep working while you're doing it.

    If you're interested in large language models there's a table of VRAM requirements for fine-tuning at [1], which says you could do the most basic type of fine-tuning on a 7B parameter model with 8GB of VRAM.

    You'll find that training takes quite a long time, and as a lot of the GPU power is going on training, your computer's responsiveness will suffer - even basic things like scrolling in your web browser or changing tabs use the GPU, after all.

    Spend a bit more and you'll probably have a better time. (A minimal code sketch of this kind of memory-constrained LoRA setup follows after this list.)

    [1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...

  • FLaNK Weekly 31 December 2023
    25 projects | dev.to | 31 Dec 2023
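
A minimal sketch of the memory-constrained LoRA fine-tuning setup described in the "Show HN: GPU Prices on eBay" comment above. It uses the generic Hugging Face transformers/peft/bitsandbytes stack rather than either project's own training scripts; the model name, LoRA rank, and other hyperparameters are illustrative assumptions, not values taken from multi-lora-fine-tune or LLaMA-Factory:

    # Illustrative only: load a 7B base model in 4-bit and attach small LoRA
    # adapters so that the trainable state fits in roughly 8GB of VRAM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_id = "meta-llama/Llama-2-7b-hf"  # assumed example model

    # Quantize the frozen base weights to 4-bit (QLoRA-style) to save memory.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)

    # Only the small LoRA adapter matrices are trained; the 7B base stays frozen.
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all parameters

From here the model can go into a standard transformers Trainer loop; a small per-device batch size plus gradient accumulation keeps peak memory low at the cost of longer wall-clock training time, which is the trade-off the comment describes.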

What are some alternatives?

When comparing multi-lora-fine-tune and LLaMA-Factory, you can also consider the following projects:

unsloth - Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory

KVQuant - KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

Finetune_LLMs - Repo for fine-tuning Causal LLMs

seatunnel - SeaTunnel is a next-generation, high-performance, distributed tool for massive-scale data integration.

Anima - 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU

machinascript-for-robots - Build LLM-powered robots in your garage with MachinaScript For Robots!

efficient-kan - An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).

generative-ai-python - The Gemini API Python SDK enables developers to use Google's state-of-the-art generative AI models to build AI-powered features and applications.

FLaNK-Ice - Apache Iceberg - Cloud Data Lakehouse

promptbench - A unified evaluation framework for large language models

HALOs - A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs).

kamal - Deploy web apps anywhere.