LoRA vs LMFlow

Compare LoRA and LMFlow to see how they differ.

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (by Microsoft)

LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. (by OptimalScale)
             LoRA                  LMFlow
Mentions     34                    10
Stars        9,046                 8,000
Growth       3.3%                  3.0%
Activity     5.4                   9.6
Last commit  about 2 months ago    6 days ago
Language     Python                Python
License      MIT License           Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

LoRA

Posts with mentions or reviews of LoRA. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-08.
  • DECT NR+: A technical dive into non-cellular 5G
    1 project | news.ycombinator.com | 2 Apr 2024
    This seems to be an order of magnitude better than LoRa (https://lora-alliance.org/ not https://arxiv.org/abs/2106.09685). LoRa doesn't have all the features this one does like OFDM, TDM, FDM, and HARQ. I didn't know there's spectrum dedicated for DECT use.
  • Training LLMs Taking Too Much Time? Technique you need to know to train it faster
    1 project | dev.to | 3 Mar 2024
    So to solve this, we researched some optimization techniques and found LoRA, which stands for Low-Rank Adaptation of Large Language Models.
  • OpenAI employee: GPT-4.5 rumor was a hallucination
    1 project | news.ycombinator.com | 17 Dec 2023
    > Anyone have any ideas / knowledge on how they deploy little incremental fixes to exploited jailbreaks, etc?

    LoRa[1] would be my guess.

    For a detailed explanation I recommend the paper. But the short version is that it is a trick that lets you train a much smaller, lower-dimensional set of weights which, when added to the original model, gives you the result you want.

    1: https://arxiv.org/abs/2106.09685

  • Can a LoRa be used on models other than Stable Diffusion?
    2 projects | /r/StableDiffusion | 8 Dec 2023
    LoRA was initially developed for large language models, https://arxiv.org/abs/2106.09685 (2021). It was later that people discovered that it worked REALLY well for diffusion models.
  • StyleTTS2 – open-source Eleven Labs quality Text To Speech
    10 projects | news.ycombinator.com | 19 Nov 2023
    Curious if we'll see a Civitai-style LoRA[1] marketplace for text-to-speech models.

    1 = https://github.com/microsoft/LoRA

  • Andreessen Horowitz Invests in Civitai, Which Profits from Nonconsensual AI Porn
    1 project | news.ycombinator.com | 14 Nov 2023
    From https://arxiv.org/abs/2106.09685:

    > LoRA: Low-Rank Adaptation of Large Language Models

    > An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency.

  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Yes, your understanding is correct. However, instead of adding a head on top of the network, most fine-tuning is currently done with LoRA (https://github.com/microsoft/LoRA). This introduces low-rank matrices between different layers of your model; those are then trained on your data while the rest of the model's weights are frozen.
  • Run LLMs at home, BitTorrent‑style
    10 projects | news.ycombinator.com | 17 Sep 2023
    Somewhat yes. See "LoRA": https://arxiv.org/abs/2106.09685

    They're not composable in the sense of being able to take these adaptation layers and arbitrarily combine them, but training different models that share a common base of weights is a solved problem.

  • New LoRa RF distance record: 1336 km / 830 mi
    1 project | news.ycombinator.com | 7 Sep 2023
    With all the naive AI zealotry on HN can you really fault me?

    They're referring to this:

    https://arxiv.org/abs/2106.09685

  • Open-source Fine-Tuning on Codebase with Refact
    2 projects | dev.to | 5 Sep 2023
    It's possible to fine-tune all parameters (called "full fine-tune"), but recently PEFT methods have become popular. PEFT stands for Parameter-Efficient Fine-Tuning. Several methods are available; the most popular so far is LoRA (arXiv:2106.09685), which can train less than 1% of the original weights. LoRA has one important parameter -- the tensor size, called lora_r. It defines how much information LoRA can add to the network.

    If your codebase is small, the fine-tuning process will see the same data over and over again, many times in a loop. We found that for a smaller codebase, small LoRA tensors work best because they won't overfit as much -- the tensors just don't have the capacity to fit the limited training set exactly. As the codebase gets bigger, the tensors should become bigger as well. We also unfreeze token embeddings at a certain codebase size.

    To pick all the parameters automatically, we have developed a heuristic that calculates a score based on the source files it sees. This score is then used to determine the appropriate LoRA size, the number of fine-tuning steps, and other parameters. We have tested this heuristic on several beta-test clients, on small codebases of a few files, and on large codebases like the Linux kernel (about 50,000 useful source files). If the heuristic doesn't work for you for whatever reason, you can set all the parameters yourself.
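
To make the mechanism concrete: the abstract quoted above describes freezing the pre-trained weights and injecting trainable rank-decomposition matrices, and the Refact post describes tuning the rank (lora_r). The sketch below illustrates that idea with the loralib package from the Microsoft repo compared on this page; the layer sizes and r=8 are illustrative choices, not values taken from any of the posts.

    # Minimal LoRA sketch using loralib (pip install loralib).
    # Layer sizes and r are illustrative, not recommendations.
    import torch
    import torch.nn as nn
    import loralib as lora

    # lora.Linear behaves like nn.Linear but carries a trainable
    # low-rank update of rank r (the "lora_r" knob discussed above).
    model = nn.Sequential(
        lora.Linear(768, 768, r=8, lora_alpha=16),
        nn.ReLU(),
        lora.Linear(768, 768, r=8, lora_alpha=16),
    )

    # Freeze everything except the injected low-rank matrices (lora_A, lora_B).
    lora.mark_only_lora_as_trainable(model)

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable} / {total} parameters")

    # After training, only the small LoRA weights need to be saved or shared.
    torch.save(lora.lora_state_dict(model), "lora_weights.pt")

A larger r gives the adapter more capacity but more parameters to fit; as the Refact post notes, small datasets generally call for a small r to limit overfitting.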

LMFlow

Posts with mentions or reviews of LMFlow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-03.
  • Your weekly machine learning digest
    2 projects | /r/learnmachinelearning | 3 Jul 2023
  • Any guide/intro to fine-tuning anywhere?
    5 projects | /r/LocalLLaMA | 28 Jun 2023
    You might want to have a look at LMFlow.
  • Robin V2 Launches: Achieves Unparalleled Performance on OpenLLM!
    2 projects | /r/machinelearningnews | 15 Jun 2023
  • [D] Have you tried fine-tuning an open source LLM?
    6 projects | /r/MachineLearning | 13 May 2023
    I'd like to recommend LMFlow (https://github.com/OptimalScale/LMFlow), a fast and extensible toolkit for finetuning and inference of large foundation models.
  • [R] DetGPT: Detect What You Need via Reasoning
    2 projects | /r/MachineLearning | 12 May 2023
    The "reasoning-based object detection" is a challenging problem because the detector needs to understand and reason about the user's coarse-grained/abstract instructions and analyze the current visual information to locate the target object accurately. In this direction, researchers from the Hong Kong University of Science and Technology and the University of Hong Kong have conducted some preliminary explorations. Specifically, they use a pre-trained visual encoder (BLIP-2) to extract visual features from images and align the visual features to the text space using an alignment function. They use a large-scale language model (Robin/Vicuna) to understand the user's question, combined with the visual information they see, to reason about the objects that users are truly interested in. Then, they provide the object names to the pre-trained detector (Grounding-DINO) for specific location prediction. In this way, the model can analyze the image based on any user instructions and accurately predict the location of the object of interest to the user. It is worth noting that the difficulty here mainly lies in the fact that the model needs to achieve task-specific output formats for different specific tasks as much as possible without damaging the model's original abilities. To guide the language model to follow specific patterns and generate outputs that conform to the object detection format, the research team used ChatGPT to generate cross-modal instruction data to fine-tune the model. Specifically, based on 5000 coco images, they used ChatGPT to create a 30,000 cross-modal image-text fine-tuning dataset. To improve the efficiency of training, they fixed other model parameters and only learned cross-modal linear mapping. Experimental results show that even if only the linear layer is fine-tuned, the language model can understand fine-grained image features and follow specific patterns to perform inference-based image detection tasks, showing excellent performance. This research topic has great potential. Based on this technology, the field of home robots will further shine: people in homes can use abstract or coarse-grained voice instructions to make robots understand, recognize, and locate the objects they need, and provide relevant services. In the field of industrial robots, this technology will bring endless vitality: industrial robots can cooperate more naturally with human workers, accurately understand their instructions and needs, and achieve intelligent decision-making and operations. On the production line, human workers can use coarse-grained voice instructions or text input to allow robots to automatically understand, recognize, and locate the items that need to be processed, thereby improving production efficiency and quality. With object detection models that come with reasoning capabilities, we can develop more intelligent, natural, and efficient robots to provide more convenient, efficient, and humane services to humans. This is a field with broad prospects and deserves more attention and further exploration by more researchers. DetGPT supports multiple language models and has been validated based on two language models, Robin-13B and Vicuna-13B. The Robin series language model is a dialogue model trained by the LMFlow team ( https://github.com/OptimalScale/LMFlow) at the Hong Kong University of Science and Technology, achieving results competitive to Vicuna on multiple language ability evaluation benchmarks (model download: https://github.com/OptimalScale/LMFlow#model-zoo). 
Previously, the LMFlow team trained a vertical GPT model using a consumer-grade 3090 graphics card in just 5 hours. Today, this team, in collaboration with the NLP Group at the University of Hong Kong, has brought us a multimodal surprise. Welcome to try our demo and open-source code! Online demo: https://detgpt.github.io/ Open-source code: https://github.com/OptimalScale/DetGPT
  • Leaderboard for LLMs? [D]
    1 project | /r/MachineLearning | 9 May 2023
    Hi, LMFlow Benchmark (https://github.com/OptimalScale/LMFlow) evaluates 31 open-source LLMs with an automatic metric: negative log likelihood (see the sketch after this list).
  • [R] LMFlow Benchmark: An Automatic Evaluation Framework for Open-Source LLMs
    3 projects | /r/MachineLearning | 9 May 2023
    LMFlow: https://github.com/OptimalScale/LMFlow
  • [R] Foundation Model Alignment with RAFT🛶 in LMFlow
    2 projects | /r/MachineLearning | 17 Apr 2023
    Its implementation is available from https://github.com/OptimalScale/LMFlow.
  • LMFlow – Toolkit for Finetuning and Inference of Large Foundation Models
    1 project | news.ycombinator.com | 13 Apr 2023
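
For readers unfamiliar with the DetGPT pipeline summarized in the post above, the following pseudocode sketches the flow of data it describes. Every function and parameter name here is a hypothetical placeholder, not DetGPT's actual API; see https://github.com/OptimalScale/DetGPT for the real implementation.

    # Hedged pseudocode of the reasoning-based detection pipeline described above.
    # All names are placeholders, not the actual DetGPT code.
    def reasoning_based_detection(image, instruction,
                                  visual_encoder,  # frozen, e.g. BLIP-2
                                  projector,       # the only trained part: linear map to text space
                                  llm,             # e.g. Robin-13B or Vicuna-13B
                                  detector):       # pre-trained, e.g. Grounding-DINO
        # 1. Extract image features with the frozen visual encoder.
        visual_features = visual_encoder(image)

        # 2. Align the features with the language model's embedding space;
        #    this linear projection is the only component that is fine-tuned.
        visual_tokens = projector(visual_features)

        # 3. The LLM reasons over the user's instruction plus the visual tokens
        #    and names the objects the user actually needs.
        object_names = llm.generate(instruction, visual_tokens)

        # 4. An open-vocabulary detector localizes those objects.
        return detector(image, object_names)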

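On the LMFlow Benchmark post above: negative log likelihood (NLL) scores a model by how much probability it assigns to held-out text, so lower is better. The sketch below shows the metric itself using Hugging Face transformers; it is an illustration of the idea, not LMFlow's evaluation code, and "gpt2" is just an example model.

    # Mean per-token negative log likelihood of a text under a causal LM.
    # Illustrative only; not LMFlow's implementation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    text = "Large language models can be adapted with low-rank updates."
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        # When labels are provided, the causal-LM loss is the average
        # -log p(token | previous tokens) over the sequence.
        outputs = model(**inputs, labels=inputs["input_ids"])

    print(f"mean NLL: {outputs.loss.item():.3f}")
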
What are some alternatives?

When comparing LoRA and LMFlow you can also consider the following projects:

LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.

axolotl - Go ahead and axolotl questions

ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.

CogVLM - a state-of-the-art-level open visual language model | multimodal pretrained model

ControlNet - Let us control diffusion models!

chatgpt_macro_for_texstudio - The ChatGPT Macro for TeXstudio is a user-friendly integration that connects TeXstudio with OpenAI's API.

peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

llm-foundry - LLM training code for Databricks foundation models

alpaca-lora - Instruct-tune LLaMA on consumer hardware

const_layout - Official implementation of the MM'21 paper "Constrained Graphic Layout Generation via Latent Optimization" (LayoutGAN++, CLG-LO, and Layout evaluation)

LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

giskard - 🐢 Open-Source Evaluation & Testing framework for LLMs and ML models