LoRA: Low-Rank Adaptation of Large Language Models

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com.

  • LoRA

    Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
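
    A minimal sketch of the core idea (illustrative only; loralib's real classes differ in detail): freeze the pretrained weight W and learn a low-rank update, so the layer computes h = Wx + (alpha/r) * B(Ax), with A of shape (r, d_in) and B of shape (d_out, r).

        import torch
        import torch.nn as nn

        class LoRALinear(nn.Module):
            """Frozen linear layer plus a trainable low-rank update (sketch)."""
            def __init__(self, in_features, out_features, r=8, alpha=16):
                super().__init__()
                self.base = nn.Linear(in_features, out_features)
                for p in self.base.parameters():
                    p.requires_grad_(False)  # pretrained weights stay frozen
                # A starts small and random, B starts at zero, so the update BA
                # is zero at initialization and training begins from the
                # pretrained function.
                self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
                self.lora_B = nn.Parameter(torch.zeros(out_features, r))
                self.scaling = alpha / r

            def forward(self, x):
                return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

        layer = LoRALinear(4096, 4096, r=8)
        x = torch.randn(2, 16, 4096)
        print(layer(x).shape)  # torch.Size([2, 16, 4096])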

  • peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
    https://github.com/huggingface/peft
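
    For reference, a typical LoRA setup with peft looks roughly like the sketch below. LoraConfig, get_peft_model, and print_trainable_parameters are real peft APIs; the checkpoint name and target_modules are placeholders that depend on the base model you load.

        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, get_peft_model

        # Hypothetical checkpoint name -- substitute your own base model.
        model = AutoModelForCausalLM.from_pretrained("your-org/your-base-model")

        config = LoraConfig(
            r=8,                                  # rank of the update matrices
            lora_alpha=16,                        # scaling factor
            target_modules=["q_proj", "v_proj"],  # module names vary by architecture
            lora_dropout=0.05,
            task_type="CAUSAL_LM",
        )

        model = get_peft_model(model, config)
        model.print_trainable_parameters()  # typically well under 1% trainable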

  • Introducing .NET Multi-platform App UI (MAUI)

    .NET MAUI is the .NET Multi-platform App UI, a framework for building native device applications spanning mobile, tablet, and desktop.

  • Microsoft has done this before with mauikit and mauilinux: https://github.com/dotnet/maui/issues/35

    It's unlikely that they even considered checking whether they were stomping on existing names.

  • alpaca-lora

    Instruct-tune LLaMA on consumer hardware

  • For those wondering why this is interesting: this technique is being used to reproduce[0] the Alpaca results from Stanford[1] (though it's unclear with what fidelity) with a few hours of training on consumer-grade hardware.

    I believe there will be a cottage industry of providing application-specific fine-tuned models like this that can run, e.g. on AWS, very inexpensively. The barrier today seems to be that the base model (here, LLaMA) is encumbered and can't be used commercially. I'm confident someone will soon release, e.g., an MIT-licensed equivalent, and we'll all be off to the races.

    [0] https://github.com/tloen/alpaca-lora

    [1] https://github.com/tatsu-lab/stanford_alpaca

  • LyCORIS

    Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.

  • There are some work-in-progress evolutions of SD LoRA, like LoCon and LyCORIS.

    https://github.com/KohakuBlueleaf/LyCORIS
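
    As I understand it, LoCon extends the low-rank update from linear layers to convolutions. A rough sketch of that idea (assumed from the general approach, not LyCORIS's actual implementation): chain a k x k conv down to rank r with a 1 x 1 conv back up, added to the frozen base conv.

        import torch
        import torch.nn as nn

        class LoConConv2d(nn.Module):
            """Frozen conv layer plus a trainable low-rank conv update (sketch)."""
            def __init__(self, in_ch, out_ch, kernel_size=3, r=4, alpha=4):
                super().__init__()
                pad = kernel_size // 2
                self.base = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
                for p in self.base.parameters():
                    p.requires_grad_(False)     # pretrained conv stays frozen
                self.down = nn.Conv2d(in_ch, r, kernel_size, padding=pad, bias=False)
                self.up = nn.Conv2d(r, out_ch, 1, bias=False)
                nn.init.zeros_(self.up.weight)  # update starts at zero
                self.scaling = alpha / r

            def forward(self, x):
                return self.base(x) + self.scaling * self.up(self.down(x))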

  • alpaca_lora_4bit

  • Modern PEFT methods with LoRA actually do reduce training time by orders of magnitude.

    Here's an example of 20 seconds per epoch on a single consumer GPU: https://github.com/johnsmith0031/alpaca_lora_4bit/issues/7#i...
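
    Back-of-the-envelope arithmetic shows where the savings come from: LoRA shrinks the trainable-parameter count, and with it the gradient and optimizer-state memory. The hidden size below is an assumption in the ballpark of a 7B-parameter model; exact figures vary.

        d = 4096          # hidden dimension (assumed)
        r = 8             # LoRA rank
        full = d * d      # params in one full projection matrix: 16,777,216
        lora = 2 * d * r  # params in the update A (r x d) + B (d x r): 65,536
        print(f"LoRA trains {lora / full:.2%} of the matrix's parameters")  # 0.39%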

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
