LoRA vs Awesome-LLM

Compare LoRA and Awesome-LLM and see what their differences are.

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (by microsoft)

Awesome-LLM

Awesome-LLM: a curated list of Large Language Models (by Hannibal046)

              LoRA                  Awesome-LLM
Mentions      34                    10
Stars         9,046                 14,335
Growth        8.6%                  -
Activity      5.4                   8.6
Last commit   about 2 months ago    8 days ago
Language      Python                -
License       MIT License           Creative Commons Zero v1.0 Universal
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

LoRA

Posts with mentions or reviews of LoRA. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-08.
  • DECT NR+: A technical dive into non-cellular 5G
    1 project | news.ycombinator.com | 2 Apr 2024
    This seems to be an order of magnitude better than LoRa (https://lora-alliance.org/ not https://arxiv.org/abs/2106.09685). LoRa doesn't have all the features this one does, like OFDM, TDM, FDM, and HARQ. I didn't know there's spectrum dedicated to DECT use.
  • Training LLMs Taking Too Much Time? Technique you need to know to train it faster
    1 project | dev.to | 3 Mar 2024
    So to solve this, we tried researching some optimization techniques and we found LoRA, which stands for Low-Rank Adaptation of Large Language Models.
  • OpenAI employee: GPT-4.5 rumor was a hallucination
    1 project | news.ycombinator.com | 17 Dec 2023
    > Anyone have any ideas / knowledge on how they deploy little incremental fixes to exploited jailbreaks, etc?

    LoRA[1] would be my guess.

    For a detailed explanation I recommend the paper. But the short explanation is that it is a trick that lets you train a much smaller, lower-dimensional set of weights which, when added to the original model, gives you the result you want.

    1: https://arxiv.org/abs/2106.09685
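
    A minimal sketch of the idea described above in plain PyTorch (shapes and rank below are illustrative, not taken from the paper): a small low-rank "patch" is trained and then added to the original weights.

    ```python
    import torch

    d_out, d_in, r = 768, 768, 8        # illustrative sizes; r is the LoRA rank

    W = torch.randn(d_out, d_in)        # frozen pretrained weight (never trained)
    B = torch.randn(d_out, r) * 0.01    # the small trainable factors learned during adaptation
    A = torch.randn(r, d_in) * 0.01

    # The patch is the low-rank product B @ A: it has shape d_out x d_in, but only
    # r * (d_out + d_in) numbers need to be stored and trained. Merging it into W
    # changes the model's behaviour without shipping a second full-size copy.
    W_adapted = W + B @ A
    print(W.shape, W_adapted.shape)     # both torch.Size([768, 768])
    ```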

  • Can a LoRa be used on models other than Stable Diffusion?
    2 projects | /r/StableDiffusion | 8 Dec 2023
    LoRA was initially developed for large language models, https://arxiv.org/abs/2106.09685 (2021). It was later that people discovered that it worked REALLY well for diffusion models.
  • StyleTTS2 – open-source Eleven Labs quality Text To Speech
    10 projects | news.ycombinator.com | 19 Nov 2023
    Curious if we'll see a Civitai-style LoRA[1] marketplace for text-to-speech models.

    1 = https://github.com/microsoft/LoRA

  • Andreessen Horowitz Invests in Civitai, Which Profits from Nonconsensual AI Porn
    1 project | news.ycombinator.com | 14 Nov 2023
    From https://arxiv.org/abs/2106.09685:

    > LoRA: Low-Rank Adaptation of Large Language Models

    > An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency.
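
    A back-of-the-envelope illustration of where those savings come from, for a single d x d weight matrix (the hidden width and rank below are illustrative):

    ```python
    d = 12288         # hidden width in the GPT-3 175B range (illustrative)
    r = 4             # a small LoRA rank

    full = d * d      # trainable parameters if you fine-tune the whole d x d matrix
    lora = 2 * d * r  # trainable parameters for the rank-r factors A (r x d) and B (d x r)

    print(full, lora, full / lora)   # 150994944, 98304 -> 1536x fewer for this matrix
    ```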

  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Yes, your understanding is correct. However, instead of adding a head on top of the network, most fine-tuning is currently done with LoRA (https://github.com/microsoft/LoRA). This introduces low-rank matrices between different layers of your model; these are then trained using your training data while the rest of the model's weights are frozen.
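
    A minimal, self-contained sketch of that setup in PyTorch (an illustration, not the loralib API itself): the pretrained weight is frozen, and only the two low-rank matrices receive gradients.

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # freeze the pretrained weight
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    layer = LoRALinear(nn.Linear(768, 768))
    print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['A', 'B']
    ```
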
  • Run LLMs at home, BitTorrent‑style
    10 projects | news.ycombinator.com | 17 Sep 2023
    Somewhat yes. See "LoRA": https://arxiv.org/abs/2106.09685

    They're not composable in the sense that you can take these adaptation layers and arbitrarily combine them, but training different models while sharing a common base of weights is a solved problem.

  • New LoRa RF distance record: 1336 km / 830 mi
    1 project | news.ycombinator.com | 7 Sep 2023
    With all the naive AI zealotry on HN can you really fault me?

    They're referring to this:

    https://arxiv.org/abs/2106.09685

  • Open-source Fine-Tuning on Codebase with Refact
    2 projects | dev.to | 5 Sep 2023
    It's possible to fine-tune all parameters (called "full fine-tune"), but recently PEFT methods have become popular. PEFT stands for Parameter-Efficient Fine-Tuning. There are several methods available; the most popular so far is LoRA (2106.09685), which can train less than 1% of the original weights. LoRA has one important parameter -- tensor size, called lora_r. It defines how much information LoRA can add to the network. If your codebase is small, the fine-tuning process will see the same data over and over again, many times in a loop. We found that for a smaller codebase, small LoRA tensors work best because they won't overfit as much -- the tensors just don't have the capacity to fit the limited training set exactly. As the codebase gets bigger, the tensors should become bigger as well. We also unfreeze token embeddings at a certain codebase size.

    To pick all the parameters automatically, we have developed a heuristic that calculates a score based on the source files it sees. This score is then used to determine the appropriate LoRA size, number of fine-tuning steps, and other parameters. We have tested this heuristic on several beta-test clients, on small codebases of several files, and on large codebases like the Linux kernel (about 50,000 useful source files). If the heuristic doesn't work for you for whatever reason, you can set all the parameters yourself.
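
    The lora_r knob described above corresponds to the rank parameter exposed by common PEFT tooling. A minimal sketch using the Hugging Face peft library (an illustration only -- Refact's own fine-tuning code may differ, and the base model and target modules below are placeholders):

    ```python
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model

    # r plays the role of lora_r: a larger rank gives the adapter more capacity,
    # and more room to overfit a small codebase.
    config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05,
                        target_modules=["c_attn"])          # GPT-2's fused attention projection
    model = get_peft_model(model, config)
    model.print_trainable_parameters()   # typically well under 1% of the base model
    ```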

Awesome-LLM

Posts with mentions or reviews of Awesome-LLM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-28.
  • XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
    3 projects | news.ycombinator.com | 28 Jun 2023
    Here are some high level answers:

    "7B" refers to the number of parameters or weights for a model. For a specific model, the versions with more parameters take more compute power to train and perform better.

    A foundational model is the part of an ML model that is "pretrained" on a massive data set (and usually accounts for the bulk of the compute cost). This is usually considered the "raw" model, which is then fine-tuned for specific tasks (e.g., turned into a chatbot).

    "8K length" refers to the Context Window length (in tokens). This is basically an LLM's short term memory - you can think of it as its attention span and what it can generate reasonable output for.

    "1.5T tokens" refers to the size of the corpus of the training set.

    In general Wikipedia (or I suppose ChatGPT 4/Bing Chat with Web Browsing) is a decent enough place to start reading/asking basic questions. I'd recommend starting here: https://en.wikipedia.org/wiki/Large_language_model and finding the related concepts.

    For those going deeper, there are a lot of general resource lists like https://github.com/Hannibal046/Awesome-LLM or https://github.com/Mooler0410/LLMsPracticalGuide or one I like, https://sebastianraschka.com/blog/2023/llm-reading-list.html (there are a bajillion of these and you'll find more once you get a grasp on the terms you want to surf for). Almost everything is published on arXiv, and most of it is fairly readable even as a layman.

    For non-ML programmers looking to get up to speed, I feel like Karpathy's Zero to Hero/nanoGPT or Jay Mody's picoGPT https://jaykmody.com/blog/gpt-from-scratch/ are an alternative, and maybe a better, way to understand the basic concepts on a practical level.
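
    A rough back-of-the-envelope reading of the "7B" and "1.5T tokens" figures above (a sketch only -- real memory use depends on precision, optimizer state, and implementation):

    ```python
    params = 7e9            # "7B" parameters
    tokens = 1.5e12         # "1.5T" training tokens
    bytes_per_param = 2     # fp16/bf16 weights

    print(f"~{params * bytes_per_param / 1e9:.0f} GB just to hold the weights in half precision")  # ~14 GB
    print(f"~{tokens / params:.0f} training tokens per parameter")                                 # ~214
    ```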

  • Couple of questions about a.i that can be run locally
    1 project | /r/ArtificialInteligence | 26 Jun 2023
  • How to dive deeper into LLMs?
    1 project | /r/LocalLLaMA | 21 Jun 2023
  • [Hiring] Developer to build AI-powered chatbots with open source LLMs
    1 project | /r/forhire | 15 Jun 2023
  • Creating a Wiki for all things Local LLM. What do you want to know?
    2 projects | /r/LocalLLaMA | 14 Jun 2023
    Check out this repo; there should be some useful things worth noting: https://github.com/Hannibal046/Awesome-LLM
  • Large Language Model (LLM) Resources
    3 projects | /r/learnmachinelearning | 11 Jun 2023
  • Curated list for LLMs: papers, training frameworks, tools to deploy, public APIs
    1 project | news.ycombinator.com | 1 Jun 2023
  • Performance of GPT-4 vs PaLM 2
    9 projects | /r/singularity | 17 May 2023
    First, this is a pretty good starting point as a resource for learning about and finding open source models and the overall public history of progress of LLMs.
  • FreedomGPT: AI with no censorship
    3 projects | /r/KotakuInAction | 12 May 2023
    This seems fishy as fuck. The first red flag is a fishy installer instead of any huggingface link for the model. Upon further search I found this: https://desuarchive.org/g/thread/92686632/#92692092 There are posts in its own sub, r slash freedomgpt, raising concerns, and many new accounts with low karma replying to them (I don't think I can link other subs here, check them yourself) -- 100% some botting/astroturfing going on. Not touching this. Even in the best-case scenario that this is legit with no funny business, it is supposed to be based on LLaMA, which is a substantially different, tiny model (hence why it can be run on your computer at all). This is no ChatGPT equivalent either way. I would recommend getting something more reputable from GitHub if you are interested in running LLMs yourself.
  • Ask HN: Foundational Papers in AI
    1 project | news.ycombinator.com | 4 May 2023
    https://github.com/Hannibal046/Awesome-LLM has a curated list of LLM specific resources.

    Not the creator, just happened upon it when researching LLMs today.

What are some alternatives?

When comparing LoRA and Awesome-LLM you can also consider the following projects:

LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]

ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.

FreedomGPT - This codebase is for a React and Electron-based app that executes the FreedomGPT LLM locally (offline and private) on Mac and Windows using a chat-based interface

ControlNet - Let us control diffusion models!

LLMZoo - ⚡LLM Zoo is a project that provides data, models, and evaluation benchmark for large language models.⚡

peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

dalai - The simplest way to run LLaMA on your local machine

alpaca-lora - Instruct-tune LLaMA on consumer hardware

langchain - 🦜🔗 Build context-aware reasoning applications

LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]