qlora VS fairseq

Compare qlora vs fairseq and see what their differences are.

qlora

QLoRA: Efficient Finetuning of Quantized LLMs (by artidoro)

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python. (by facebookresearch)
                 qlora             fairseq
Mentions         80                89
Stars            9,388             29,205
Growth           -                 1.6%
Activity         7.4               6.6
Latest commit    7 months ago      6 days ago
Language         Jupyter Notebook  Python
License          MIT License       MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

qlora

Posts with mentions or reviews of qlora. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-30.
  • FLaNK Stack Weekly for 30 Oct 2023
    24 projects | dev.to | 30 Oct 2023
  • I released Marx 3B V3.
    1 project | /r/LocalLLaMA | 25 Oct 2023
    Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
  • Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
    2 projects | news.ycombinator.com | 26 Jul 2023
    https://github.com/artidoro/qlora

    The tools and mechanisms to get a model to do what you want are changing ever so quickly. Build and understand a notebook yourself, and reduce dependencies. You will need to switch them.

  • Yet another QLoRA tutorial
    2 projects | /r/LocalLLaMA | 24 Jul 2023
    My own project is still in raw generated form, and this makes me think about trying qlora's scripts, since it gives me some confidence I should be able to get it to work now that someone else has carved a path and charted the map. I was going to target llamatune, which was mentioned here the other day.
  • Creating a new Finetuned model
    3 projects | /r/LocalLLaMA | 11 Jul 2023
    Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the lower end, openchat/openchat · Hugging Face; there are many more examples.)
  • [R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
    2 projects | /r/MachineLearning | 4 Jul 2023
    4-bit quantization training mainly refers to qlora. Simply put, qlora quantizes the weights of the LLM to 4-bit for storage, while dequantizing them to 16-bit during the training process to preserve training precision. This method significantly reduces GPU memory overhead during training (training speed should not vary much). The approach is well suited to being combined with parameter-efficient methods. However, the original paper was designed for single-modal LLMs, and the code has already been wrapped in HuggingFace's library. Therefore, we extracted the core code from HuggingFace's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in the LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which is roughly a dozen lines of code.
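    As a concrete illustration of the "wrapped in HuggingFace's library" path mentioned above, here is a minimal sketch of loading an LLM with its linear layers quantized to 4-bit NF4 and attaching LoRA adapters on top, using transformers, bitsandbytes, and peft. The base model name, target modules, and LoRA hyperparameters are illustrative assumptions; this is not LaVIN's actual quantization.py.

```python
# Minimal QLoRA-style setup sketch: 4-bit NF4 storage, 16-bit compute, LoRA on top.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 data type from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to 16-bit for compute
    bnb_4bit_use_double_quant=True,         # double-quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                  # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # which linear layers get LoRA adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the small adapter weights are trainable
```

    The base model's linears are stored in 4-bit and frozen; only the LoRA adapters are updated, which is what keeps the GPU memory overhead low.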
  • [D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
    2 projects | /r/MachineLearning | 3 Jul 2023
    There have been some new developments like QLoRA which help fine-tune LLMs without updating all the weights.
  • Finetune MPT-30B using QLORA
    2 projects | /r/LocalLLaMA | 3 Jul 2023
    This might be helpful: https://github.com/artidoro/qlora/issues/10
  • is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
    1 project | /r/LocalLLaMA | 29 Jun 2023
    Curious, since the qlora paper only reports the LoRA/QLoRA comparison against full fine-tuning for the small 7B models; for 13B/33B/65B it does not do so (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more on the efficacy or disadvantages of LoRA.
  • Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
    1 project | /r/LocalLLaMA | 29 Jun 2023
    This might not be the appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train QLoRA. You just prepare your own dataset and run the following command: python qlora.py --model_name_or_path --dataset="path/to/your/dataset" --dataset_format="self-instruct" This is only available for several dataset formats, but every dataset format has to have input-output pairs, so the dataset JSON has to look like this: [ { "input": "something", "output": "something" }, { "input": "something", "output": "something" } ]
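    For illustration, here is a small sketch of writing a dataset in the input/output shape the quoted command expects. The records and the file name are made up, and the exact set of accepted --dataset_format values is whatever artidoro/qlora's qlora.py defines.

```python
# Write a toy dataset as a JSON list of {"input": ..., "output": ...} records.
import json

records = [
    {"input": "Summarize: The quick brown fox jumps over the lazy dog.",
     "output": "A fox jumps over a dog."},
    {"input": "Translate to French: Good morning.",
     "output": "Bonjour."},
]

with open("my_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```

    The resulting file can then be passed to qlora.py through the --dataset and --dataset_format flags shown in the quote above.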

fairseq

Posts with mentions or reviews of fairseq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-03.
  • Sequence-to-Sequence Toolkit Written in Python
    1 project | news.ycombinator.com | 30 Mar 2024
  • Unsupervised (Semi-Supervised) ASR/STT training recipes
    2 projects | /r/deeplearning | 3 Nov 2023
  • Nvidia's 900 tons of GPU muscle bulks up server market, slims down wallets
    1 project | news.ycombinator.com | 19 Sep 2023
    > Is there really no way to partition the workload to run with 16gb memory per card?

    It really depends and this can get really complicated really fast. I'll give a tldr and then a longer explanation.

    TLDR:

    Yes, you can easily split networks up. If your main bottleneck is batch size (i.e. training) then there aren't huge differences in spreading across multiple GPUs assuming you have good interconnects (GPU direct is supported). If you're running inference and the model fits on the card you're probably fine too unless you need to do things like fancy inference batching (i.e. you have LOTS of users)

    Longer version:

    You can always split things up. If we think about networks we recognize some nice properties about how they operate as mathematical groups. Non-residual networks are compositional, meaning each layer can be treated as a sub network (every residual block can be treated this way too). Additionally, we may have associative and distributive properties depending on the architecture (some even have commutative!). So we can use these same rules to break apart networks in many different ways. There are often performance hits for doing this though, as it practically requires you touching the disk more often but in some more rare cases (at least to me, let me know if you know more) they can help.

    I mentioned the batching above, and this can get kind of complicated. There are actual performance differences when you batch in groups of data (i.e. across GPUs) compared to batching on a single accelerator, and this difference isn't talked about a lot. It comes down to how much your algorithm depends on batching and which operations are used, such as batch norm. Batch norm is calculated across the GPU's local batch, not the distributed batch (unless you introduce blocking), so your gradients AND inference are computed differently. In DDP your whole network is cloned across cards, so you basically run the forward pass on multiple replicas, compute the loss locally, and then all-reduce the gradients during the backward pass so every copy applies the same update. There is an even bigger difference when you use lazy regularization (don't compute gradients for n minibatches). GANs are notorious for using this, and personally I've seen large benefits from distributed training for them. GANs usually have small batch sizes and aren't getting anywhere near the memory of the card anyway (GANs are typically unstable, so large batch sizes can harm them), but also pay attention to this when evaluating papers (of course, as well as how much hyper-parameter tuning has been done; this is always tricky when comparing works, especially between academia and big labs. You can easily be fooled about which is the better model. Evaluating models is far harder than people give it credit for, especially in the modern era of LLMs. I could rant a lot about just this alone). Basically, in short, we can think of this as an ensembling method, except our models are actually identical (you could parallel-reduce lazily too, which creates some periodic divergence between your models, but that's not important for conceptual understanding, just worth noting).

    There are also techniques to split a single model up, called model sharding and checkpointing. Model sharding is where you split a single model across multiple GPUs. You're taking advantage of the compositional property of networks, meaning that as long as there isn't a residual connection across your split point, you can treat one network as a series of smaller networks. This has obvious drawbacks, as you need to feed one into the next, so the operations have to be synchronous, but sometimes this isn't too bad. Checkpointing is very similar, but you're doing the same thing on a single GPU. Your hit here is in I/O, which may or may not be too bad with GPU Direct and depends heavily on your model size (were you splitting because of batch size or because of model size?).

    This is all still pretty high level, but if you want to dig into it more, Meta developed a toolkit called fairseq that will do a lot of this for you, and they have optimized it:

    https://engineering.fb.com/2021/07/15/open-source/fsdp/

    https://github.com/facebookresearch/fairseq

    TLDR: really depends on your use case, but it is a good question.
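    To make the comment above concrete, here is a minimal PyTorch sketch (not fairseq's or FSDP's own code) of the three ideas it describes: DDP replication with gradient all-reduce, activation checkpointing, and FSDP-style parameter sharding. The toy model, layer sizes, and the torchrun launch are illustrative assumptions, and a reasonably recent PyTorch (2.x) is assumed for the FSDP and use_reentrant APIs.

```python
# Three ways to spread the memory/compute of training: replicate (DDP),
# recompute activations (checkpointing), or shard parameters (FSDP).
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.checkpoint import checkpoint_sequential


def build_model() -> nn.Sequential:
    # A toy non-residual stack: purely compositional, so it can be split
    # anywhere between layers.
    return nn.Sequential(*[nn.Sequential(nn.Linear(2048, 2048), nn.ReLU())
                           for _ in range(8)])


def main() -> None:
    # Assumes launch via `torchrun --nproc_per_node=<gpus> sketch.py`,
    # which sets RANK / LOCAL_RANK / WORLD_SIZE.
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    x = torch.randn(8, 2048, device=f"cuda:{local_rank}")

    # Option 1: DDP -- the full model is replicated on every GPU; after the
    # backward pass, gradients (not the loss) are all-reduced across ranks.
    ddp_model = DDP(build_model().cuda(local_rank), device_ids=[local_rank])
    ddp_model(x).sum().backward()

    # Option 2: activation checkpointing -- store fewer activations and
    # recompute them during backward, trading compute for memory.
    plain = build_model().cuda(local_rank)
    out = checkpoint_sequential(plain, segments=4, input=x, use_reentrant=False)
    out.sum().backward()

    # Option 3: FSDP -- shard parameters, gradients, and optimizer state
    # across ranks instead of replicating them (the fb.com FSDP post above).
    fsdp_model = FSDP(build_model().cuda(local_rank))
    fsdp_model(x).sum().backward()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

    In practice you would pick one of the three depending on whether the bottleneck is batch size (DDP), activation memory (checkpointing), or parameter memory (FSDP); fairseq and the FSDP post linked above cover the sharded case in much more depth.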

  • Talk back and forth with AI like you would with a person
    1 project | /r/singularity | 7 Jul 2023
    How do they do the text-to-voice conversion so fast? https://github.com/facebookresearch/fairseq/tree/main (open source, takes under a minute to do text-to-voice).
  • Voice generation AI (TTS)
    3 projects | /r/ArtificialInteligence | 1 Jul 2023
    It might be worth checking out Meta's TTS tho, I haven't gotten the chance to fiddle around with it but it looks somewhat promising https://github.com/facebookresearch/fairseq/tree/main/examples/mms
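    For anyone who wants to try MMS TTS without cloning fairseq, here is a small sketch using the Hugging Face transformers port of one MMS checkpoint. The checkpoint name (facebook/mms-tts-eng), the sample text, and the output path are assumptions, and the fairseq examples directory linked above remains the canonical route.

```python
# Synthesize speech with an MMS TTS checkpoint via the transformers port.
import torch
import scipy.io.wavfile
from transformers import AutoTokenizer, VitsModel

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer("Hello from the MMS text-to-speech model.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, samples)

scipy.io.wavfile.write("mms_tts_sample.wav",
                       rate=model.config.sampling_rate,
                       data=waveform.squeeze().numpy())
```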
  • Translation app with TTS (text-to-speech) for Persian?
    2 projects | /r/machinetranslation | 24 Jun 2023
    They have instructions on how to use it on the command line and a notebook on how to use it as a Python library.
  • Why no work on open source TTS (Text to speech) models
    2 projects | /r/ArtificialInteligence | 20 Jun 2023
  • Meta's Massively Multilingual Speech project supports 1k languages using self supervised learning
    1 project | /r/DataCentricAI | 13 Jun 2023
    Github - https://github.com/facebookresearch/fairseq/tree/main/examples/mms Paper - https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/
  • AI — weekly megathread!
    2 projects | /r/artificial | 26 May 2023
    Meta released a new open-source model, Massively Multilingual Speech (MMS) that can do both speech-to-text and text-to-speech in 1,107 languages and can also recognize 4,000+ spoken languages. Existing speech recognition models only cover approximately 100 languages out of the 7,000+ known spoken languages. [Details | Research Paper | GitHub].
  • Meta's MMS: Scaling Speech Technology to 1000+ languages (How to Run colab)
    1 project | /r/LanguageTechnology | 24 May 2023

What are some alternatives?

When comparing qlora and fairseq you can also consider the following projects:

alpaca-lora - Instruct-tune LLaMA on consumer hardware

gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

ggml - Tensor library for machine learning

text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

alpaca_lora_4bit

espnet - End-to-End Speech Processing Toolkit

llm-foundry - LLM training code for Databricks foundation models

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration