airoboros VS gptqlora

Compare airoboros vs gptqlora and see what their differences are.

airoboros

Customizable implementation of the self-instruct paper. (by jondurbin)

gptqlora

GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ (by qwopqwop200)
                 airoboros               gptqlora
Mentions         8                       2
Stars            940                     94
Growth           -                       -
Activity         8.7                     7.6
Last commit      about 2 months ago      11 months ago
Language         Python                  Python
License          Apache License 2.0      MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

airoboros

Posts with mentions or reviews of airoboros. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.

gptqlora

Posts with mentions or reviews of gptqlora. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-02.
  • (2/2) May 2023
    14 projects | /r/dailyainews | 2 Jun 2023
    GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ (https://github.com/qwopqwop200/gptqlora/tree/main)
  • GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ
    1 project | /r/LocalLLaMA | 24 May 2023
    The difference from QLoRA is that GPTQ is used instead of NF4 (Normal Float4) + DQ (Double Quantization) for model quantization. The advantage is that you can expect better performance, because GPTQ provides better quantization than conventional bitsandbytes. The downside is that it is a one-shot quantization methodology, so it is less convenient than bitsandbytes and, unlike bitsandbytes, not universal. I'm still experimenting, but it seems to work. At the least, I hope it can be another option for people using LoRA. https://github.com/qwopqwop200/gptqlora/tree/main
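To make the quantization comparison above concrete, here is a minimal, self-contained sketch of block-wise 4-bit round-to-nearest quantization, the basic idea that both schemes build on. This is an illustration only: real GPTQ uses a one-shot, Hessian-aware rounding scheme, and NF4 uses a non-uniform code book plus double quantization of the scales, neither of which is shown here. All function names are hypothetical.

```python
# Simplified illustration of block-wise 4-bit quantization (hypothetical helpers).
# Real GPTQ rounds weights one-shot using second-order (Hessian) information;
# NF4 uses a non-uniform code book with double-quantized scales. This sketch
# only shows the shared core idea: 4-bit integer codes plus a per-block scale.

def quantize_4bit(weights, block_size=4):
    """Quantize a flat list of floats to signed 4-bit codes with per-block absmax scales."""
    codes, scales = [], []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        # Map the block's absolute maximum onto the integer range [-7, 7].
        scale = max(abs(w) for w in block) / 7 or 1.0
        scales.append(scale)
        codes.append([round(w / scale) for w in block])
    return codes, scales

def dequantize_4bit(codes, scales):
    """Reconstruct approximate float weights from codes and per-block scales."""
    out = []
    for block, scale in zip(codes, scales):
        out.extend(c * scale for c in block)
    return out

if __name__ == "__main__":
    weights = [0.12, -0.53, 0.07, 0.91, -0.02, 0.44, -0.66, 0.10]
    codes, scales = quantize_4bit(weights)
    restored = dequantize_4bit(codes, scales)
    max_err = max(abs(a - b) for a, b in zip(weights, restored))
    print(codes)    # every code fits in 4 signed bits
    print(max_err)  # small per-weight reconstruction error
```

In a QLoRA- or GPTQLoRA-style setup, the frozen base weights are stored in such a 4-bit form while the small LoRA adapter matrices stay in higher precision and receive all the gradient updates.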

What are some alternatives?

When comparing airoboros and gptqlora you can also consider the following projects:

WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath

tree-of-thoughts - Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that elevates model reasoning by at least 70%

TinyLlama - The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.

GirlfriendGPT - Girlfriend GPT is a Python project to build your own AI girlfriend using ChatGPT4.0

WizardVicunaLM - LLM that combines the principles of wizardLM and vicunaLM

chathub - All-in-one chatbot client

datablations - Scaling Data-Constrained Language Models

chain-of-thought-hub - Benchmarking large language models' complex reasoning ability with chain-of-thought prompting

guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]

gorilla - Gorilla: An API store for LLMs