Python Lora

Open-source Python projects categorized as Lora

Top 23 Python Lora Projects

  • LLaMA-Factory

    Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)

    Project mention: ORPO, DPO, and PPO: Optimizing Models for Human Preferences | dev.to | 2024-11-08

    Implementation: ORPO has been integrated into popular fine-tuning libraries like TRL, Axolotl, and LLaMA-Factory.

  • CodeRabbit

    CodeRabbit: AI Code Reviews for Developers. Revolutionize your code reviews with AI. CodeRabbit offers PR summaries, code walkthroughs, 1-click suggestions, and AST-based analysis. Boost productivity and code quality across all major languages with each PR.

  • unsloth

    Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory

    Project mention: Llama-3.3-70B-Instruct | news.ycombinator.com | 2024-12-06

    Hi,

    Yes you can. The community creates quantized variants of these that can run on consumer GPUs. A 4-bit quantization of LLaMA 70B works pretty well on MacBook Pros; the Neural Engine with unified CPU memory is quite solid for these. GPUs are a bit tougher because consumer GPU RAM is still kinda small.

    You can also fine-tune them. There are a lot of frameworks like unsloth that make this easier: https://github.com/unslothai/unsloth . Fine-tuning can be pretty tricky to get right; you need to be aware of things like learning rates, but there are good resources on the internet where a lot of hobbyists have gotten things working. You do not need a PhD in ML to accomplish this. You will, however, need data that you can represent textually.

    Source: Director of Engineering for model serving at Databricks.
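
    The memory arithmetic behind that advice is easy to sketch: a model's weight footprint is roughly parameter count times bytes per parameter, so quantizing a 70B model from 16-bit down to 4-bit shrinks the weights from about 140 GB to about 35 GB (ignoring activations and KV cache, which add overhead on top):

```python
def weight_footprint_gb(n_params: float, bits_per_param: int) -> float:
    """Weight-only memory estimate; activations and KV cache are extra."""
    return n_params * bits_per_param / 8 / 1e9

print(weight_footprint_gb(70e9, 16))  # fp16/bf16: 140.0 GB
print(weight_footprint_gb(70e9, 4))   # 4-bit quantized: 35.0 GB
```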

  • Chinese-LLaMA-Alpaca

    Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

  • peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

    Project mention: LoftQ: LoRA-fine-tuning-aware Quantization | news.ycombinator.com | 2023-12-19
  • LoRA

    Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
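
    The idea loralib implements can be sketched in a few lines of NumPy (illustrative shapes, not the library's API): the pretrained weight W stays frozen, and only a rank-r product B @ A is trained, scaled by alpha / r. Because B starts at zero, the adapted layer initially matches the base layer exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 8, 16     # illustrative sizes

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # B == 0, so no change at init
```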

    Project mention: Visually Multilingual: Introducing mcdse-2b | dev.to | 2024-10-27

    mcdse-2b is trained from MrLight/dse-qwen2-2b-mrl-v1 using low-rank adapters (LoRA) on a multilingual corpus of documents. I have trained it on 8xRTX3090 using the DSE approach with the following parameters:

  • LongLoRA

    Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)

    Project mention: Ask HN: AI/ML papers to catch up with current state of AI? | news.ycombinator.com | 2023-12-15

    LongAlpaca / One of many ways to extend context, and a useful dataset / https://arxiv.org/abs/2309.12307

  • xTuring

    Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6

  • SaaSHub

    SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives

  • lorax

    Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

    Project mention: LoRAX: Hot swap LoRA adapters to serve many finetuned models concurrently | news.ycombinator.com | 2024-02-01
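
    The reason this scales is that every fine-tune shares one copy of the base weights; each adapter contributes only its small A/B pair, so "hot swapping" amounts to selecting a different pair per request. A rough NumPy sketch (hypothetical adapter names, not LoRAX's actual API):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 256, 8
W = rng.standard_normal((d, d))  # base weight, loaded once and shared

# Each adapter stores ~2*d*r parameters instead of a full d*d copy of W.
adapters = {
    name: (rng.standard_normal((d, r)) * 0.01,
           rng.standard_normal((r, d)) * 0.01)
    for name in ("customer-a", "customer-b")
}

def serve(x, adapter_name):
    B, A = adapters[adapter_name]  # "hot swap": just a dictionary lookup
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d)
y_a = serve(x, "customer-a")
y_b = serve(x, "customer-b")
```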
  • Reticulum

    The cryptography-based networking stack for building unstoppable networks with LoRa, Packet Radio, WiFi and everything in between.

    Project mention: A Simple open-source Phone programmable with Arduino | news.ycombinator.com | 2024-10-19

    And maybe an integrated sound cable for Baofeng/Quansheng (K1<->USBA), USB-A as a power bank, and packet radio or http://reticulum.network

  • OneTrainer

    OneTrainer is a one-stop solution for all your stable diffusion training needs.

    Project mention: Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer | dev.to | 2024-03-25

    Used SG161222/RealVisXL_V4.0 as a base model and OneTrainer to train on Windows 10 : https://github.com/Nerogar/OneTrainer

  • NomadNet

    Communicate Freely

    Project mention: Nomad, communicate off-grid mesh, forward secrecy and extreme privacy | news.ycombinator.com | 2024-08-15
  • aphrodite-engine

    Large-scale LLM inference engine

  • punica

    Serving multiple LoRA finetuned LLM as one

  • LLM-Finetuning-Toolkit

    Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.

    Project mention: Show HN: Toolkit for LLM Fine-Tuning, Ablating and Testing | news.ycombinator.com | 2024-04-07
  • Lora-for-Diffusers

    An easy-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers🔥

  • DoRA

    [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (by NVlabs)

    Project mention: FLaNK-AIM Weekly 06 May 2024 | dev.to | 2024-05-06
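
    DoRA's decomposition can be sketched with NumPy (a simplified illustration of the paper's formula, not NVlabs' code): the weight splits into a per-column magnitude vector m and a direction that is fine-tuned with a LoRA-style update, then renormalized. With B zero-initialized, the decomposition reconstructs the original weight exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
d_out, d_in, r = 32, 64, 4
W0 = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
B = np.zeros((d_out, r))                   # LoRA up-projection, zero-init
A = rng.standard_normal((r, d_in)) * 0.01  # LoRA down-projection
m = np.linalg.norm(W0, axis=0)             # magnitude, init to column norms

def dora_weight():
    V = W0 + B @ A                            # direction gets the low-rank update
    return m * V / np.linalg.norm(V, axis=0)  # rescale columns to magnitude m

assert np.allclose(dora_weight(), W0)  # exact reconstruction at init
```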
  • LLaMA-LoRA-Tuner

    UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. Includes a Gradio ChatGPT-like chat UI to demonstrate your language models.

  • Sideband

    LXMF client for Android, Linux and macOS allowing you to communicate with people or LXMF-compatible systems over Reticulum networks using LoRa, Packet Radio, WiFi, I2P, or anything else Reticulum supports.

    Project mention: Nomad, communicate off-grid mesh, forward secrecy and extreme privacy | news.ycombinator.com | 2024-08-15

    Reticulum is incredibly versatile and has an entire ecosystem of tools under development. NomadNet is just one of the messengers. There is Sideband, a mobile app client (https://github.com/markqvist/Sideband), and Reticulum MeshChat, developed by Liam Cottle which is a browser based client https://github.com/liamcottle/reticulum-meshchat.

    Reticulum can work over anything that has a throughput greater than 5 bits a second (yes, bits) and an MDU of 500 bytes. Not only can it work over hundreds of different carriers, but each of these carriers can be a part of the same network.

    I threw together a quick proof of concept of it working over HF radio. I set up two nodes about 144 km (90 miles) apart. Both were ICOM-7300s with a Raspberry Pi 5 driving the software modem that would take packets from Reticulum and send them over the air. https://www.youtube.com/watch?v=blwNVumLujc

    Node 1 was out in the field while Node 2 was back at my house. Node 2 had two interfaces set up, one for the HF modem and another connected to the TCP testnet. This means that Node 1 could access any peer that was on the TCP testnet.

    Here is a quick primer on Reticulum that explains some of the basic concepts: https://www.youtube.com/watch?v=q8ltLt5SK6A
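
    Those floor numbers imply some patience at the extreme low end. Airtime for a packet is just payload bits divided by link rate, so a full 500-byte MDU at the 5 bit/s minimum takes 800 seconds, while a typical ~1 kbit/s LoRa link moves the same packet in about 4 seconds:

```python
def transmit_seconds(payload_bytes: int, bits_per_second: float) -> float:
    """Raw airtime for a payload at a given link rate (no protocol overhead)."""
    return payload_bytes * 8 / bits_per_second

print(transmit_seconds(500, 5))     # 800.0 s at Reticulum's 5 bit/s floor
print(transmit_seconds(500, 1000))  # 4.0 s on a ~1 kbit/s LoRa link
```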

  • BentoDiffusion

    BentoDiffusion: A collection of diffusion models served with BentoML

  • mLoRA

    An Efficient "Factory" to Build Multiple LoRA Adapters

  • RNode_Firmware

    RNode is an open, free and flexible digital radio interface with many uses

  • kohya-sd-scripts-webui

    Gradio wrapper for sd-scripts by kohya

  • LLaMA-8bit-LoRA

    Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).


Python Lora related posts

  • A Simple open-source Phone programmable with Arduino

    2 projects | news.ycombinator.com | 19 Oct 2024
  • Reticulum Is Unstoppable Networks for the People

    4 projects | news.ycombinator.com | 15 Aug 2024
  • Private, Secure and Uncensorable Messaging over a LoRa Mesh

    1 project | news.ycombinator.com | 15 Aug 2024
  • Nomad, communicate off-grid mesh, forward secrecy and extreme privacy

    10 projects | news.ycombinator.com | 15 Aug 2024
  • Reticulum Network Stack β – cryptography-based networking stack

    1 project | news.ycombinator.com | 15 Aug 2024
  • Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer

    1 project | dev.to | 25 Mar 2024
  • You can now train a 70B language model at home

    3 projects | news.ycombinator.com | 7 Mar 2024

Index

What are some of the best open-source Lora projects in Python? This list, ordered by GitHub stars, will help you find them:

Project Stars
1 LLaMA-Factory 35,732
2 unsloth 18,874
3 Chinese-LLaMA-Alpaca 18,466
4 peft 16,614
5 LoRA 10,890
6 LongLoRA 2,645
7 xTuring 2,618
8 lorax 2,235
9 Reticulum 2,116
10 OneTrainer 1,826
11 NomadNet 1,238
12 aphrodite-engine 1,159
13 punica 995
14 LLM-Finetuning-Toolkit 786
15 Lora-for-Diffusers 772
16 DoRA 658
17 LLaMA-LoRA-Tuner 448
18 Sideband 392
19 BentoDiffusion 340
20 mLoRA 280
21 RNode_Firmware 193
22 kohya-sd-scripts-webui 164
23 LLaMA-8bit-LoRA 147

