Python Lora

Open-source Python projects categorized as Lora

Top 23 Python Lora Projects

  • Chinese-LLaMA-Alpaca

    Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

    Project mention: Chinese-Alpaca-Plus-13B-GPTQ | /r/LocalLLaMA | 2023-05-30

    I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, which is the GPTQ-format 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B, intended for GPU inference.
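
For readers wondering what the GPTQ part buys you: 4-bit quantisation shrinks the weights so the model fits in GPU memory. Below is a minimal, dependency-free sketch of the underlying round-trip only; GPTQ itself additionally corrects for rounding error weight by weight, and the function names and group size here are illustrative, not taken from the project.

```python
def quantize_4bit(weights, group_size=4):
    """Quantise a flat list of floats to signed 4-bit ints, one scale per group."""
    quantized, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = max(abs(w) for w in group) / 7 or 1.0  # map into the -7..7 range
        scales.append(scale)
        quantized.extend(round(w / scale) for w in group)
    return quantized, scales

def dequantize_4bit(quantized, scales, group_size=4):
    """Recover approximate floats from the 4-bit ints and per-group scales."""
    return [q * scales[i // group_size] for i, q in enumerate(quantized)]

weights = [0.12, -0.54, 0.33, 0.08, 1.2, -0.9, 0.0, 0.4]
packed, scales = quantize_4bit(weights)
restored = dequantize_4bit(packed, scales)  # close to the originals, 4 bits each
```

Each weight now costs 4 bits plus a shared per-group scale, roughly a 4x saving over fp16, at the price of small rounding error.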

  • LLaMA-Factory

    Unify Efficient Fine-Tuning of 100+ LLMs

    Project mention: Show HN: GPU Prices on eBay | news.ycombinator.com | 2024-02-23

    Depends what model you want to train, and how well you want your computer to keep working while you're doing it.

    If you're interested in large language models there's a table of vram requirements for fine-tuning at [1] which says you could do the most basic type of fine-tuning on a 7B parameter model with 8GB VRAM.

    You'll find that training takes quite a long time, and as a lot of the GPU power is going on training, your computer's responsiveness will suffer - even basic things like scrolling in your web browser or changing tabs uses the GPU, after all.

    Spend a bit more and you'll probably have a better time.

    [1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...
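
The 8GB figure can be sanity-checked with back-of-envelope arithmetic. A rough sketch, using assumed rule-of-thumb constants (fp16 gradients, two fp32 Adam moments) and ignoring activations and framework overhead; these are illustrative estimates, not LLaMA-Factory's exact accounting:

```python
def finetune_vram_gb(params_billion, weight_bits, trainable_fraction=1.0):
    """Weights + fp16 gradients + fp32 Adam moments for the trainable part."""
    weights = params_billion * weight_bits / 8        # GB to hold the loaded weights
    trainable = params_billion * trainable_fraction   # billions of trainable params
    grads = trainable * 2                             # fp16 gradients, 2 bytes each
    optimizer = trainable * 8                         # two fp32 Adam moments
    return weights + grads + optimizer

full_fp16 = finetune_vram_gb(7, 16)       # full fine-tune of a 7B model: ~84 GB
qlora = finetune_vram_gb(7, 4, 0.01)      # 4-bit base + ~1% LoRA params: ~4.2 GB
```

This is why plain fine-tuning of even a 7B model is out of reach for consumer cards, while a 4-bit base plus a small LoRA adapter fits under 8GB.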

  • peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

    Project mention: LoftQ: LoRA-fine-tuning-aware Quantization | news.ycombinator.com | 2023-12-19

  • LoRA

    Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
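
The paper's core trick fits in a few lines: keep the pretrained weight W frozen and learn only a low-rank delta, scaled by alpha/r. A plain-Python sketch for illustration (loralib itself implements this as PyTorch layers; the names and shapes below are mine, not the library's):

```python
def matmul(X, Y):
    """Naive matrix multiply on nested lists, to stay dependency-free."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(x, W, A, B, alpha=16, r=1):
    """y = x @ W + (alpha / r) * (x @ A) @ B; W is frozen, only A and B train."""
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[b + scale * d for b, d in zip(br, dr)] for br, dr in zip(base, delta)]

x = [[1.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
A = [[0.3], [0.1]]             # 2x1 down-projection (rank r=1)
B = [[0.0, 0.0]]               # 1x2 up-projection, zero-initialised as in the paper
y = lora_forward(x, W, A, B)   # equals x @ W while B is still zero
```

Because B starts at zero, training begins from exactly the pretrained model, and only the small A and B matrices (2x1 + 1x2 = 4 values here, versus 4 in W for this toy size; the savings grow with dimension) need gradients.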

    Project mention: DECT NR+: A technical dive into non-cellular 5G | news.ycombinator.com | 2024-04-02

    This seems to be an order of magnitude better than LoRa (https://lora-alliance.org/ not https://arxiv.org/abs/2106.09685). LoRa doesn't have all the features this one does like OFDM, TDM, FDM, and HARQ. I didn't know there's spectrum dedicated for DECT use.

  • xTuring

    Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6

    Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07

    Explore the project on GitHub here.

  • LongLoRA

    Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)

    Project mention: Ask HN: AI/ML papers to catch up with current state of AI? | news.ycombinator.com | 2023-12-15

    LongAlpaca: one of many ways to extend context, and a useful dataset (https://arxiv.org/abs/2309.12307)
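
LongLoRA's main efficiency trick, shifted sparse attention (S²-Attn), is easy to sketch: attention runs within short groups of tokens, and some heads shift their group boundaries by half a group so information still flows across neighbouring groups. An illustrative plain-Python index sketch, not the project's implementation:

```python
def attention_groups(num_tokens, group_size, shifted):
    """Return the token-index groups one attention head attends within."""
    idx = list(range(num_tokens))
    if shifted:                        # roll the sequence by half a group
        half = group_size // 2
        idx = idx[half:] + idx[:half]
    return [idx[i:i + group_size] for i in range(0, num_tokens, group_size)]

plain = attention_groups(8, 4, shifted=False)   # [[0,1,2,3], [4,5,6,7]]
shift = attention_groups(8, 4, shifted=True)    # [[2,3,4,5], [6,7,0,1]]
```

Each head's cost scales with the group size rather than the full context length, which is what makes long-context fine-tuning affordable.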

  • Reticulum

    The cryptography-based networking stack for building unstoppable networks with LoRa, Packet Radio, WiFi and everything in between.

    Project mention: Meshtastic: An open source, off-grid, decentralized, mesh network | news.ycombinator.com | 2023-12-31

    Any views/comparisons regarding FreakWAN versus Reticulum (https://github.com/markqvist/Reticulum)?

  • onediff

    OneDiff: An out-of-the-box acceleration library for diffusion models.

    Project mention: Accelerating Stable Video Diffusion 3x Faster with OneDiff DeepCache and Int8 | news.ycombinator.com | 2024-01-29

    --output-video path/to/output_image.mp4

    Run with ComfyUI

    Run with OneDiff workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...

    Run with OneDiff + DeepCache workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...

    The use of Int8 can be referenced in the workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...

  • OneTrainer

    OneTrainer is a one-stop solution for all your stable diffusion training needs.

    Project mention: Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer | dev.to | 2024-03-25

    Used SG161222/RealVisXL_V4.0 as a base model and OneTrainer to train on Windows 10: https://github.com/Nerogar/OneTrainer

  • punica

    Serving multiple LoRA finetuned LLM as one

    Project mention: Punica: Serving multiple LoRA finetuned LLM as one | news.ycombinator.com | 2023-11-08
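
The idea punica implements is that many LoRA fine-tunes share one frozen base model, so a batch can mix requests for different adapters while sharing the expensive base computation. A toy sketch with scalar stand-ins for the weight matrices (punica does this with a batched CUDA kernel; the adapter names and values here are made up):

```python
BASE_W = 2.0                  # shared, frozen base weight (scalar stand-in)

ADAPTERS = {                  # adapter id -> (a, b) low-rank factor pair
    "math": (0.5, 0.2),
    "code": (1.0, -0.1),
}

def serve_batch(batch):
    """Each request names its adapter; the base computation is shared."""
    outputs = []
    for x, adapter_id in batch:
        a, b = ADAPTERS[adapter_id]
        outputs.append(x * BASE_W + x * a * b)   # base + per-request LoRA delta
    return outputs

outputs = serve_batch([(1.0, "math"), (1.0, "code")])  # one model, two fine-tunes
```

Since the adapters are tiny relative to the base weights, serving N fine-tunes this way costs close to serving one model rather than N.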
  • Lora-for-Diffusers

    The most easy-to-understand tutorial for using LoRA (Low-Rank Adaptation) within diffusers framework for AI Generation Researchers🔥

  • LLM-Finetuning-Toolkit

    Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.

    Project mention: Show HN: Toolkit for LLM Fine-Tuning, Ablating and Testing | news.ycombinator.com | 2024-04-07

  • NomadNet

    Communicate Freely

  • LLaMA-LoRA-Tuner

    UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT-like Chat UI to demonstrate your language models.

    Project mention: [P] Uptraining a pretrained model using company data? | /r/MachineLearning | 2023-05-25

  • OneDiffusion

    OneDiffusion: Run any Stable Diffusion models and fine-tuned weights with ease

    Project mention: OneDiffusion | news.ycombinator.com | 2023-08-22

  • Sideband

    LXMF client for Android, Linux and macOS allowing you to communicate with people or LXMF-compatible systems over Reticulum networks using LoRa, Packet Radio, WiFi, I2P, or anything else Reticulum supports.

    Project mention: Meshtastic: An open source, off-grid, decentralized, mesh network | news.ycombinator.com | 2023-12-31

    yggdrasil can use WiFi on Android, I haven't tried it yet - https://yggdrasil-network.github.io/. yggdrasil gives you the ability to use TCP/IP applications over its mesh network but doesn't offer any end-user functionality itself.

    Manyverse can use WiFi for decentralised social networking - https://www.manyver.se/. They're currently in the middle of a rewrite of the backend and a protocol switch away from Secure Scuttlebutt to their own protocol currently named PPPPP.

    Reticulum/Sideband offers a P2P messaging system over WiFi or other mediums - https://github.com/markqvist/sideband

  • multi-lora-fine-tune

    Provide Efficient LLM Fine-Tune via Multi-LoRA Optimization

    Project mention: Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience? | /r/LocalLLaMA | 2023-12-06

    I want to train a Code LLaMA on some data, and I am looking for a Framework or Technique to train this on my PC with a 3090 Ti in it. In my research, I stumbled across the paper "ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU" https://arxiv.org/abs/2312.02515 with this GitHub project: https://github.com/TUDB-Labs/multi-lora-fine-tune.

  • kohya-sd-scripts-webui

    Gradio wrapper for sd-scripts by kohya

  • LLaMA-8bit-LoRA

    Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.

  • VisionCrafter

    Craft your visions

    Project mention: Fanmade Subreddit for the Github AI Video project VisionCrafter | /r/visioncrafter | 2023-08-09

    git clone https://github.com/diStyApps/VisionCrafter

  • RNode_Firmware

    Firmware for the RNode radio interface

  • lora-instruct

    Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA

    Project mention: Training a LoRA with MPT Models | /r/LocalLLaMA | 2023-05-09

    Hi, I have created custom data in the same format as the Alpaca JSON file, and fine-tuned mpt-7b-instruct using this link: https://github.com/leehanchung/lora-instruct. I also used your patch; the fine-tuning was successful and the loss decreased, but when I try to make predictions with the fine-tuned model I'm not getting correct output, even on the training data: it generates lots of nonsense.

  • Dreambooth

    Fine-tuning of diffusion models

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates how often a repo was mentioned in the last 12 months, or since we started tracking (Dec 2020). The latest post mention was on 2024-04-07.

Index

What are some of the best open-source Lora projects in Python? This list will help you:

#   Project                  Stars
1   Chinese-LLaMA-Alpaca    17,140
2   LLaMA-Factory           16,319
3   peft                    13,670
4   LoRA                     8,890
5   xTuring                  2,510
6   LongLoRA                 2,417
7   Reticulum                1,519
8   onediff                  1,094
9   OneTrainer               1,076
10  punica                     801
11  Lora-for-Diffusers         696
12  LLM-Finetuning-Toolkit     650
13  NomadNet                   424
14  LLaMA-LoRA-Tuner           420
15  OneDiffusion               315
16  Sideband                   222
17  multi-lora-fine-tune       172
18  kohya-sd-scripts-webui     167
19  LLaMA-8bit-LoRA            145
20  VisionCrafter              128
21  RNode_Firmware             123
22  lora-instruct               96
23  Dreambooth                  94