Top 23 Python Lora Projects
-
I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, which is a GPTQ-format, 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B for GPU inference.
-
It depends on which model you want to train, and on how well you want your computer to keep working while you're doing it.
If you're interested in large language models there's a table of vram requirements for fine-tuning at [1] which says you could do the most basic type of fine-tuning on a 7B parameter model with 8GB VRAM.
You'll find that training takes quite a long time, and since most of the GPU's capacity goes to training, your computer's responsiveness will suffer; even basic things like scrolling in your web browser or changing tabs use the GPU, after all.
Spend a bit more and you'll probably have a better time.
[1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...
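As a back-of-envelope check on those VRAM numbers, here's a rough estimator. This is my own sketch, not anything from LLaMA-Factory; the per-parameter byte counts and the ~1% trainable fraction are assumptions typical of LoRA-style setups:

```python
def estimate_finetune_vram_gb(n_params_b, weight_bytes=2.0,
                              trainable_fraction=0.01,
                              optimizer_bytes_per_param=8):
    """Rough lower bound on VRAM for adapter-style fine-tuning.

    Assumes the base weights are frozen and loaded at `weight_bytes` per
    parameter (2.0 for fp16, 0.5 for 4-bit QLoRA), with Adam optimizer
    state kept only for the small trainable fraction (the LoRA adapters).
    Activations are ignored; they depend on batch size and sequence length.
    """
    base = n_params_b * 1e9 * weight_bytes
    trainable = n_params_b * 1e9 * trainable_fraction
    optimizer = trainable * optimizer_bytes_per_param
    gradients = trainable * weight_bytes
    return (base + optimizer + gradients) / 1024**3

# A 7B model in 4-bit with ~1% trainable parameters stays under an 8 GB
# budget, while fp16 base weights alone already overflow it.
print(round(estimate_finetune_vram_gb(7, weight_bytes=0.5), 1))  # ~3.8
print(round(estimate_finetune_vram_gb(7, weight_bytes=2.0), 1))  # ~13.7
```

The point of the exercise: the frozen base weights dominate, which is why quantizing them (QLoRA) is what gets a 7B model under 8 GB, not shrinking the adapters.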
-
Project mention: DECT NR+: A technical dive into non-cellular 5G | news.ycombinator.com | 2024-04-02
This seems to be an order of magnitude better than LoRa (https://lora-alliance.org/, not https://arxiv.org/abs/2106.09685). LoRa doesn't have the features this one does, like OFDM, TDM, FDM, and HARQ. I didn't know there was spectrum dedicated for DECT use.
-
xTuring
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07
Explore the project on GitHub here.
-
Project mention: Ask HN: AI/ML papers to catch up with current state of AI? | news.ycombinator.com | 2023-12-15
LongAlpaca / One of many ways to extend context, and a useful dataset / https://arxiv.org/abs/2309.12307
-
Reticulum
The cryptography-based networking stack for building unstoppable networks with LoRa, Packet Radio, WiFi and everything in between.
Project mention: Meshtastic: An open source, off-grid, decentralized, mesh network | news.ycombinator.com | 2023-12-31
Any views or comparisons regarding FreakWAN versus Reticulum (https://github.com/markqvist/Reticulum)?
-
Project mention: Accelerating Stable Video Diffusion 3x Faster with OneDiff DeepCache and Int8 | news.ycombinator.com | 2024-01-29
--output-video path/to/output_image.mp4
Run with ComfyUI
Run with OneDiff workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...
Run with OneDiff + DeepCache workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...
Int8 usage is shown in this workflow: https://github.com/siliconflow/onediff/blob/main/onediff_com...
-
Project mention: Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer | dev.to | 2024-03-25
Used SG161222/RealVisXL_V4.0 as the base model and OneTrainer to train on Windows 10: https://github.com/Nerogar/OneTrainer
-
Project mention: Punica: Serving multiple LoRA finetuned LLM as one | news.ycombinator.com | 2023-11-08
-
Lora-for-Diffusers
The easiest-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers 🔥
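For readers new to the technique, the core of LoRA fits in a few lines of NumPy. This is an illustrative sketch of the math only, not the diffusers API; the dimensions, rank, and alpha here are arbitrary choices:

```python
import numpy as np

# LoRA: the frozen weight W is augmented by a low-rank update B @ A,
# scaled by alpha / r. Only A and B are trained, so the trainable
# parameter count is r * (d_in + d_out) instead of d_in * d_out.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 768, 768, 8, 16

W = rng.standard_normal((d_in, d_out))      # frozen pretrained weight
A = rng.standard_normal((d_in, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d_out))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus low-rank adapter path; with B = 0 the adapter is a
    # no-op, so training starts exactly from the pretrained behaviour.
    return x @ W + (x @ A @ B) * (alpha / r)

x = rng.standard_normal((4, d_in))
assert np.allclose(lora_forward(x), x @ W)  # zero-init B changes nothing
```

The zero-initialized `B` is the standard trick from the LoRA paper: at step zero the adapted model is identical to the base model, and training only gradually moves it away.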
-
Project mention: Show HN: Toolkit for LLM Fine-Tuning, Ablating and Testing | news.ycombinator.com | 2024-04-07
-
-
LLaMA-LoRA-Tuner
UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. Includes a Gradio ChatGPT-like chat UI to demonstrate your language models.
Project mention: [P] Uptraining a pretrained model using company data? | /r/MachineLearning | 2023-05-25
-
-
Sideband
LXMF client for Android, Linux and macOS allowing you to communicate with people or LXMF-compatible systems over Reticulum networks using LoRa, Packet Radio, WiFi, I2P, or anything else Reticulum supports.
Project mention: Meshtastic: An open source, off-grid, decentralized, mesh network | news.ycombinator.com | 2023-12-31
yggdrasil can use WiFi on Android, though I haven't tried it yet - https://yggdrasil-network.github.io/. yggdrasil lets you use TCP/IP applications over its mesh network but doesn't offer any end-user functionality itself.
Manyverse can use WiFi for decentralised social networking - https://www.manyver.se/. They're currently in the middle of a rewrite of the backend and a protocol switch away from Secure Scuttlebutt to their own protocol currently named PPPPP.
Reticulum/Sideband offers a P2P messaging system over WiFi or other mediums - https://github.com/markqvist/sideband
-
Project mention: Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience? | /r/LocalLLaMA | 2023-12-06
I want to train a Code LLaMA on some data, and I am looking for a Framework or Technique to train this on my PC with a 3090 Ti in it. In my research, I stumbled across the paper "ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU" https://arxiv.org/abs/2312.02515 with this GitHub project: https://github.com/TUDB-Labs/multi-lora-fine-tune.
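The shared idea behind punica and ASPEN-style multi-LoRA systems is that one frozen base weight can serve or train many adapters within the same batch. A toy NumPy sketch of that batched-adapter lookup (hypothetical shapes and names, not either project's actual API or kernels):

```python
import numpy as np

# One frozen base weight W is shared; each request in a batch selects its
# own (A, B) adapter pair by index, so many LoRA variants run on one GPU.
rng = np.random.default_rng(1)
d, r, n_adapters = 64, 4, 3

W = rng.standard_normal((d, d))                    # shared frozen weight
A = rng.standard_normal((n_adapters, d, r)) * 0.01 # per-adapter down-proj
B = rng.standard_normal((n_adapters, r, d)) * 0.01 # per-adapter up-proj

def batched_lora(x, adapter_ids):
    # x: (batch, d); adapter_ids: (batch,) picks a LoRA per request.
    delta = np.einsum('bi,bir,bro->bo', x, A[adapter_ids], B[adapter_ids])
    return x @ W + delta

x = rng.standard_normal((5, d))
ids = np.array([0, 2, 1, 0, 2])
out = batched_lora(x, ids)
assert out.shape == (5, d)
```

The real systems implement this gather-and-multiply as a custom CUDA kernel so the base `x @ W` matmul stays dense and fast; the sketch above only shows the data layout.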
-
-
LLaMA-8bit-LoRA
Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.
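For context on what "8-bit quantization" buys here, a toy symmetric int8 round-trip is shown below. This is my own illustration of the general idea, unrelated to the repo's actual bitsandbytes internals:

```python
import numpy as np

# Symmetric int8 quantization: the frozen base weights can be stored at
# 1 byte per parameter, while the small LoRA adapters stay in full
# precision, so quantization error on the frozen weights stays bounded.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).standard_normal((256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
assert err <= s / 2 + 1e-6  # round-to-nearest error is at most half a step
```

Per-tensor scaling like this is the crudest variant; real int8/4-bit schemes use per-channel or block-wise scales to keep outlier weights from inflating the step size.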
-
Project mention: Fanmade Subreddit for the Github AI Video project VisionCrafter | /r/visioncrafter | 2023-08-09
git clone https://github.com/diStyApps/VisionCrafter
-
-
Hi, I have created custom data in the same format as the Alpaca JSON file, and fine-tuned mpt-7b-instruct using this link: https://github.com/leehanchung/lora-instruct. I have also used your patch. The fine-tuning completed successfully and the loss decreased, but when I try to make predictions with the fine-tuned model I am not getting correct output, even on the training data; it generates a lot of nonsense.
-
-
Python Lora related posts
- Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer
- You can now train a 70B language model at home
- Reticulum: An encrypted mesh network stack
- Aurelian: 70B 32K story-writing (and more) [Alpha]
- Need explanation with training
- Why train on Yi 4K instead of 200K?
- A concept for all-in-one electronics
-
Index
What are some of the best open-source Lora projects in Python? This list will help you find them:
# | Project | Stars |
---|---|---|
1 | Chinese-LLaMA-Alpaca | 17,140 |
2 | LLaMA-Factory | 16,319 |
3 | peft | 13,670 |
4 | LoRA | 8,890 |
5 | xTuring | 2,510 |
6 | LongLoRA | 2,417 |
7 | Reticulum | 1,519 |
8 | onediff | 1,094 |
9 | OneTrainer | 1,076 |
10 | punica | 801 |
11 | Lora-for-Diffusers | 696 |
12 | LLM-Finetuning-Toolkit | 650 |
13 | NomadNet | 424 |
14 | LLaMA-LoRA-Tuner | 420 |
15 | OneDiffusion | 315 |
16 | Sideband | 222 |
17 | multi-lora-fine-tune | 172 |
18 | kohya-sd-scripts-webui | 167 |
19 | LLaMA-8bit-LoRA | 145 |
20 | VisionCrafter | 128 |
21 | RNode_Firmware | 123 |
22 | lora-instruct | 96 |
23 | Dreambooth | 94 |