Propan VS qlora

Compare Propan vs qlora and see what their differences are.

Propan

Propan is a powerful and easy-to-use Python framework for building event-driven applications that interact with any MQ Broker (by Lancetnik)

qlora

QLoRA: Efficient Finetuning of Quantized LLMs (by artidoro)
                    Propan               qlora
Mentions            16                   80
Stars               466                  9,432
Growth              -                    -
Activity            8.8                  7.4
Latest commit       about 1 month ago    7 months ago
Language            Python               Jupyter Notebook
License             MIT License          MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Propan

Posts with mentions or reviews of Propan. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-16.
  • FastStream: Python's framework for Efficient Message Queue Handling
    11 projects | dev.to | 16 Oct 2023
    Later, we discovered Propan, a library created by Nikita Pastukhov, which solved similar problems but for RabbitMQ. Recognizing the potential for collaboration, we joined forces with Nikita to build a unified library that could work seamlessly with both Kafka and RabbitMQ. And that's how FastStream came to be—a solution born out of the need for simplicity and efficiency in microservices development.
  • How we deprecated two successful projects and joined forces to create an even more successful one
    3 projects | dev.to | 9 Oct 2023
    The next step was to figure out what to do next. We posted questions on a few relevant subreddits and got quite a few feature requests, mostly around supporting other protocols, encoding schemas, etc. But we also got a message from the developer of a similar framework, Propan, which was released at about the same time and was gaining quite a lot of traction in the RabbitMQ community. That developer was Nikita Pastukhov, and he made an intriguing proposal: let's join our efforts and create one framework with the best features of both. Both projects were growing at roughly the same speed but targeted different communities, so the potential for double growth was there. After a quick consideration, we realized there was not much to lose and a lot to gain. Of course, we would lose absolute control over the project, but losing control to the community is the only way for an open-source project to succeed. On the positive side, we would gain a very skilled maintainer who had single-handedly created a similar framework. The frameworks were conceptually very similar, so we concluded there would not be much friction of ideas and we should be able to reach consensus on the most important design issues.
  • Introducing FastStream: the easiest way to write microservices for Apache Kafka and RabbitMQ in Python
    5 projects | /r/opensource | 29 Sep 2023
    FastStream simplifies the process of writing producers and consumers for message queues, handling all the parsing, networking, and documentation generation automatically. It is a new package based on the ideas and experiences gained from FastKafka and Propan. By joining forces, we took the best from both packages and created a unified way to write services capable of processing streamed data regardless of the underlying protocol. We'll continue to maintain both packages, but new development will be in this project.
  • FastStream: the easiest way to add Kafka and RabbitMQ support to FastAPI services
    4 projects | /r/FastAPI | 26 Sep 2023
    FastStream (https://github.com/airtai/faststream) is a new Python framework, born from the collaboration of the Propan and FastKafka teams (both are deprecated now). It greatly simplifies event-driven system development, handling all the parsing, networking, and documentation generation automatically. FastStream currently supports RabbitMQ and Kafka, but the list of supported brokers is constantly growing (NATS and Redis are coming soon). FastStream itself is a really great tool for building event-driven services. It also has a native FastAPI integration: just create a StreamRouter (very close to APIRouter) and register event handlers the same way as regular HTTP endpoints:
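    A minimal sketch of that integration, assuming FastStream's RabbitMQ FastAPI plugin and a local broker (the queue name and handler are illustrative, and details may vary between FastStream versions):

        from fastapi import FastAPI
        from faststream.rabbit.fastapi import RabbitRouter  # FastStream's StreamRouter for RabbitMQ

        # The router behaves much like FastAPI's APIRouter, but subscribes to message queues
        router = RabbitRouter("amqp://guest:guest@localhost:5672/")

        @router.subscriber("user-created")  # consume messages from the "user-created" queue
        async def handle_user_created(user_id: int) -> None:
            print(f"New user registered: {user_id}")

        app = FastAPI(lifespan=router.lifespan_context)
        app.include_router(router)  # the broker connects on application startup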
  • Propan – Python Framework for building messaging services has a big update
    1 project | news.ycombinator.com | 31 Jul 2023
    Hello everyone!

    Two months ago I told you about Propan - the Python framework for building messaging services on top of any message broker. A lot has changed since then, and I want to tell you about it again.

    First, we added Kafka, Redis Pub/Sub, SQS, and NATS JetStream support (in addition to RabbitMQ and regular NATS). Now you can interact with all of these brokers through the same Propan interfaces (see the short sketch after this post).

    Also, we added AsyncAPI schema autogeneration, so if you are using Propan you already have documentation for your services.

    And last (but not least) - Pydantic V2 support! You can use both V1 and V2, but V2 is much faster - it is the preferred way to write new services.

    By the way: we have a draft of a new Propan major version, so if you want to participate in the discussion and suggest a new feature, it is time to join our Discord and tell us about it!

    Propan: https://github.com/Lancetnik/Propan
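
    To illustrate the "same interfaces across brokers" idea from the post above, here is a minimal sketch based on Propan's README (the queue name and handler are illustrative; other broker classes expose the same decorator-based API):

        from propan import PropanApp, RabbitBroker

        # Connect to a local RabbitMQ instance; swapping the broker class keeps the same handler API
        broker = RabbitBroker("amqp://guest:guest@localhost:5672/")
        app = PropanApp(broker)

        @broker.handle("test-queue")  # subscribe to the "test-queue" queue
        async def base_handler(body: str):
            print(body)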

  • Looking for Python contributors to a new Messaging Framework
    1 project | /r/opensource | 4 Jul 2023
  • Help wanted: support for PR
    2 projects | /r/FastAPI | 6 Jun 2023
    It is also important for my own Propan package, which implements some custom routers.
  • FLaNK Stack Weekly 29 may 2023
    19 projects | dev.to | 30 May 2023
  • Propan is the best way to interact with SQS from Python
    1 project | /r/Python | 29 May 2023
    As you may know, I am developing the Propan framework to interact with various message brokers in a single, uniform way. When I published a post about the framework, users immediately asked, "When can we expect SQS support?" Now!
  • Propan 0.1.2 - new way to interact with Kafka from Python
    1 project | /r/Python | 23 May 2023
    A couple of days ago I wrote about the release of my framework for working with various message brokers - Propan!

qlora

Posts with mentions or reviews of qlora. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-30.
  • FLaNK Stack Weekly for 30 Oct 2023
    24 projects | dev.to | 30 Oct 2023
  • I released Marx 3B V3.
    1 project | /r/LocalLLaMA | 25 Oct 2023
    Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
  • Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
    2 projects | news.ycombinator.com | 26 Jul 2023
    https://github.com/artidoro/qlora

    The tools and mechanisms to get a model to do what you want are changing ever so quickly. Build and understand a notebook yourself, and reduce dependencies. You will need to switch them.

  • Yet another QLoRA tutorial
    2 projects | /r/LocalLLaMA | 24 Jul 2023
    My own project is still in raw generated form, and this makes me think about trying qlora's scripts, since it gives me some confidence I can get it to work now that someone else has carved a path and charted the map. I was going to target llamatune, which was mentioned here the other day.
  • Creating a new Finetuned model
    3 projects | /r/LocalLLaMA | 11 Jul 2023
    Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training (sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the minimum, openchat/openchat on Hugging Face; there are many more examples).
  • [R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
    2 projects | /r/MachineLearning | 4 Jul 2023
    4-bit quantization training mainly refers to qlora. Simply put, qlora quantizes the weights of the LLM into 4-bit for storage, while dequantizing them into 16-bit during the training process to ensure training precision. This method significantly reduces GPU memory overhead during training (the training speed should not vary much). This approach is highly suitable to be combined with parameter-efficient methods. However, the original paper was designed for single-modal LLMs and the code has already been wrapped in HuggingFace's library. Therefore, we extracted the core code from HuggingFace's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which is roughly a dozen lines of code.
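    For reference, a minimal sketch of that setup using the Hugging Face stack (the model name, LoRA rank, and target modules are illustrative, not taken from the post):

        import torch
        from transformers import AutoModelForCausalLM, BitsAndBytesConfig
        from peft import LoraConfig, get_peft_model

        # Store the frozen base weights in 4-bit NF4, dequantize to bf16 for compute
        bnb_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_use_double_quant=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
        model = AutoModelForCausalLM.from_pretrained(
            "huggyllama/llama-7b",  # illustrative base model
            quantization_config=bnb_config,
            device_map="auto",
        )

        # Attach small trainable LoRA adapters on top of the frozen 4-bit layers
        lora_config = LoraConfig(
            r=64, lora_alpha=16, lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, lora_config)
        model.print_trainable_parameters()  # only the adapter weights are trainable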
  • [D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
    2 projects | /r/MachineLearning | 3 Jul 2023
    There have been some new developments like QLoRA which help fine-tune LLMs without updating all the weights.
  • Finetune MPT-30B using QLORA
    2 projects | /r/LocalLLaMA | 3 Jul 2023
    This might be helpful: https://github.com/artidoro/qlora/issues/10
  • is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
    1 project | /r/LocalLLaMA | 29 Jun 2023
    Curious, since the QLoRA paper only reports the LoRA/QLoRA vs. full fine-tuning comparison for small 7B models; for 13B/33B/65B it does not (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more about the efficacy or disadvantages of LoRA.
  • Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
    1 project | /r/LocalLLaMA | 29 Jun 2023
    This might not be the appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train with QLoRA. You just prepare your own dataset and run the following command:

        python qlora.py --model_name_or_path --dataset="path/to/your/dataset" --dataset_format="self-instruct"

    This is only available for several dataset formats, but every dataset format has to have input-output pairs, so the dataset JSON has to look like this:

        [
          { "input": "something", "output": "something" },
          { "input": "something", "output": "something" }
        ]

What are some alternatives?

When comparing Propan and qlora you can also consider the following projects:

DB-GPT - AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents

alpaca-lora - Instruct-tune LLaMA on consumer hardware

faststream - FastStream is a powerful and easy-to-use Python framework for building asynchronous services interacting with event streams such as Apache Kafka, RabbitMQ, NATS and Redis.

GPTQ-for-LLaMa - 4-bit quantization of LLaMA using GPTQ

kafka-native - Kafka broker compiled to native using Quarkus and GraalVM.

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

fastgron - High-performance JSON to GRON (greppable, flattened JSON) converter

ggml - Tensor library for machine learning

bunny-storm - RabbitMQ asynchronous connector library for Python with built in RPC support

alpaca_lora_4bit

FastDepends - FastAPI Dependency Injection system extracted from FastAPI and cleared of all HTTP logic. Async and sync modes are both supported.

llm-foundry - LLM training code for Databricks foundation models