Top 23 Python Compression Projects
-
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
-
PaddleNLP
-
BorgBackup
Combine regular PostgreSQL dumps with BorgBackup to create encrypted, deduplicated archives.
-
Crunch
-
aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
-
unblob
-
llm-compressor
Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
Project mention: What is currently the best LLM model for consumer grade hardware? Is it phi-4? | news.ycombinator.com | 2025-05-30
At 16GB a Q4 quant of Mistral Small 3.1, or Qwen3-14B at FP8, will probably serve you best. You'd be cutting it a little close on context length due to the VRAM usage... If you want longer context, a Q4 quant of Qwen3-14B will be a bit dumber than FP8 but will leave you more breathing room.
Going below Q4 isn't worth it IMO.
Since you're on a Blackwell chip, using LLMs quantized to NVFP4 specifically will provide some speed improvements at some quality cost compared to FP8 (and will be faster than Q4 GGUF, although ~equally dumb). Ollama doesn't support NVFP4 yet, so you'd need to use vLLM (which isn't too hard, and will give better token throughput anyway). Finding pre-quantized models at NVFP4 will be more difficult since there's less-broad support, but you can use llmcompressor [1] to statically compress any LLM to NVFP4 locally — you'll probably need to use accelerate to offload params to CPU during the one-time compression process, which they have documentation for.
I wouldn't reach for this particular power tool until you've decided on an LLM already, and just want faster perf, since it's a bit more involved than just using ollama. But if you land on a Q4 model, it's not a bad choice.
1: https://github.com/vllm-project/llm-compressor
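The Q4 vs. FP8 tradeoff discussed above comes down to bits per weight versus rounding error. A minimal pure-Python sketch of symmetric 4-bit quantization may make that concrete; note this is illustrative only, not the actual GGUF Q4 or NVFP4 formats, which use per-block scales and packed storage:

```python
# Toy symmetric 4-bit quantization of one block of weights.
# Illustrative only: real formats (GGUF Q4, NVFP4) pack codes and
# keep per-block scales; this just shows the round-trip error.

def quantize_q4(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with one scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_q4(q, scale):
    """Recover approximate floats from the 4-bit codes."""
    return [v * scale for v in q]

block = [0.12, -0.5, 0.33, 0.07, -0.91, 0.44, 0.002, -0.26]
q, scale = quantize_q4(block)
restored = dequantize_q4(q, scale)
max_err = max(abs(a - b) for a, b in zip(block, restored))
print(q)        # signed 4-bit codes
print(max_err)  # worst-case error, bounded by scale / 2
```

Halving the bit width doubles the quantization step, which is why the thread above calls a Q4 model "a bit dumber" than the same model at FP8.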
-
Awesome-Efficient-LLM
-
model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
-
gan-compression
[CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs
-
ratarmount
Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives
Project mention: Apache iceberg the Hadoop of the modern-data-stack? | news.ycombinator.com | 2025-03-06
https://github.com/mxmlnkn/ratarmount
> fsspec support:
To use all fsspec features, install via pip install ratarmount[fsspec]. It should also suffice to simply pip install fsspec if ratarmountcore is already installed. The optional fsspec integration is threefold:
Files can be specified on the command line via URLs pointing to remotes as explained in this section.
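The random-access pattern ratarmount provides (reading one member of a large archive without extracting everything) can be approximated for plain TARs with the standard library alone; ratarmount's contribution is persisting the member index so huge or compressed archives don't need a full scan every time. A stdlib-only sketch, not ratarmount's actual API:

```python
import io
import tarfile

# Build a small TAR in memory, then read one member back without
# extracting the whole archive -- the access pattern that
# ratarmount's persisted index makes fast for huge archives.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("a.txt", b"alpha"), ("b.txt", b"bravo")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("b.txt")        # index lookup by name
    data = tar.extractfile(member).read()  # seek + read just this member

print(data)  # b'bravo'
```

With ratarmount the same idea is exposed as a FUSE filesystem, so ordinary tools can open archive members as regular files.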
-
nncf
-
compression
-
refinery
-
swin2sr
[ECCV 2022] Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration. Advances in Image Manipulation (AIM) workshop, ECCV 2022. Try it out: over 3.3M runs at https://replicate.com/mv-lab/swin2sr
-
zipfly
-
KVQuant
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
-
pythonlibs
-
SecretPixel
SecretPixel is a cutting-edge steganography tool designed to securely conceal sensitive information within images. It stands out in the realm of digital steganography by combining advanced encryption, compression, and a seeded Least Significant Bit (LSB) technique to provide a robust solution for embedding data undetectably.
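The seeded least-significant-bit idea described above can be illustrated without any image library: each payload bit replaces the LSB of one carrier byte, and a seeded PRNG decides which bytes, so extraction requires the same seed. A toy sketch over a plain bytearray, with hypothetical helper names that are not SecretPixel's API (and none of its encryption or compression layers):

```python
import random

def lsb_embed(carrier: bytearray, payload: bytes, seed: int) -> bytearray:
    """Hide payload bits in the LSBs of seeded-random carrier positions."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    # Same seed -> same sequence of positions on embed and extract.
    positions = random.Random(seed).sample(range(len(carrier)), len(bits))
    out = bytearray(carrier)
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the LSB
    return out

def lsb_extract(carrier: bytes, n_bytes: int, seed: int) -> bytes:
    """Recover n_bytes of payload using the embedding seed."""
    positions = random.Random(seed).sample(range(len(carrier)), n_bytes * 8)
    bits = [carrier[pos] & 1 for pos in positions]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytearray(range(256)) * 4          # stand-in for raw pixel data
stego = lsb_embed(cover, b"secret", seed=42)
print(lsb_extract(stego, 6, seed=42))      # b'secret'
```

Because only LSBs change, the carrier is visually indistinguishable from the original; the seed scatters the payload so it cannot be read off sequentially.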
-
3d-model-convert-to-gltf
-
picollm
-
DictDataBase
-
npbackup
A secure and efficient file backup solution that fits both system administrators (CLI) and end users (GUI)
Python Compression related posts
-
Show HN: Yet Another Memory System for LLM's
-
AWS Restored My Account: The Human Who Made the Difference
-
Chunking Attacks on File Backup Services Using Content-Defined Chunking [pdf]
-
Archive-pdf-tools – library to create PDFs with MRC (Mixed Raster Content)
-
Sutro Tower in 3D
-
Show HN: Ratarmount 1.0.0 – Rapid access to large archives via a FUSE filesystem
-
Ask HN: A better Criu Alternative for decompression software / Erlang?
-
Index
What are some of the best open-source Compression projects in Python? This list will help you:
# | Project | Stars |
---|---|---|
1 | DeepSpeed | 39,922 |
2 | PaddleNLP | 12,752 |
3 | BorgBackup | 12,324 |
4 | Crunch | 3,404 |
5 | aimet | 2,437 |
6 | unblob | 2,350 |
7 | llm-compressor | 1,851 |
8 | Awesome-Efficient-LLM | 1,851 |
9 | model-optimization | 1,549 |
10 | gan-compression | 1,112 |
11 | ratarmount | 1,098 |
12 | nncf | 1,076 |
13 | compression | 894 |
14 | refinery | 766 |
15 | swin2sr | 645 |
16 | zipfly | 529 |
17 | KVQuant | 368 |
18 | pythonlibs | 357 |
19 | SecretPixel | 331 |
20 | 3d-model-convert-to-gltf | 280 |
21 | picollm | 265 |
22 | DictDataBase | 245 |
23 | npbackup | 229 |