Python Compression

Open-source Python projects categorized as Compression

Top 23 Python Compression Projects

Compression
  1. DeepSpeed

    DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

    Project mention: All Data and AI Weekly #193 - June 9, 2025 | dev.to | 2025-06-09
  3. PaddleNLP

    Easy-to-use and powerful LLM and SLM library with awesome model zoo.

  4. BorgBackup

    Deduplicating archiver with compression and authenticated encryption.

    Project mention: Self-hosting a Mastodon Instance on a Hetzner Server | dev.to | 2025-08-10

    Combine regular PostgreSQL dumps with BorgBackup to create encrypted, deduplicated archives.
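
    The deduplication Borg performs can be illustrated with a toy sketch: split data into chunks, key each chunk by its hash, and store identical chunks only once. Borg itself uses content-defined chunking plus compression and authenticated encryption; this stdlib-only example (with made-up `store_chunks`/`restore` helpers) shows just the dedup idea:

```python
import hashlib

def store_chunks(data: bytes, store: dict, chunk_size: int = 4) -> list:
    """Split data into fixed-size chunks and store each chunk once,
    keyed by its SHA-256 digest. Returns the list of chunk IDs that
    reconstructs the original data."""
    ids = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # identical chunks stored once
        ids.append(digest)
    return ids

def restore(ids: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk IDs."""
    return b"".join(store[i] for i in ids)

store = {}
ids = store_chunks(b"abcdabcdabcd", store, chunk_size=4)
assert restore(ids, store) == b"abcdabcdabcd"
assert len(store) == 1  # three identical chunks deduplicated to one
```

    Repeated database dumps change little between runs, which is why chunk-level dedup keeps such archives small.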

  5. Crunch

    Insane(ly slow but wicked good) PNG image optimization (by chrissimpkins)

  6. aimet

    AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

  7. unblob

    Extract files from any kind of container format

  8. llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Project mention: What is currently the best LLM model for consumer grade hardware? Is it phi-4? | news.ycombinator.com | 2025-05-30

    At 16GB a Q4 quant of Mistral Small 3.1, or Qwen3-14B at FP8, will probably serve you best. You'd be cutting it a little close on context length due to the VRAM usage... If you want longer context, a Q4 quant of Qwen3-14B will be a bit dumber than FP8 but will leave you more breathing room.

    Going below Q4 isn't worth it IMO.

    Since you're on a Blackwell chip, using LLMs quantized to NVFP4 specifically will provide some speed improvements at some quality cost compared to FP8 (and will be faster than Q4 GGUF, although ~equally dumb). Ollama doesn't support NVFP4 yet, so you'd need to use vLLM (which isn't too hard, and will give better token throughput anyway). Finding pre-quantized models at NVFP4 will be more difficult since there's less-broad support, but you can use llmcompressor [1] to statically compress any LLM to NVFP4 locally — you'll probably need to use accelerate to offload params to CPU during the one-time compression process, which they have documentation for.

    I wouldn't reach for this particular power tool until you've decided on an LLM already, and just want faster perf, since it's a bit more involved than just using ollama. But if you land on a Q4 model, it's not a bad choice.

    1: https://github.com/vllm-project/llm-compressor
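
    The Q4-style quantization discussed above can be sketched in miniature. This toy version uses a single symmetric per-tensor scale; real formats (GGUF Q4, FP8, NVFP4) quantize per block with calibrated scales and packed storage, so treat this only as an illustration of the scale/rounding trade-off:

```python
def quantize_q4(weights):
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]
    using a single per-tensor scale. Real schemes use per-block scales;
    this is a simplified illustration."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

w = [0.12, -0.7, 0.35, 0.0]
q, s = quantize_q4(w)
w_hat = dequantize(q, s)
# reconstruction error is bounded by half the scale step
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

    Going to fewer bits shrinks the model and speeds up memory-bound inference, at the cost of exactly this kind of rounding error accumulating across layers.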

  10. Awesome-Efficient-LLM

    A curated list for Efficient Large Language Models

  11. model-optimization

    A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.

  12. gan-compression

    [CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs

  13. ratarmount

    Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives

    Project mention: Apache iceberg the Hadoop of the modern-data-stack? | news.ycombinator.com | 2025-03-06

    https://github.com/mxmlnkn/ratarmount

    > fsspec support:

    To use all fsspec features, either install via pip install ratarmount[fsspec] or pip install ratarmountcore[fsspec]. It should also suffice to simply pip install fsspec if ratarmountcore is already installed. The optional fsspec integration is threefold:

    Files can be specified on the command line via URLs pointing to remotes as explained in this section.
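
    To see why indexed access matters, note that the stdlib can already pull a single member out of a TAR without unpacking everything; ratarmount builds on this with a persistent index, seeking inside compressed archives, and a FUSE mount. A stdlib-only sketch (in-memory archive and file names are made up for the example):

```python
import io
import tarfile

# Build a small TAR archive in memory (stand-in for a large on-disk archive).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"hello from inside the archive"
    info = tarfile.TarInfo(name="docs/readme.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Read one member without extracting the whole archive -- the kind of
# random access ratarmount exposes through a mounted filesystem.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    data = tar.extractfile("docs/readme.txt").read()
assert data == b"hello from inside the archive"
```

    Plain `tarfile` still scans headers linearly; ratarmount's index makes such lookups fast even for multi-terabyte or compressed archives.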

  14. nncf

    Neural Network Compression Framework for enhanced OpenVINO™ inference

  15. compression

    Data compression in TensorFlow (by tensorflow)

  16. refinery

    High Octane Triage Analysis (by binref)

  17. swin2sr

    Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration. Advances in Image Manipulation (AIM) workshop, ECCV 2022. Try it out: over 3.3M runs at https://replicate.com/mv-lab/swin2sr

  18. zipfly

    Python Zip Stream
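
    A minimal stdlib version of the idea: build a ZIP in memory and hand the bytes to whatever transport you like. zipfly's value-add is yielding the archive incrementally so a server can stream it without holding the whole file in memory; this sketch (with a hypothetical `zip_bytes` helper) buffers it for simplicity:

```python
import io
import zipfile

def zip_bytes(files: dict) -> bytes:
    """Build a ZIP archive in memory from a {name: bytes} mapping."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

archive = zip_bytes({"a.txt": b"alpha", "b.txt": b"beta"})

# Round-trip check: the archive opens and members read back intact.
with zipfile.ZipFile(io.BytesIO(archive)) as zf:
    assert zf.read("a.txt") == b"alpha"
    assert sorted(zf.namelist()) == ["a.txt", "b.txt"]
```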

  19. KVQuant

    [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

  20. pythonlibs

    A Python wrapper for the extremely fast Blosc compression library

  21. SecretPixel

    SecretPixel is a cutting-edge steganography tool designed to securely conceal sensitive information within images. It stands out in the realm of digital steganography by combining advanced encryption, compression, and a seeded Least Significant Bit (LSB) technique to provide a robust solution for embedding data undetectably.
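
    The core LSB idea is simple to sketch: each message bit replaces the least significant bit of one carrier byte. SecretPixel layers encryption, compression, and a seeded (shuffled) bit order on top; this stdlib-only example shows only the embedding mechanics, with a plain byte string standing in for pixel data:

```python
def embed_lsb(carrier: bytes, message: bytes) -> bytes:
    """Hide each bit of `message` (LSB-first per byte) in the least
    significant bit of successive carrier bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit
    return bytes(out)

def extract_lsb(carrier: bytes, length: int) -> bytes:
    """Recover `length` message bytes from the carrier's LSBs."""
    bits = [b & 1 for b in carrier[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytes(range(64))       # stand-in for image pixel data
stego = embed_lsb(cover, b"hi")
assert extract_lsb(stego, 2) == b"hi"
assert len(stego) == len(cover)  # carrier size unchanged
```

    Flipping only the lowest bit changes each pixel value by at most 1, which is why LSB embedding is visually undetectable; the seeded ordering and encryption are what make it hard to detect statistically.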

  22. 3d-model-convert-to-gltf

    Convert 3D models (STL/IGES/STEP/OBJ/FBX) to glTF, with compression

  23. picollm

    On-device LLM Inference Powered by X-Bit Quantization

  24. DictDataBase

    A Python NoSQL dictionary database with concurrent access and ACID compliance

  25. npbackup

    A secure and efficient file backup solution that fits both system administrators (CLI) and end users (GUI)

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The mention count reflects repo mentions in the last 12 months, or since we started tracking (Dec 2020).


Python Compression related posts

  • Show HN: Yet Another Memory System for LLM's

    5 projects | news.ycombinator.com | 13 Aug 2025
  • AWS Restored My Account: The Human Who Made the Difference

    1 project | news.ycombinator.com | 7 Aug 2025
  • Chunking Attacks on File Backup Services Using Content-Defined Chunking [pdf]

    1 project | news.ycombinator.com | 22 Mar 2025
  • Archive-pdf-tools – library to create PDFs with MRC (Mixed Raster Content)

    1 project | news.ycombinator.com | 9 Mar 2025
  • Sutro Tower in 3D

    4 projects | news.ycombinator.com | 21 Feb 2025
  • Show HN: Ratarmount 1.0.0 – Rapid access to large archives via a FUSE filesystem

    2 projects | news.ycombinator.com | 1 Nov 2024
  • Ask HN: A better Criu Alternative for decompression software / Erlang?

    1 project | news.ycombinator.com | 15 Sep 2024

Index

What are some of the best open-source Compression projects in Python? This list will help you:

# Project Stars
1 DeepSpeed 39,922
2 PaddleNLP 12,752
3 BorgBackup 12,324
4 Crunch 3,404
5 aimet 2,437
6 unblob 2,350
7 llm-compressor 1,851
8 Awesome-Efficient-LLM 1,851
9 model-optimization 1,549
10 gan-compression 1,112
11 ratarmount 1,098
12 nncf 1,076
13 compression 894
14 refinery 766
15 swin2sr 645
16 zipfly 529
17 KVQuant 368
18 pythonlibs 357
19 SecretPixel 331
20 3d-model-convert-to-gltf 280
21 picollm 265
22 DictDataBase 245
23 npbackup 229

