# pygfx vs numexpr

| | pygfx | numexpr |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 357 | 2,143 |
| Growth | 2.6% | 0.5% |
| Activity | 8.8 | 8.2 |
| Latest commit | 3 days ago | about 1 month ago |
| Language | Python | Python |
| License | BSD 2-clause "Simplified" License | MIT License |
- **Stars** - the number of stars that a project has on GitHub. **Growth** - month-over-month growth in stars.
- **Activity** - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
## pygfx
**Emerging Rust GUI libraries in a WASM world**
https://github.com/kushalkolar/fastplotlib
Alternatively, try pygfx for Three.js-style graphics in Python, leveraging wgpu. It works great in notebooks through jupyter_rfb. https://github.com/pygfx/pygfx

If you're adventurous, figure out how to make pygfx work with WebGPU via WASM.
**Chrome Ships WebGPU**
FYI, you can already use WebGPU directly in Python; see https://github.com/pygfx/wgpu-py for WebGPU wrappers and https://github.com/pygfx/pygfx for a higher-level graphics library.
**Extending Python with Rust**
Rather than using matplotlib, you could try either pygfx (https://github.com/pygfx/pygfx) or fastplotlib (https://github.com/kushalkolar/fastplotlib) for higher-performance graphics in Python. However, that won't solve the problem of Python not being fast enough for the calculations themselves.
## numexpr

**Making Python 100x faster with less than 100 lines of Rust**
You can just slap numexpr on top of it to compile this line on the fly.
https://github.com/pydata/numexpr
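As a minimal sketch of that usage (the arrays and the expression are invented for the example; a plain-NumPy fallback keeps it runnable even where numexpr isn't installed):

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

try:
    import numexpr as ne
    # numexpr compiles the string expression on the fly and evaluates
    # it in chunks, using multiple threads and no full-size temporaries.
    result = ne.evaluate("2*a + 3*b")
except ImportError:
    # Plain NumPy equivalent: allocates temporaries for 2*a and 3*b.
    result = 2 * a + 3 * b
```

The point of the string form is that numexpr sees the whole expression at once, rather than one operator at a time as NumPy does.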
**Extending Python with Rust**

**[D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?**
Are you doing any costly chained NumPy operations in your preprocessing? E.g. max(abs(large_ary)) produces multiple full-size copies of your data; https://github.com/pydata/numexpr can greatly reduce the time spent on such operations.
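To see why the chained form is costly, here is a pure-NumPy sketch of the blocking idea: computing the reduction chunk by chunk means the only temporary is one chunk-sized array rather than a full copy of the input. This illustrates the concept, not numexpr's actual implementation; `chunked_max_abs` and the chunk size are made up for the example.

```python
import numpy as np

def chunked_max_abs(a, chunk=16_384):
    """Compute max(abs(a)) block by block: the abs() temporary is
    only ever chunk-sized, instead of a full copy of `a`."""
    best = -np.inf
    flat = np.asarray(a).ravel()
    for i in range(0, flat.size, chunk):
        best = max(best, float(np.abs(flat[i:i + chunk]).max()))
    return best

# Same answer as the chained whole-array version, without the big copy.
data = np.random.default_rng(0).normal(size=100_000)
result = chunked_max_abs(data)
```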
**Selection in pandas using query**
What is not entirely obvious here is that, under the hood, pandas can use a nice library called numexpr (docs, src) that exists to make calculations on large NumPy (and pandas) objects potentially much faster. When you use query or eval, the expression is passed to numexpr and optimized using its bag of tricks. The expected performance change ranges from about 0.95x up to 20x, with an average around 3-4x for typical use cases.

You can read the details in the docs, but essentially numexpr takes vectorized operations and evaluates them in chunks sized to optimize for cache and CPU branch prediction. If your arrays are really large, whole-array operations fall out of cache; if you break them into very small pieces, your CPU won't run efficiently, so numexpr aims for a middle ground.
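The chunked evaluation described above can be sketched in plain NumPy. This is only an illustration of the idea, not numexpr's or pandas' actual code; `blocked_eval`, the predicate, and the chunk size are all invented for the example.

```python
import numpy as np

def blocked_eval(a, b, chunk=8_192):
    """Evaluate the boolean expression (a > 0.5) & (b < 0.5) block by
    block, keeping the working set small enough to stay in CPU cache."""
    out = np.empty(a.size, dtype=bool)
    for i in range(0, a.size, chunk):
        s = slice(i, i + chunk)
        # Each sub-expression temporary here is chunk-sized,
        # not the size of the full arrays.
        out[s] = (a[s] > 0.5) & (b[s] < 0.5)
    return out

rng = np.random.default_rng(1)
a = rng.random(50_000)
b = rng.random(50_000)
mask = blocked_eval(a, b)
```

The result is identical to evaluating the whole expression at once; the win, when there is one, comes purely from keeping the intermediates cache-resident.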
## What are some alternatives?
- **SHA256-WebGPU** - Implementation of SHA-256 in WGSL
- **pytorch-lightning** - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
- **graphics_wgpu**
- **greptimedb** - An open-source, cloud-native, distributed time-series database with PromQL/SQL/Python support. Available on GreptimeCloud.
- **vswhere** - Locate Visual Studio 2017 and newer installations
- **jnumpy** - Write Python C extensions in Julia within 5 minutes.
- **fastplotlib** - Next-gen fast plotting library running on WGPU using the pygfx rendering engine
- **jsmpeg** - MPEG-1 video decoder in JavaScript
- **three.py** - Python 3D library based on three.js and modern OpenGL
- **poly-match** - Source for the "Making Python 100x faster with less than 100 lines of Rust" blog post
- **egui** - An easy-to-use immediate-mode GUI in Rust that runs on both web and native
- **ruff** - An extremely fast Python linter and code formatter, written in Rust.