| | nanobind | cupy |
|---|---|---|
| Mentions | 11 | 21 |
| Stars | 2,042 | 7,787 |
| Growth | - | 1.0% |
| Activity | 9.6 | 9.9 |
| Latest commit | 3 days ago | about 10 hours ago |
| Language | C++ | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nanobind
- Progress on No-GIL CPython
Take a look at https://github.com/wjakob/nanobind
> More concretely, benchmarks show up to ~4× faster compile time, ~5× smaller binaries, and ~10× lower runtime overheads compared to pybind11.
- Advanced Python Mastery – A Course by David Beazley
People should not take that as an endorsement of Swig.
Please use ctypes, cffi, or https://github.com/wjakob/nanobind instead.
Beazley himself is amazed that Swig is still in use.
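As a minimal sketch of the ctypes route suggested above: it can call into an existing shared library with no binding code at all, here `sqrt` from the C math library (library name resolution is platform-dependent; the `libm.so.6` fallback assumes glibc Linux).

```python
import ctypes
import ctypes.util

# Locate and load the C math library; no compiled glue code needed.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature: double sqrt(double).
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The tradeoff versus nanobind/pybind11 is that ctypes does no type checking at compile time; the `restype`/`argtypes` declarations are the only guard against calling with the wrong signature.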
- Swig – Connect C/C++ programs with high-level programming languages
- Nanobind: Tiny and efficient C++/Python bindings
- Create Python bindings for my C++ code with PyBind11
Nanobind is made by the creator of PyBind11. It has a similar interface, but it leverages C++17 and aims for bindings that are more efficient in both space and speed.
- Nanobind – Seamless operability between C++17 and Python
- Cython Is 20
I would recommend using nanobind, the follow-up to PyBind11 by the same author (Wenzel Jakob), and moving as much performance-critical code as possible to C or C++. https://github.com/wjakob/nanobind
If you really care about the performance of code called from Python, consider something like NVIDIA Warp (preview). Warp JIT-compiles and runs your code on CUDA or CPU. Although Warp targets physics simulation, geometry processing, and procedural animation, it can be used for other tasks as well. https://github.com/NVIDIA/warp
JAX, by Google, is another option, JIT-compiling and vectorizing code for TPU, GPU, or CPU. https://github.com/google/jax
- GitHub - wjakob/nanobind: nanobind — Seamless operability between C++17 and Python
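The JAX suggestion above can be sketched briefly. This is a hypothetical example, not from the comment: `norm` is an illustrative function name, and the `except ImportError` branch falls back to plain NumPy with a no-op `jit` so the snippet also runs where JAX is not installed.

```python
# JIT-compile a small numeric function with JAX, falling back to NumPy.
try:
    from jax import jit
    import jax.numpy as jnp
except ImportError:
    jit = lambda f: f      # no-op decorator on the CPU/NumPy fallback path
    import numpy as jnp    # jnp.sqrt/jnp.sum have the same call shape

@jit
def norm(x):
    # Euclidean norm, written once against the shared NumPy-style API.
    return jnp.sqrt(jnp.sum(x * x))

print(float(norm(jnp.asarray([3.0, 4.0]))))  # 5.0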
cupy
- CuPy: NumPy and SciPy for GPU
- Keras 3.0
I did not expect anything interesting, but this is actually cool.
> A full implementation of the NumPy API. Not something "NumPy-like" — just literally the NumPy API, with the same functions and the same arguments.
I suppose it's like https://cupy.dev/
- Progress on No-GIL CPython
- Fedora 40 Eyes Dropping Gnome X11 Session Support
What was the difference in runtime performance, and did you try CuPy?
https://github.com/cupy/cupy :
> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
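The drop-in claim in the quote above can be sketched as follows: write array code against a single namespace alias and pick CuPy when a GPU stack is present, NumPy otherwise (the `xp` alias is a common convention, not part of either library).

```python
# Drop-in pattern: one namespace, GPU if available, CPU otherwise.
try:
    import cupy as xp   # GPU arrays, NumPy-compatible API (needs CUDA/ROCm)
except ImportError:
    import numpy as xp  # identical calls run on the CPU

a = xp.arange(10)
total = xp.sum(a ** 2)  # the same line works under both libraries

print(int(total))  # 0**2 + 1**2 + ... + 9**2 = 285
```

Code that only touches the shared API ports unchanged; anything relying on NumPy internals (e.g. direct buffer access) still needs per-backend handling.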
- How does one optimize their functions?
It's more effort, though. You will likely have to format your data in specific ways for the GPU to process it efficiently. I've done this kind of thing with PyTorch tensors, but there are also math-specific libraries like CuPy. If you only have millions of elements, NumPy should be fine.
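A small sketch of the "format your data for the GPU" point, using only NumPy: device libraries generally want flat, contiguous, fixed-dtype buffers (often single precision) rather than nested Python lists, and that conversion is the kind of prep work the comment alludes to.

```python
import numpy as np

# Nested Python lists are boxed objects scattered across the heap.
points = [[1, 2], [3, 4], [5, 6]]

# Pack them into one contiguous float32 buffer, ready to hand to a GPU library.
buf = np.ascontiguousarray(points, dtype=np.float32)

assert buf.flags["C_CONTIGUOUS"]
print(buf.nbytes)  # 6 values x 4 bytes each = 24
```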
- Speed Up Your Physics Simulations (250x Faster Than NumPy) Using PyTorch. Episode 1: The Boltzmann Distribution
I'd also recommend checking out CuPy, which aims to fully re-implement the NumPy API for CUDA GPUs while taking advantage of NVIDIA's specialized libraries like cuBLAS, cuRAND, cuSOLVER, etc. The tradeoff is that it only works with NVIDIA GPUs.
- ELI5: Why doesn't numpy work on GPUs?
u/Spataner's answer is great. If you WANT GPU-enabled NumPy functions, I would check out CuPy: https://cupy.dev/
- Help!!! Training neural net in vs code
Not sure how VS Code is relevant here; it's just your IDE and shouldn't have any influence on this. Now, seeing as you're using NumPy (which has no GPU support), you could try something like CuPy in place of NumPy. I'm not sure about the interoperability because I've never used it myself, but if you're lucky it could be as simple as replacing all NumPy calls with the same CuPy calls (or replacing `import numpy as np` with `import cupy as np`).
- What's the best thing/library you learned this year?
CuPy replicates the NumPy and SciPy APIs but runs on the GPU.
- Making Python fast for free – adventures with mypyc
For that, you can use CuPy[0], PyTorch[1] or TensorFlow[2]. They all mimic NumPy's API with the possibility of using your GPU.
[0] https://cupy.dev/
What are some alternatives?
pybind11 - Seamless operability between C++11 and Python
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
awesome-cython - A curated list of awesome Cython resources. Just a draft for now.
Numba - NumPy aware dynamic Python compiler using LLVM
Nuitka - Nuitka is a Python compiler written in Python, fully compatible with Python 2.6, 2.7, and 3.4 through 3.11. You feed it your Python app, it does a lot of clever things, and spits out an executable or extension module.
scikit-cuda - Python interface to GPU-powered libraries
matplotlibcpp17 - Alternative to matplotlibcpp with better syntax, based on pybind
TensorFlow-object-detection-tutorial - The purpose of this tutorial is to learn how to install and prepare TensorFlow framework to train your own convolutional neural network object detection classifier for multiple objects, starting from scratch
epython - EPython is a typed-subset of the Python for extending the language new builtin types and methods
bottleneck - Fast NumPy array functions written in C
avendish - declarative polyamorous cross-system intermedia objects
dpnp - Data Parallel Extension for NumPy