| | dpctl | dpbench |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 95 | 17 |
| Growth | - | - |
| Activity | 9.8 | 8.2 |
| Last Commit | 4 days ago | 9 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dpctl
Data Parallel Extensions for Python: near-native speed for scientific computing
Considering how poorly it seems to support CUDA as a backend [0], I wouldn't hold my breath for non-Intel vendor support (AMD CPUs or GPUs). As for less common GPUs, there really is no good support in any library. If you ever want to go down a fun rabbit hole, try to use the GPU in a Raspberry Pi for something. You'll eventually find one guy who reverse-engineered the drivers to make a compiler, but that's it.
[0] https://github.com/IntelPython/dpctl/discussions/1124
dpbench
What are some alternatives?
oneAPI-samples - Samples for Intel® oneAPI Toolkits
Python-Complementary-Languages - Just a small test to see which language is better for extending python when using lists of lists
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SYCL - all it takes to sum a lot of numbers fast!
awkward - Manipulate JSON-like data with NumPy-like idioms.
dpnp - Data Parallel Extension for NumPy
dataiter - Python classes for data manipulation
oneMKL - oneAPI Math Kernel Library (oneMKL) Interfaces
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
eaminer - Heterogeneous Ethereum Miner with support for AMD, Intel and Nvidia GPUs using SYCL, OpenCL and CUDA backends