|  | cuml | cupy |
|---|---|---|
| Mentions | 10 | 21 |
| Stars | 3,881 | 7,753 |
| Stars growth (month over month) | 1.6% | 2.1% |
| Activity | 9.3 | 9.9 |
| Latest commit | 4 days ago | 3 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cuml
- FLaNK Stack Weekly for 13 November 2023
- Is it possible to run Sklearn models on a GPU?
  sklearn can't, but take a look at cuML (https://github.com/rapidsai/cuml). It uses the same API as sklearn but executes on the GPU. (A minimal sketch of this API appears after this list.)
- [P] Looking for state of the art clustering algorithms
  As a companion to the other comments, I'd like to mention that the RAPIDS library cuML provides GPU-accelerated versions of quite a few of the algorithms mentioned in this thread (HDBSCAN, UMAP, SVM, PCA, {Exact, Approximate} Nearest Neighbors, DBSCAN, KMeans, etc.). (An HDBSCAN sketch appears after this list.)
- Is there a multi regression model that works on GPU?
  cuML
- [D] What's your favorite unpopular/forgotten Machine Learning method?
- Machine Learning with PyTorch and Scikit-Learn – The *New* Python ML Book
- What are the advantages and disadvantages of using GPU for machine learning / deep learning / scientific computation over the conventional CPU software acceleration?
  Did they implement the clustering algorithm themselves? cuML is a GPU-accelerated scikit-learn-like package that covers many of the common ML algorithms.
- Intel Extension for Scikit-Learn
  https://github.com/rapidsai/cuml
  > cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn. For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.
  (A rough timing sketch for the CPU/GPU comparison appears after this list.)
- GPU Based Kernel-PCA
  Cython code
- Python Machine Learning Guy getting started with CUDA. What should I be brushing up on?
  Take a look at RAPIDS cuML (https://github.com/rapidsai/cuml). It's useful for most common ML algorithms. Feel free to create GitHub issues for feature requests and bugs.
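To make the "same API as sklearn" point from the first thread concrete, here is a minimal sketch. It assumes a CUDA-capable GPU with the RAPIDS cuml and cupy packages installed; the data and parameters are illustrative, not taken from any of the threads above.

```python
# Minimal sketch: cuML estimators mirror the scikit-learn interface.
import cupy as cp
from cuml.cluster import KMeans  # same shape as sklearn.cluster.KMeans

X = cp.random.random((100_000, 16), dtype=cp.float32)  # data lives on the GPU

km = KMeans(n_clusters=8, random_state=0)
km.fit(X)                # the fit runs on the GPU
labels = km.predict(X)   # cluster assignments, also a GPU array
```

cuML estimators also accept host-side NumPy arrays and copy them to the device, so an existing scikit-learn pipeline often needs little more than a changed import.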
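For the clustering thread, the same pattern with one of the algorithms listed there. HDBSCAN lives in cuml.cluster; min_cluster_size here is an illustrative choice, not a recommendation.

```python
# Sketch: GPU-accelerated HDBSCAN via cuML.
import cupy as cp
from cuml.cluster import HDBSCAN

X = cp.random.random((50_000, 8), dtype=cp.float32)

clusterer = HDBSCAN(min_cluster_size=25)
labels = clusterer.fit_predict(X)  # label -1 marks points treated as noise
```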
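And for the 10-50x figure quoted from the cuML README, a rough way to measure on your own data. Speedups depend heavily on dataset size, algorithm, and GPU, so treat this as a sketch rather than a benchmark.

```python
# Hedged sketch: time the same linear-regression fit on CPU vs. GPU.
import time
import numpy as np
import cupy as cp
from sklearn.linear_model import LinearRegression as SkLinReg
from cuml.linear_model import LinearRegression as CuLinReg

X = np.random.rand(1_000_000, 32).astype(np.float32)
y = X @ np.random.rand(32).astype(np.float32)

t0 = time.perf_counter()
SkLinReg().fit(X, y)
cpu_s = time.perf_counter() - t0

Xg, yg = cp.asarray(X), cp.asarray(y)  # move the data to the GPU first
t0 = time.perf_counter()
CuLinReg().fit(Xg, yg)
cp.cuda.Device().synchronize()         # wait for the GPU work to finish
gpu_s = time.perf_counter() - t0

print(f"scikit-learn: {cpu_s:.2f}s  cuML: {gpu_s:.2f}s")
```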
cupy
- CuPy: NumPy and SciPy for GPU
- Keras 3.0
  I did not expect anything interesting, but this is actually cool.
  > A full implementation of the NumPy API. Not something "NumPy-like" — just literally the NumPy API, with the same functions and the same arguments.
  I suppose it's like https://cupy.dev/ (a minimal transfer sketch appears after this list).
- Progress on No-GIL CPython
- Fedora 40 Eyes Dropping Gnome X11 Session Support
  What was the difference in runtime performance, and did you try CuPy?
  https://github.com/cupy/cupy :
  > CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
  (A device-agnostic sketch built on this drop-in property appears after this list.)
- How does one optimize their functions?
  It's more effort, though: you will likely have to format your data in specific ways for the GPU to process it efficiently. I've done this kind of thing with PyTorch tensors, but there are also math-specific libraries like CuPy. If you only have millions of data points, NumPy should be fine. (A batching sketch appears after this list.)
- Speed Up Your Physics Simulations (250x Faster Than NumPy) Using PyTorch. Episode 1: The Boltzmann Distribution
  I'd also recommend checking out CuPy, which aims to fully re-implement the NumPy API for CUDA GPUs while taking advantage of NVIDIA's specialized libraries like cuBLAS, cuRAND, cuSOLVER, etc. The tradeoff is that it only works with NVIDIA GPUs. (A sketch of those library-backed calls appears after this list.)
- ELI5: Why doesn't numpy work on GPUs?
  u/Spataner's answer is great. If you *want* GPU-enabled numpy functions, I would check out CuPy: https://cupy.dev/
- Help!!! Training neural net in VS Code
  Not sure how VS Code is relevant here, as it's just your IDE and shouldn't have any influence on this. Seeing as you're using numpy (which has no GPU support), you could try something like CuPy in place of numpy. I'm not sure about the interoperability because I've never used it myself, but if you're lucky it could be as simple as replacing all numpy calls with the same CuPy calls (or replacing `import numpy as np` with `import cupy as np`). (A sketch of this swap appears after this list.)
- What's the best thing/library you learned this year?
  CuPy replicates the NumPy and SciPy APIs but runs on the GPU. (A cupyx.scipy sketch appears after this list.)
- Making Python fast for free – adventures with mypyc
  For that, you can use CuPy [0], PyTorch [1] or TensorFlow [2]. They all mimic NumPy's API with the possibility of using your GPU.
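To ground the "literally the NumPy API" comparison from the Keras 3.0 thread, a minimal CuPy sketch; it assumes an NVIDIA GPU, and the array contents are illustrative.

```python
# Sketch: CuPy exposes NumPy's functions and operators on GPU arrays.
import numpy as np
import cupy as cp

x_cpu = np.arange(6, dtype=np.float32).reshape(2, 3)
x_gpu = cp.asarray(x_cpu)          # host -> device copy

y_gpu = cp.sin(x_gpu) @ x_gpu.T    # the same calls you would write with np
y_cpu = cp.asnumpy(y_gpu)          # device -> host copy of the result
```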
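The "drop-in replacement" quote also enables a device-agnostic pattern: cupy.get_array_module returns whichever module (NumPy or CuPy) owns a given array, so one function can serve both. A sketch, with log_softmax as an arbitrary example function:

```python
# Sketch: one function that handles CPU (NumPy) and GPU (CuPy) arrays alike.
import numpy as np
import cupy as cp

def log_softmax(x):
    xp = cp.get_array_module(x)  # numpy or cupy, depending on where x lives
    shifted = x - x.max(axis=-1, keepdims=True)
    return shifted - xp.log(xp.exp(shifted).sum(axis=-1, keepdims=True))

print(log_softmax(np.ones(4)))  # computed on the CPU
print(log_softmax(cp.ones(4)))  # computed on the GPU
```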
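On the optimization thread's point about formatting data so the GPU can process it efficiently: the usual move is to batch work into one large array operation instead of many small ones. A hedged sketch:

```python
# Sketch: prefer one kernel over a big batch to many tiny per-row launches.
import cupy as cp

points = cp.random.random((1_000_000, 3), dtype=cp.float32)

# Slow pattern (one small kernel launch per row); avoid:
#   norms = cp.stack([cp.linalg.norm(p) for p in points])

# Fast pattern: a single vectorized call over the whole batch.
norms = cp.linalg.norm(points, axis=1)
```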
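For the physics-simulation thread's mention of cuBLAS, cuRAND, and cuSOLVER: those libraries sit behind ordinary NumPy-style calls, so you benefit from them without writing CUDA. A sketch with illustrative sizes:

```python
# Sketch: NumPy-style calls dispatch to NVIDIA libraries under the hood
# (random generation -> cuRAND, matmul -> cuBLAS, solve -> cuSOLVER).
import cupy as cp

A = cp.random.standard_normal((512, 512), dtype=cp.float32)
b = cp.random.standard_normal(512, dtype=cp.float32)

x = cp.linalg.solve(A, b)                    # dense linear solve on the GPU
residual = float(cp.linalg.norm(A @ x - b))  # check the solution on the GPU
print(residual)
```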
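The import swap suggested in the neural-net thread looks like this in practice. It is only a sketch: scripts that touch NumPy APIs CuPy does not implement will still fail.

```python
# Sketch of the "replace the import" approach from the thread above.
import cupy as np  # instead of: import numpy as np

W = np.random.random((256, 128)).astype(np.float32)
x = np.random.random(128).astype(np.float32)
h = np.tanh(W @ x)  # now executes on the GPU via CuPy
```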
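And for the SciPy side of the "best library you learned this year" mention: the GPU versions live under cupyx.scipy, mirroring SciPy's submodule layout. A sketch:

```python
# Sketch: cupyx.scipy mirrors SciPy submodules (here, scipy.ndimage).
import cupy as cp
from cupyx.scipy import ndimage as ndi

img = cp.random.random((1024, 1024), dtype=cp.float32)
blurred = ndi.gaussian_filter(img, sigma=3.0)  # Gaussian blur on the GPU
```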
What are some alternatives?
scikit-learn - scikit-learn: machine learning in Python
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
scikit-learn-intelex - Intel(R) Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
Numba - NumPy aware dynamic Python compiler using LLVM
scikit-cuda - Python interface to GPU-powered libraries
hummingbird - Hummingbird compiles trained ML models into tensor computation for faster inference.
TensorFlow-object-detection-tutorial - The purpose of this tutorial is to learn how to install and prepare the TensorFlow framework to train your own convolutional neural network object-detection classifier for multiple objects, starting from scratch
cudf - cuDF - GPU DataFrame Library
bottleneck - Fast NumPy array functions written in C
lightseq - LightSeq: A High Performance Library for Sequence Processing and Generation
Poetry - Python packaging and dependency management made easy