| | rmm | cudf |
|---|---|---|
| Mentions | 1 | 27 |
| Stars | 492 | 8,433 |
| Growth | 3.5% | 1.6% |
| Activity | 9.2 | 9.9 |
| Last commit | 5 days ago | about 23 hours ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
rmm
- WSL2 CUDA/cuDF issue: unable to establish a shared memory space between system memory and VRAM
Specific issue: I am trying to use the RAPIDS/CUDF memory manager: https://github.com/rapidsai/rmm
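For context, a minimal sketch of what using RMM from Python typically looks like; this is not from the post, and the managed-memory setting shown is just one common way to let allocations migrate between system RAM and VRAM:

```python
# Minimal sketch (not from the post): route RMM allocations through CUDA
# managed (unified) memory so the driver can page data between host RAM
# and device VRAM. Assumes rmm and cudf are installed with CUDA support.
import rmm
import cudf

# Reinitialize RMM's default memory resource to use managed memory.
rmm.reinitialize(managed_memory=True)

# cuDF allocates through RMM, so this DataFrame now lives in managed memory.
df = cudf.DataFrame({"x": range(1000)})
print(df["x"].sum())
```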
cudf
- Unleashing GPU Power: Supercharge Your Data Processing with cuDF
cuDF Documentation
- This Week In Python
cuDF – GPU DataFrame Library
- A Polars exploration into Kedro
The interesting thing about Polars is that it does not try to be a drop-in replacement for pandas, like Dask, cuDF, or Modin do; instead it has its own expressive API. Despite being a young project, it quickly became popular thanks to its easy installation process and its "lightning fast" performance.
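To illustrate the distinction, a rough sketch (assuming both libraries are installed): cuDF mirrors the pandas API, while Polars uses its own expression syntax:

```python
# Sketch: the same filter-and-aggregate written in cuDF's pandas-style
# API versus Polars' expression API. Illustrative only.
import cudf
import polars as pl

gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
mean_cudf = gdf[gdf["a"] > 1]["b"].mean()  # reads exactly like pandas

pdf = pl.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
mean_pl = pdf.filter(pl.col("a") > 1).select(pl.col("b").mean())
```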
- Why we dropped Docker for Python environments
Perhaps the largest contributor to package size is the NVIDIA-developed RAPIDS toolkit, https://rapids.ai/. Even just adding things like pandas and some geospatial tools, you rapidly end up with an image well over a gigabyte, despite following cutting-edge Docker and Python best practices.
- Introducing TeaScript C++ Library
Yes, sure, that is how OpenMP does it; but on the other hand, you already seem to do some basic type inference and build an AST, no? Then you also know the size and type of your vectors, and can execute actions in parallel when there is enough data to be worth parallelizing. Is there anyone who doesn't want their code to execute faster when possible? Those who work in the big-data domain already use threads and vectorized instructions without the user having to type in any directive; they just import a different library: for example NumPy, NumPy with a CUDA backend, or similar GPU-accelerated libraries like cuDF.
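A quick sketch of the "just import a different library" point, using CuPy as the NumPy-with-a-CUDA-backend example (assumes a CUDA-capable GPU):

```python
# Same vectorized expression in both libraries; NumPy runs on the CPU,
# CuPy launches a GPU kernel. No parallelization directives required.
import numpy as np
import cupy as cp  # NumPy-compatible API with a CUDA backend

x_cpu = np.arange(1_000_000, dtype=np.float32)
x_gpu = cp.arange(1_000_000, dtype=cp.float32)

y_cpu = np.sqrt(x_cpu) * 2.0
y_gpu = cp.sqrt(x_gpu) * 2.0  # executes on the GPU
```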
- [D] Can we use Ray for distributed training on Vertex AI? Can someone provide examples? Also, which dataframe libraries have you used for training machine-learning models on huge datasets (100 GB+), given that pandas can't handle data that size?
Not an answer about Ray, but you could use rapids.ai; I'm using it for dataframe manipulation on the GPU.
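For datasets beyond a single GPU's memory, the usual RAPIDS route is dask-cuDF, which splits the dataframe into partitions. A hedged sketch; the file path and column names here are hypothetical:

```python
# Hypothetical sketch: dask-cuDF partitions a larger-than-memory dataset
# and runs cuDF operations on each chunk. Path and columns are made up.
import dask_cudf

ddf = dask_cudf.read_parquet("data/events-*.parquet")
result = ddf.groupby("user_id")["amount"].mean().compute()
print(result.head())
```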
- Story of my life
To put data analytics on GPU steroids, try RAPIDS cuDF: https://rapids.ai/
- Artificial Intelligence in Python
You can scope out https://rapids.ai/, Nvidia's AI toolkit. They have some handy notebooks to poke at to get you started.
What are some alternatives?
cugraph - cuGraph - RAPIDS Graph Analytics Library
Numba - NumPy aware dynamic Python compiler using LLVM
Mesh - A memory allocator that automatically reduces the memory footprint of C/C++ applications.
chia-plotter
memory-allocators - Custom memory allocators in C++ to improve the performance of dynamic memory allocation
wif500 - Try to find the WIF key and get a donation of 200 BTC
tuninglib - A C++ Class and Template Library for Performance Critical Applications
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
larena - Yet another simple header only arena allocator for C11
mpire - A Python package for easy multiprocessing, but faster than multiprocessing
CUDA.jl - CUDA programming in Julia.
grcuda - Polyglot CUDA integration for the GraalVM