pathml
Tools for computational pathology (by Dana-Farber-AIOS)
cudf
cuDF - GPU DataFrame Library (by rapidsai)
| | pathml | cudf |
|---|---|---|
| Mentions | 2 | 23 |
| Stars | 364 | 7,440 |
| Stars growth (monthly) | 3.0% | 3.7% |
| Activity | 8.0 | 9.9 |
| Latest commit | about 1 month ago | about 12 hours ago |
| Primary language | Python | C++ |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pathml
Posts with mentions or reviews of pathml. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-11-17.
- Weekly IT Consultation Thread - Technical Advice, Professional Development, and Learning
- Dask – a flexible library for parallel computing in Python
We have been using dask to support our computational pathology workflows [1], where the images are so big that they cannot be loaded in memory, let alone analyzed (standard pathology whole slide images are ~1GB; some microscopy techniques generate images >1TB). We divide each image into a bunch of smaller tiles and process each tile independently. The dask.distributed scheduler lets us scale up by distributing the tile processing across a cluster.
Benefits of dask.distributed: easy to get up and running, and has support for spinning up clusters on lots of different computing platforms (local machines, HPC cluster, k8s, etc.)
One difficulty is optimizing performance - there are so many configuration details (job size, number of workers, worker resources, etc. etc.) that it's been hard to know what is best.
[1] https://github.com/Dana-Farber-AIOS/pathml
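Below is a minimal sketch of the tile-at-a-time pattern described in that post, not PathML's actual implementation: the slide path, dimensions, tile size, and the `process_tile` helper are all illustrative, and OpenSlide is assumed as the image reader.

```python
# Hypothetical sketch of tile-wise whole-slide processing with dask.distributed.
# Slide path, dimensions, tile size, and the per-tile statistic are illustrative.
import numpy as np
from dask.distributed import Client

TILE_SIZE = 2048  # pixels per tile edge (illustrative)

def process_tile(slide_path, x, y, size=TILE_SIZE):
    """Read one tile from disk and compute a toy per-tile statistic."""
    import openslide  # imported inside the task so each worker opens its own handle
    slide = openslide.OpenSlide(slide_path)
    region = slide.read_region((x, y), 0, (size, size)).convert("RGB")
    tile = np.asarray(region)
    return x, y, float(tile.mean())  # stand-in for real analysis (segmentation, etc.)

if __name__ == "__main__":
    client = Client()  # local cluster; swap in dask-jobqueue / k8s deployments to scale out
    slide_path = "example.svs"       # placeholder slide
    width, height = 100_000, 80_000  # placeholder slide dimensions

    xs, ys = zip(*[(x, y)
                   for x in range(0, width, TILE_SIZE)
                   for y in range(0, height, TILE_SIZE)])

    # Tiles are independent, so the scheduler spreads them across all workers.
    futures = client.map(process_tile, [slide_path] * len(xs), xs, ys)
    results = client.gather(futures)
```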
cudf
Posts with mentions or reviews of cudf. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-17.
- A Polars exploration into Kedro
The interesting thing about Polars is that it does not try to be a drop-in replacement to pandas, like Dask, cuDF, or Modin, and instead has its own expressive API. Despite being a young project, it quickly got popular thanks to its easy installation process and its “lightning fast” performance.
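As an illustration of that distinction (a minimal sketch, assuming pandas, cuDF, and a recent Polars release are installed), the pandas-style calls carry over to cuDF largely unchanged, while Polars uses its own expression API:

```python
import pandas as pd
import cudf      # GPU DataFrame; API mirrors pandas
import polars as pl

data = {"group": ["a", "a", "b"], "value": [1, 2, 3]}

# pandas and cuDF: same calls, different backend (CPU vs GPU).
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data)
print(pdf.groupby("group")["value"].mean())
print(gdf.groupby("group")["value"].mean())

# Polars: its own expression-based API rather than mimicking pandas.
ldf = pl.DataFrame(data)
print(ldf.group_by("group").agg(pl.col("value").mean()))
```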
- Why we dropped Docker for Python environments
Perhaps the largest contributor to package size is the NVIDIA-developed RAPIDS toolkit https://rapids.ai/ . Add things like pandas and some geospatial tools on top, and you rapidly end up with an image well over a gigabyte, despite following cutting-edge best practices with Docker and Python.
- Introducing TeaScript C++ Library
Yes sure, that is how OpenMP does it; but on the other hand, you already seem to do some basic type inference and build an AST, no? Then you also know the size and type of your vectors, and can execute actions in parallel when there is enough data to make parallelizing worthwhile. Is there anyone who doesn't want their code to execute faster when possible? Those who work in the big data domain already use threads and vectorized instructions without the user having to type in any directive; they just import a different library. For example, numpy, numpy with a CUDA backend, or similar GPU-accelerated libraries like cudf.
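A minimal sketch of that "swap the library" idea, using CuPy as the CUDA-backed stand-in for NumPy (cuDF plays the same role for dataframes); the array sizes are illustrative:

```python
# The same vectorized expression runs on CPU with NumPy and on GPU with CuPy,
# with no explicit parallelization directives in user code.
import numpy as np

x_cpu = np.random.rand(1_000_000)
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0       # vectorized on the CPU

try:
    import cupy as cp                    # CUDA-backed counterpart to much of NumPy
    x_gpu = cp.asarray(x_cpu)
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0   # same expression, executed on the GPU
    y_back = cp.asnumpy(y_gpu)
except ImportError:
    pass  # no GPU / CuPy available; the CPU path above still works
```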
- [D] Can we use Ray for distributed training on vertex ai? Can someone provide me examples for the same? Also which dataframe libraries you guys used for training machine learning models on huge datasets (100 gb+) (because pandas can't handle huge data).
Not the answer about Ray: you could use rapids.ai. I'm using it for dataframe manipulation on GPU.
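For datasets larger than a single GPU's memory, the RAPIDS stack pairs cuDF with Dask via dask_cudf. A minimal sketch, with a hypothetical Parquet path and column names:

```python
# Partition a dataset too large for one GPU across many GPU-backed partitions,
# keeping a pandas-like interface on top.
import dask_cudf

ddf = dask_cudf.read_parquet("s3://bucket/events/*.parquet")  # hypothetical dataset
summary = ddf.groupby("user_id")["amount"].sum().compute()    # hypothetical columns
print(summary.head())
```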
- Story of my life
To put Data Analytics on GPU Steroids, Try RAPIDS cudf https://rapids.ai/
- Artificial Intelligence in Python
You can scope out https://rapids.ai/. Nvidia's AI toolkits. They have some handy notebooks to poke at to get you started.
- [D] [R] Large-scale clustering
try https://rapids.ai/
- [P] Looking for state of the art clustering algorithms
As a companion to the other comments, I'd like to mention that the RAPIDS library cuML provides GPU-accelerated versions of quite a few of the algorithms mentioned in this thread (HDBSCAN, UMAP, SVM, PCA, {Exact, Approximate} Nearest Neighbors, DBSCAN, KMeans, etc.).
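A minimal sketch of that GPU-accelerated path, assuming cuML and CuPy are installed; the synthetic data and the `min_cluster_size` value are illustrative, and the interface intentionally mirrors scikit-learn:

```python
import cupy as cp
from cuml.cluster import HDBSCAN

X = cp.random.rand(10_000, 16)                     # synthetic data, already on the GPU
labels = HDBSCAN(min_cluster_size=50).fit_predict(X)
print(int(labels.max()) + 1, "clusters found (noise points are labeled -1)")
```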
- Integrating multiple point clouds?
- Open | RAPIDS GPU Data Science