CUB vs NCCL
| | CUB | NCCL |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 78 | 3,170 |
| Growth | - | 2.5% |
| Activity | 2.7 | 5.6 |
| Last commit | 8 months ago | 28 days ago |
| Language | Cuda | C++ |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
-
MPI jobs to test
```
% rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
Cloning into 'nccl'...
remote: Enumerating objects: 2769, done.
remote: Counting objects: 100% (336/336), done.
remote: Compressing objects: 100% (140/140), done.
remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
Resolving deltas: 100% (1820/1820), done.
README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
```
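The README excerpt above notes that NCCL can be used from single- or multi-process (e.g., MPI) applications. A minimal sketch of the MPI pattern follows, assuming one GPU per rank; the buffer size, device assignment, and omitted error checking are illustrative choices, not taken from the repository:

```cpp
// Sketch: NCCL bootstrapped over MPI, one GPU per rank (assumed layout).
#include <mpi.h>
#include <nccl.h>
#include <cuda_runtime.h>

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);
  int rank = 0, nranks = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  // Pick a local GPU for this rank (simplistic round-robin assumption).
  int ndev = 0;
  cudaGetDeviceCount(&ndev);
  cudaSetDevice(rank % ndev);

  // Rank 0 creates the NCCL unique id; MPI broadcasts it to all ranks.
  ncclUniqueId id;
  if (rank == 0) ncclGetUniqueId(&id);
  MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);

  ncclComm_t comm;
  ncclCommInitRank(&comm, nranks, id, rank);

  // All-reduce a small float buffer across every rank's GPU.
  const size_t count = 1 << 20;                   // illustrative size
  float *sendbuf = nullptr, *recvbuf = nullptr;
  cudaMalloc(&sendbuf, count * sizeof(float));
  cudaMalloc(&recvbuf, count * sizeof(float));
  cudaMemset(sendbuf, 0, count * sizeof(float));  // real code would fill this

  cudaStream_t stream;
  cudaStreamCreate(&stream);
  ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm, stream);
  cudaStreamSynchronize(stream);

  ncclCommDestroy(comm);
  cudaFree(sendbuf);
  cudaFree(recvbuf);
  cudaStreamDestroy(stream);
  MPI_Finalize();
  return 0;
}
```

Built with something like `mpicxx example.cc -lnccl -lcudart` and launched with `mpirun -np 4 ./a.out`, each rank drives one GPU and every rank ends up with the summed buffer.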
-
NVLink and Dual 3090s
If it's rendering, you don't really need SLI; you need to install NCCL so that GPU memory can be pooled: https://github.com/NVIDIA/nccl
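For the dual-GPU, single-process case the comment is about, the NCCL pattern looks roughly like the sketch below; the device ids, buffer size, and omitted error checking are assumptions for illustration, not taken from the post:

```cpp
// Sketch: single process driving two GPUs (e.g. dual 3090s) with NCCL.
#include <nccl.h>
#include <cuda_runtime.h>

int main() {
  const int ndev = 2;
  int devs[ndev] = {0, 1};                 // assumed device ids
  ncclComm_t comms[ndev];
  ncclCommInitAll(comms, ndev, devs);      // one communicator per GPU

  const size_t count = 1 << 20;            // illustrative size
  float* sendbuf[ndev];
  float* recvbuf[ndev];
  cudaStream_t streams[ndev];
  for (int i = 0; i < ndev; ++i) {
    cudaSetDevice(devs[i]);
    cudaMalloc(&sendbuf[i], count * sizeof(float));
    cudaMalloc(&recvbuf[i], count * sizeof(float));
    cudaMemset(sendbuf[i], 0, count * sizeof(float));
    cudaStreamCreate(&streams[i]);
  }

  // Sum the buffers across both GPUs; group calls batch the per-GPU submissions.
  ncclGroupStart();
  for (int i = 0; i < ndev; ++i)
    ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                  comms[i], streams[i]);
  ncclGroupEnd();

  for (int i = 0; i < ndev; ++i) {
    cudaSetDevice(devs[i]);
    cudaStreamSynchronize(streams[i]);
    cudaFree(sendbuf[i]);
    cudaFree(recvbuf[i]);
    cudaStreamDestroy(streams[i]);
    ncclCommDestroy(comms[i]);
  }
  return 0;
}
```

ncclGroupStart/ncclGroupEnd let a single thread queue the per-GPU calls without deadlocking, and NCCL will route the transfers over NVLink when it is available between the two cards.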
-
Distributed Training Made Easy with PyTorch-Ignite
Backends from the native torch distributed configuration: nccl, gloo, mpi.
What are some alternatives?
Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
gloo - Collective communications library with various primitives for multi-machine training.
moderngpu - Patterns and behaviors for GPU computing
C++ Actor Framework - An Open Source Implementation of the Actor Model in C++
ArrayFire - ArrayFire: a general purpose GPU library.
readerwriterqueue - A fast single-producer, single-consumer lock-free queue for C++
HPX - The C++ Standard Library for Parallelism and Concurrency
ck - Concurrency primitives, safe memory reclamation mechanisms and non-blocking (including lock-free) data structures designed to aid in the research, design and implementation of high performance concurrent systems developed in C99+.
xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)
libmill - Go-style concurrency in C
Easy Creation of GnuPlot Scripts from C++ - A simple C++17 lib that helps you to quickly plot your data with GnuPlot