NCCL
Optimized primitives for collective multi-GPU communication (by NVIDIA)
CUB
This repository has moved to github.com/nvidia/cub, which is automatically mirrored here. (by NVlabs)
| | NCCL | CUB |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 2,764 | 77 |
| Growth | 4.2% | - |
| Activity | 5.9 | 2.7 |
| Latest commit | 4 days ago | about 1 month ago |
| Language | C++ | Cuda |
| License | GNU General Public License v3.0 or later | BSD 3-clause "New" or "Revised" License |
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we track.
NCCL
Posts with mentions or reviews of NCCL.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.
- MPI jobs to test (see the all-reduce sketch after this list):

```
% rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
Cloning into 'nccl'...
remote: Enumerating objects: 2769, done.
remote: Counting objects: 100% (336/336), done.
remote: Compressing objects: 100% (140/140), done.
remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
Resolving deltas: 100% (1820/1820), done.
README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
```
- Distributed Training Made Easy with PyTorch-Ignite

  "backends from native torch distributed configuration: nccl, gloo, mpi."
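The README text surfaced by the grep above summarizes what NCCL does. As a rough illustration of how its collectives are driven from a single process managing several GPUs, here is a minimal all-reduce sketch; the NCCL entry points used (ncclCommInitAll, ncclGroupStart/End, ncclAllReduce, ncclCommDestroy) are the library's real API, while the buffer size and the 8-device cap are arbitrary choices for this example, and error checking is omitted for brevity:

```cpp
// A minimal sketch: single-process all-reduce across all visible GPUs.
#include <nccl.h>
#include <cuda_runtime.h>

int main() {
  int nDev = 0;
  cudaGetDeviceCount(&nDev);
  if (nDev > 8) nDev = 8;  // arbitrary cap so fixed-size arrays suffice

  ncclComm_t comms[8];
  float* sendbuff[8];
  float* recvbuff[8];
  cudaStream_t streams[8];
  const size_t count = 1 << 20;  // 1M floats per GPU (illustrative)

  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(i);
    cudaMalloc(&sendbuff[i], count * sizeof(float));
    cudaMalloc(&recvbuff[i], count * sizeof(float));
    cudaMemset(sendbuff[i], 0, count * sizeof(float));
    cudaStreamCreate(&streams[i]);
  }

  // One communicator per device, all owned by this process
  // (a multi-process/MPI job would use ncclCommInitRank instead).
  ncclCommInitAll(comms, nDev, nullptr);

  // Group the per-device calls so NCCL launches them as one collective.
  ncclGroupStart();
  for (int i = 0; i < nDev; ++i)
    ncclAllReduce(sendbuff[i], recvbuff[i], count, ncclFloat, ncclSum,
                  comms[i], streams[i]);
  ncclGroupEnd();

  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(i);
    cudaStreamSynchronize(streams[i]);
  }
  for (int i = 0; i < nDev; ++i) ncclCommDestroy(comms[i]);
  return 0;
}
```

In a multi-node MPI job the same collective call is used, but each rank creates its communicator with ncclCommInitRank after broadcasting a ncclUniqueId over MPI, which is why the "MPI jobs to test" grep above turns up so few direct MPI references in NCCL itself.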
CUB
Posts with mentions or reviews of CUB.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-20.
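Since no posts are listed for CUB here, a short orientation: CUB supplies reusable warp-, block-, and device-wide CUDA primitives. The sketch below shows its device-wide sum with the characteristic two-call idiom, where a first call with a null workspace pointer only reports the required temporary-storage size; the input size is an arbitrary choice for this example and error checking is omitted:

```cpp
// A minimal sketch of CUB's device-wide reduction (two-call idiom).
#include <cub/cub.cuh>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  const int n = 1 << 20;  // arbitrary input size for this example
  float *d_in = nullptr, *d_out = nullptr;
  cudaMalloc(&d_in, n * sizeof(float));
  cudaMalloc(&d_out, sizeof(float));
  cudaMemset(d_in, 0, n * sizeof(float));  // zeros, so the expected sum is 0

  // First call: null workspace, so CUB only writes the required size.
  void* d_temp = nullptr;
  size_t temp_bytes = 0;
  cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, n);

  cudaMalloc(&d_temp, temp_bytes);

  // Second call: same arguments, now performs the actual reduction.
  cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, n);

  float result = 0.f;
  cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
  printf("sum = %f\n", result);

  cudaFree(d_temp); cudaFree(d_in); cudaFree(d_out);
  return 0;
}
```

The same query-then-run pattern appears across CUB's other device-wide algorithms (DeviceScan, DeviceRadixSort, DeviceHistogram), leaving the temporary-storage allocation policy to the caller.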
What are some alternatives?
When comparing NCCL and CUB, you can also consider the following projects:

- gloo - Collective communications library with various primitives for multi-machine training.
- Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
- C++ Actor Framework - An open-source implementation of the Actor Model in C++
- moderngpu - Patterns and behaviors for GPU computing
- ArrayFire - A general-purpose GPU library.
- HPX - The C++ Standard Library for Parallelism and Concurrency
- readerwriterqueue - A fast single-producer, single-consumer lock-free queue for C++
- xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)
- libmill - Go-style concurrency in C
- Easy Creation of GnuPlot Scripts from C++ - A simple C++17 library that helps you quickly plot your data with GnuPlot