CUB VS NCCL

Compare CUB vs NCCL and see how the two projects differ.

CUB

This repository has moved to github.com/nvidia/cub, which is automatically mirrored here. (by NVlabs)
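
CUB supplies cooperative primitives (block- and warp-level reductions, scans, and sorts) for authors of CUDA kernels. As a rough illustration of the kind of component it provides, here is a minimal sketch of a block-wide sum using cub::BlockReduce; the kernel name, the block size of 128, and the indexing scheme are illustrative assumptions, not taken from the CUB documentation.

    #include <cub/cub.cuh>

    // Each thread block sums 128 int values using CUB's BlockReduce collective.
    __global__ void block_sum(const int* in, int* out)
    {
        // Block-wide reduction over 128 threads (must match the launch configuration).
        using BlockReduce = cub::BlockReduce<int, 128>;
        // Shared memory the collective uses for its internal communication.
        __shared__ typename BlockReduce::TempStorage temp_storage;

        int thread_value = in[blockIdx.x * blockDim.x + threadIdx.x];
        // Sum() returns the block-wide total; the result is valid in thread 0 only.
        int block_total = BlockReduce(temp_storage).Sum(thread_value);

        if (threadIdx.x == 0)
            out[blockIdx.x] = block_total;
    }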

NCCL

Optimized primitives for collective multi-GPU communication (by NVIDIA)
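
NCCL exposes MPI-style collectives (all-reduce, all-gather, reduce, broadcast, reduce-scatter, send/receive) across GPUs connected by PCIe, NVLink, or a network. Below is a minimal single-process, multi-GPU all-reduce sketch using the standard ncclCommInitAll / ncclAllReduce calls; the device count, buffer size, and the omission of error checking and buffer initialization are simplifications for illustration.

    #include <nccl.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        const int ndev = 2;           // illustrative: two GPUs driven by one process
        const size_t N = 1 << 20;     // illustrative element count
        int devs[2] = {0, 1};

        ncclComm_t comms[2];
        cudaStream_t streams[2];
        float *sendbuf[2], *recvbuf[2];

        // One communicator per GPU, all managed by this single process.
        ncclCommInitAll(comms, ndev, devs);

        for (int i = 0; i < ndev; ++i) {
            cudaSetDevice(devs[i]);
            cudaMalloc(&sendbuf[i], N * sizeof(float));
            cudaMalloc(&recvbuf[i], N * sizeof(float));
            cudaStreamCreate(&streams[i]);
        }

        // Group the calls so NCCL can launch them without deadlocking
        // when a single thread drives several devices.
        ncclGroupStart();
        for (int i = 0; i < ndev; ++i)
            ncclAllReduce(sendbuf[i], recvbuf[i], N, ncclFloat, ncclSum,
                          comms[i], streams[i]);
        ncclGroupEnd();

        for (int i = 0; i < ndev; ++i) {
            cudaSetDevice(devs[i]);
            cudaStreamSynchronize(streams[i]);
            ncclCommDestroy(comms[i]);
        }
        return 0;
    }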
Metric           CUB                                       NCCL
Mentions         1                                         3
Stars            78                                        2,808
Growth           -                                         4.0%
Activity         2.7                                       5.8
Latest commit    2 months ago                              3 days ago
Language         Cuda                                      C++
License          BSD 3-clause "New" or "Revised" License   GNU General Public License v3.0 or later
Mentions indicate the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars are the number of stars a project has on GitHub; growth is the month-over-month change in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

CUB

Posts with mentions or reviews of CUB. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-20.

NCCL

Posts with mentions or reviews of NCCL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.
  • MPI jobs to test
    2 projects | /r/HPC | 6 Jun 2023
    % rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
    Cloning into 'nccl'...
    remote: Enumerating objects: 2769, done.
    remote: Counting objects: 100% (336/336), done.
    remote: Compressing objects: 100% (140/140), done.
    remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
    Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
    Resolving deltas: 100% (1820/1820), done.
    README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
    src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
  • NVLink and Dual 3090s
    1 project | /r/nvidia | 4 May 2022
    If it's rendering, you don't really need SLI; you need to install NCCL so that GPU memory can be pooled: https://github.com/NVIDIA/nccl
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi.

What are some alternatives?

When comparing CUB and NCCL you can also consider the following projects:

Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

gloo - Collective communications library with various primitives for multi-machine training.

moderngpu - Patterns and behaviors for GPU computing

C++ Actor Framework - An Open Source Implementation of the Actor Model in C++

ArrayFire - ArrayFire: a general purpose GPU library.

readerwriterqueue - A fast single-producer, single-consumer lock-free queue for C++

HPX - The C++ Standard Library for Parallelism and Concurrency

libmill - Go-style concurrency in C

xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)

ck - Concurrency primitives, safe memory reclamation mechanisms and non-blocking (including lock-free) data structures designed to aid in the research, design and implementation of high performance concurrent systems developed in C99+.

Easy Creation of GnuPlot Scripts from C++ - A simple C++17 lib that helps you to quickly plot your data with GnuPlot