NCCL vs gloo

Compare NCCL and gloo and see how they differ.

NCCL

Optimized primitives for collective multi-GPU communication (by NVIDIA)

gloo

Collective communications library with various primitives for multi-machine training. (by facebookincubator)
                NCCL                                        gloo
Mentions        3                                           2
Stars           2,744                                       1,132
Growth          3.5%                                        2.4%
Activity        5.9                                         8.1
Last commit     10 days ago                                 7 days ago
Language        C++                                         C++
License         GNU General Public License v3.0 or later    GNU General Public License v3.0 or later
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

NCCL

Posts with mentions or reviews of NCCL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.
  • MPI jobs to test
    2 projects | /r/HPC | 6 Jun 2023
    % rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
    Cloning into 'nccl'...
    remote: Enumerating objects: 2769, done.
    remote: Counting objects: 100% (336/336), done.
    remote: Compressing objects: 100% (140/140), done.
    remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
    Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
    Resolving deltas: 100% (1820/1820), done.
    README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
    src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi.
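
The README excerpt quoted above lists NCCL's core collectives (all-reduce, all-gather, reduce, broadcast, reduce-scatter) and notes that a single process can drive several GPUs. As a rough illustration only, here is a minimal single-process sketch of a summing all-reduce across all visible GPUs using NCCL's C API (ncclCommInitAll, ncclAllReduce); error checking is omitted and the buffer size is arbitrary.

    #include <cuda_runtime.h>
    #include <nccl.h>
    #include <cstdio>
    #include <vector>

    int main() {
      int ndev = 0;
      cudaGetDeviceCount(&ndev);

      // One communicator per visible GPU, all owned by this single process.
      std::vector<ncclComm_t> comms(ndev);
      ncclCommInitAll(comms.data(), ndev, nullptr);  // nullptr = use devices 0..ndev-1

      const size_t count = 1 << 20;  // arbitrary element count
      std::vector<float*> sendbuf(ndev), recvbuf(ndev);
      std::vector<cudaStream_t> streams(ndev);
      for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaMalloc(&sendbuf[i], count * sizeof(float));
        cudaMalloc(&recvbuf[i], count * sizeof(float));
        cudaMemset(sendbuf[i], 0, count * sizeof(float));  // placeholder payload
        cudaStreamCreate(&streams[i]);
      }

      // Group the per-GPU calls so NCCL can launch them as one collective.
      ncclGroupStart();
      for (int i = 0; i < ndev; ++i) {
        ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
      }
      ncclGroupEnd();

      // Wait for the collective to finish on every device.
      for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
      }

      // Cleanup.
      for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaFree(sendbuf[i]);
        cudaFree(recvbuf[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
      }
      std::printf("all-reduce completed on %d GPU(s)\n", ndev);
      return 0;
    }

This is roughly the kind of call that a higher-level framework such as PyTorch's nccl backend issues when torch.distributed is configured with that backend, as in the PyTorch-Ignite post above.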

gloo

Posts with mentions or reviews of gloo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-24.
  • Releasing Gloo 0.4.0
    3 projects | /r/rust | 24 Oct 2021
    These are two separate libraries that do very different things but share the same name. They are also written in two separate languages. That is a sizable gap between them, and reusing names happens often with libraries.

    Gloo (rust-wasm, this post) is also not new. Though, relative to Gloo (Go, solo-io), it is newer. But, there is also a Github repo even older than Gloo (solo-io): https://github.com/facebookincubator/gloo.

    As well, even if these were for some odd reason all about wasm, none of them are actually that popular. solo-io Gloo has the most stars (though that isn't the best metric of popularity, since it is relative to the community that actually uses it), but 3k simply isn't that much. There is certainly a good argument to look down on libraries that reuse popular library names, but this isn't really the case here. Both started not too long after each other (solo-io would not have most of the stars it currently has when Gloo-Rust started), are in separate languages (thus separate communities), and do very separate things.
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi.
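
gloo is commonly used as the CPU backend of torch.distributed mentioned in the same PyTorch-Ignite post, but it can also be driven directly from C++. The sketch below follows the rendezvous-then-collective pattern from gloo's documentation: processes exchange addresses through a shared FileStore, connect a full mesh over TCP, and run a ring all-reduce. The network interface name, rendezvous path, and the RANK/SIZE environment variables are illustrative assumptions, and class names and signatures may differ between gloo versions.

    #include <cstdlib>
    #include <iostream>
    #include <memory>
    #include <vector>

    #include "gloo/allreduce_ring.h"
    #include "gloo/rendezvous/context.h"
    #include "gloo/rendezvous/file_store.h"
    #include "gloo/transport/tcp/device.h"

    int main() {
      // Rank and world size come from environment variables here purely for
      // illustration; any out-of-band launcher or MPI could supply them.
      const int rank = std::atoi(std::getenv("RANK"));
      const int size = std::atoi(std::getenv("SIZE"));

      // TCP transport bound to a network interface (name is an assumption).
      gloo::transport::tcp::attr attr;
      attr.iface = "eth0";
      auto dev = gloo::transport::tcp::CreateDevice(attr);

      // Rendezvous through a shared filesystem path so all processes can
      // discover each other and connect a full mesh.
      gloo::rendezvous::FileStore store("/tmp/gloo-rendezvous");
      auto context = std::make_shared<gloo::rendezvous::Context>(rank, size);
      context->connectFullMesh(store, dev);

      // Each process contributes one buffer; the ring all-reduce sums the
      // buffers in place across all participants.
      std::vector<float> data(1024, static_cast<float>(rank));
      std::vector<float*> ptrs = {data.data()};
      gloo::AllreduceRing<float> allreduce(context, ptrs,
                                           static_cast<int>(data.size()));
      allreduce.run();

      std::cout << "rank " << rank << " first element after all-reduce: "
                << data[0] << std::endl;
      return 0;
    }

The same pattern applies to gloo's other algorithms (broadcast, all-gather, barrier); only the algorithm class and its buffers change.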

What are some alternatives?

When comparing NCCL and gloo you can also consider the following projects:

C++ Actor Framework - An Open Source Implementation of the Actor Model in C++

Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

ompi - Open MPI main development repository

HPX - The C++ Standard Library for Parallelism and Concurrency

xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)

Easy Creation of GnuPlot Scripts from C++ - A simple C++17 lib that helps you to quickly plot your data with GnuPlot

Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System

CUB - THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE.

moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11

laugh - Laughably simple yet effective Actor concurrency framework for C++20

ArrayFire - ArrayFire: a general purpose GPU library.

A C++14 library for executors - C++ library for executors