NCCL vs A C++14 library for executors

Compare NCCL and A C++14 library for executors and see how they differ.

NCCL

Optimized primitives for collective multi-GPU communication (by NVIDIA)

A C++14 library for executors

C++ library for executors (by chriskohlhoff)
At a glance (NCCL / A C++14 library for executors):
  • Mentions: 3 / 0
  • Stars: 2,764 / 475
  • Stars growth (month over month): 4.2% / -
  • Activity: 5.9 / 0.0
  • Latest commit: 3 days ago / over 7 years ago
  • Language: C++ / C++
  • License: GNU General Public License v3.0 or later / Boost Software License 1.0
How to read these numbers:
  • Mentions: the total number of mentions we have tracked plus the number of user-suggested alternatives.
  • Stars: the number of stars the project has on GitHub.
  • Growth: month-over-month growth in stars.
  • Activity: a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 means the project is among the top 10% of the most actively developed projects we track.

NCCL

Posts with mentions or reviews of NCCL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.
  • MPI jobs to test
    2 projects | /r/HPC | 6 Jun 2023
    % rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
    Cloning into 'nccl'...
    remote: Enumerating objects: 2769, done.
    remote: Counting objects: 100% (336/336), done.
    remote: Compressing objects: 100% (140/140), done.
    remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
    Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
    Resolving deltas: 100% (1820/1820), done.
    README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
    src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
    (A minimal single-process NCCL all-reduce sketch follows this list.)
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi.
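
The README excerpt quoted above lists NCCL's collectives (all-reduce, all-gather, reduce, broadcast, reduce-scatter) and notes that a single process can drive several GPUs. As a rough sketch of what that looks like in C++ (this is not code from the NCCL repository; the buffer size, fill pattern, and error-checking macros are placeholders), a single-process all-reduce across all visible GPUs can be written roughly as:

```cpp
// Sketch: single-process all-reduce over all visible GPUs with NCCL.
// Build against the CUDA runtime and NCCL, e.g.: nvcc allreduce_sketch.cc -lnccl
#include <nccl.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define CUDACHECK(cmd) do { cudaError_t e = (cmd); if (e != cudaSuccess) {   \
    std::fprintf(stderr, "CUDA error %s:%d '%s'\n", __FILE__, __LINE__,      \
                 cudaGetErrorString(e)); std::exit(1); } } while (0)
#define NCCLCHECK(cmd) do { ncclResult_t r = (cmd); if (r != ncclSuccess) {  \
    std::fprintf(stderr, "NCCL error %s:%d '%s'\n", __FILE__, __LINE__,      \
                 ncclGetErrorString(r)); std::exit(1); } } while (0)

int main() {
  int nDev = 0;
  CUDACHECK(cudaGetDeviceCount(&nDev));
  const size_t count = 1 << 20;  // elements per GPU (placeholder size)

  std::vector<ncclComm_t> comms(nDev);
  std::vector<float*> sendbuf(nDev), recvbuf(nDev);
  std::vector<cudaStream_t> streams(nDev);
  std::vector<int> devs(nDev);

  // One buffer pair and one stream per GPU.
  for (int i = 0; i < nDev; ++i) {
    devs[i] = i;
    CUDACHECK(cudaSetDevice(i));
    CUDACHECK(cudaMalloc(&sendbuf[i], count * sizeof(float)));
    CUDACHECK(cudaMalloc(&recvbuf[i], count * sizeof(float)));
    CUDACHECK(cudaMemset(sendbuf[i], 1, count * sizeof(float)));  // arbitrary fill
    CUDACHECK(cudaStreamCreate(&streams[i]));
  }

  // Single-process path: one communicator per GPU, created together.
  NCCLCHECK(ncclCommInitAll(comms.data(), nDev, devs.data()));

  // Sum-reduce across all GPUs; group the calls so they launch as one operation.
  NCCLCHECK(ncclGroupStart());
  for (int i = 0; i < nDev; ++i)
    NCCLCHECK(ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                            comms[i], streams[i]));
  NCCLCHECK(ncclGroupEnd());

  // Wait for completion, then release resources.
  for (int i = 0; i < nDev; ++i) {
    CUDACHECK(cudaSetDevice(i));
    CUDACHECK(cudaStreamSynchronize(streams[i]));
    CUDACHECK(cudaFree(sendbuf[i]));
    CUDACHECK(cudaFree(recvbuf[i]));
    CUDACHECK(cudaStreamDestroy(streams[i]));
    NCCLCHECK(ncclCommDestroy(comms[i]));
  }
  std::printf("all-reduce complete on %d GPU(s)\n", nDev);
  return 0;
}
```

In the multi-process (e.g., MPI) setup the README also mentions, each rank would instead create a single communicator with ncclCommInitRank, using a ncclUniqueId produced by ncclGetUniqueId on one rank and distributed to the others out of band.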

A C++14 library for executors

Posts with mentions or reviews of A C++14 library for executors. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning A C++14 library for executors yet.
Tracking mentions began in Dec 2020.

What are some alternatives?

When comparing NCCL and A C++14 library for executors you can also consider the following projects:

gloo - Collective communications library with various primitives for multi-machine training.

C++ Actor Framework - An Open Source Implementation of the Actor Model in C++

libdill - Structured concurrency in C

Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

libmill - Go-style concurrency in C

Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System

HPX - The C++ Standard Library for Parallelism and Concurrency

continuable - C++14 asynchronous allocation aware futures (supporting then, exception handling, coroutines and connections)

xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)

Easy Creation of GnuPlot Scripts from C++ - A simple C++17 lib that helps you to quickly plot your data with GnuPlot

moodycamel - A fast multi-producer, multi-consumer lock-free concurrent queue for C++11