NCCL vs idist-snippets

Compare NCCL vs idist-snippets and see what their differences are.

NCCL

Optimized primitives for collective multi-GPU communication (by NVIDIA)
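
NCCL itself ships a C API, but from Python it is most often reached through torch.distributed, which uses NCCL as its GPU backend. The following is a minimal sketch, not a definitive recipe: it assumes a single machine with at least two CUDA GPUs, a PyTorch build with NCCL support, and placeholder rendezvous address/port values.

    # Sketch: summing a tensor across GPUs via torch.distributed's "nccl" backend.
    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp

    def worker(rank: int, world_size: int) -> None:
        # One process per GPU; MASTER_ADDR/PORT are placeholder rendezvous values.
        os.environ["MASTER_ADDR"] = "127.0.0.1"
        os.environ["MASTER_PORT"] = "29500"
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        # Each rank contributes rank+1; after all_reduce every rank holds the sum.
        x = torch.full((4,), float(rank + 1), device=f"cuda:{rank}")
        dist.all_reduce(x, op=dist.ReduceOp.SUM)
        print(f"rank {rank}: {x.tolist()}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        n_gpus = torch.cuda.device_count()
        mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)

With two GPUs, both ranks print [3.0, 3.0, 3.0, 3.0] (1 + 2), the same semantics as NCCL's native ncclAllReduce.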
                 NCCL                                       idist-snippets
Mentions         3                                          1
Stars            2,796                                      4
Stars growth     3.5%                                       -
Activity         5.9                                        0.0
Latest commit    8 days ago                                 almost 3 years ago
Language         C++                                        Python
License          GNU General Public License v3.0 or later   -
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

NCCL

Posts with mentions or reviews of NCCL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.
  • MPI jobs to test
    2 projects | /r/HPC | 6 Jun 2023
    % rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
    Cloning into 'nccl'...
    remote: Enumerating objects: 2769, done.
    remote: Counting objects: 100% (336/336), done.
    remote: Compressing objects: 100% (140/140), done.
    remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
    Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
    Resolving deltas: 100% (1820/1820), done.
    README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
    src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
  • NVLink and Dual 3090s
    1 project | /r/nvidia | 4 May 2022
    If it's rendering, you don't really need SLI; you need to install NCCL so that GPU memory can be pooled: https://github.com/NVIDIA/nccl
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi (a minimal idist sketch follows this list).
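
The ignite.distributed module ("idist") that idist-snippets demonstrates puts a single launcher in front of those backends. Below is a minimal sketch, assuming pytorch-ignite is installed; "gloo" is chosen so the example also runs on CPU, and the config values are illustrative only.

    # Sketch: running the same training function under any idist backend.
    import ignite.distributed as idist

    def training(local_rank, config):
        # idist reports rank, world size, and device uniformly across backends.
        print(f"rank {idist.get_rank()} of {idist.get_world_size()} "
              f"on {idist.device()}; lr={config['lr']}")
        # idist.auto_model(...) and idist.auto_dataloader(...) would adapt a
        # model and data loader to whichever backend is active.

    if __name__ == "__main__":
        # Swap backend="gloo" for "nccl" (GPUs) or "mpi" where available.
        with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
            parallel.run(training, config={"lr": 1e-3})

Selecting the backend at the launcher rather than inside the training code is the point the dev.to post above makes: the same function can run serially, on multiple GPUs via nccl, or across nodes via mpi.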

idist-snippets

Posts with mentions or reviews of idist-snippets. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-08-10.

What are some alternatives?

When comparing NCCL and idist-snippets, you can also consider the following projects:

gloo - Collective communications library with various primitives for multi-machine training.

ignite - High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

C++ Actor Framework - An Open Source Implementation of the Actor Model in C++

Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

why-ignite - Why should we use PyTorch-Ignite?

HPX - The C++ Standard Library for Parallelism and Concurrency

xla - Enabling PyTorch on XLA Devices (e.g. Google TPU)

ompi - Open MPI main development repository

Easy Creation of GnuPlot Scripts from C++ - A simple C++17 lib that helps you quickly plot your data with GnuPlot

Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System