NCCL vs xla

Compare NCCL vs xla and see how they differ.

NCCL

Optimized primitives for collective multi-GPU communication (by NVIDIA)

xla

Enabling PyTorch on XLA Devices (e.g. Google TPU) (by pytorch)
              NCCL                                       xla
Mentions      3                                          8
Stars         2,796                                      2,285
Growth        3.5%                                       2.4%
Activity      5.9                                        9.9
Last commit   3 days ago                                 2 days ago
Language      C++                                        C++
License       GNU General Public License v3.0 or later   GNU General Public License v3.0 or later
Mentions is the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars is the number of stars a project has on GitHub; Growth is its month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

NCCL

Posts with mentions or reviews of NCCL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.
  • MPI jobs to test
    2 projects | /r/HPC | 6 Jun 2023
    % rm -rf /tmp/nccl ; git clone --recursive https://github.com/NVIDIA/nccl.git ; cd nccl ; git grep MPI
    Cloning into 'nccl'...
    remote: Enumerating objects: 2769, done.
    remote: Counting objects: 100% (336/336), done.
    remote: Compressing objects: 100% (140/140), done.
    remote: Total 2769 (delta 201), reused 287 (delta 196), pack-reused 2433
    Receiving objects: 100% (2769/2769), 3.04 MiB | 3.37 MiB/s, done.
    Resolving deltas: 100% (1820/1820), done.
    README.md:NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
    src/collectives/broadcast.cc:/* Deprecated original "in place" function, similar to MPI */
  • NVLink and Dual 3090s
    1 project | /r/nvidia | 4 May 2022
    If it's rendering, you don't really need SLI; you need to install NCCL so that GPU memory can be pooled: https://github.com/NVIDIA/nccl
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi.
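
The snippet above lists nccl, gloo, and mpi as backends for native torch.distributed. As a rough, hedged sketch (not taken from any of the quoted posts), the following shows how the NCCL backend is typically selected for an all-reduce; it assumes a host with NVIDIA GPUs, a CUDA build of PyTorch with NCCL support, and a torchrun launcher, and the script name in the comment is only a placeholder.

    # Minimal all-reduce sketch using the NCCL backend of torch.distributed.
    # Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
    import os

    import torch
    import torch.distributed as dist

    def main() -> None:
        # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment.
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # "nccl" is the GPU backend; "gloo" or "mpi" could be substituted on CPU-only hosts.
        dist.init_process_group(backend="nccl")

        # Each rank contributes a tensor; all_reduce sums them in place across all GPUs.
        x = torch.ones(4, device="cuda") * dist.get_rank()
        dist.all_reduce(x, op=dist.ReduceOp.SUM)
        print(f"rank {dist.get_rank()}: {x.tolist()}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Swapping "gloo" in for "nccl" lets the same sketch run on CPU-only machines, which is why libraries such as PyTorch-Ignite leave the backend configurable.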

xla

Posts with mentions or reviews of xla. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-26.
  • Who uses Google TPUs for inference in production?
    1 project | news.ycombinator.com | 11 Mar 2024
    > The PyTorch/XLA Team at Google

    Meanwhile you have an issue from 5 years ago with 0 support

    https://github.com/pytorch/xla/issues/202

  • Google TPU v5p beats Nvidia H100
    2 projects | news.ycombinator.com | 26 Jan 2024
    PyTorch has had an XLA backend for years. I don't know how performant it is though. https://pytorch.org/xla
  • Why Did Google Brain Exist?
    2 projects | news.ycombinator.com | 26 Apr 2023
    It's curtains for XLA, to be precise. And PyTorch officially supports an XLA backend nowadays too ([1]), which kind of puts JAX and PyTorch on the same foundation.

    1. https://github.com/pytorch/xla

  • Accelerating AI inference?
    4 projects | /r/tensorflow | 2 Mar 2023
    PyTorch supports other kinds of accelerators (e.g. FPGA, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both PyTorch and TensorFlow have hardware abstractions that compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimization. Considering the cost of training LLMs, it is time well spent.
  • [D] Colab TPU low performance
    2 projects | /r/MachineLearning | 18 Nov 2021
    While TPUs can theoretically achieve great speedups, getting to the point where they beat a single GPU requires a lot of fiddling around and debugging. A specific setup is required to make it work properly. E.g., here it says that to exploit TPUs you might need a better CPU than the one in Colab to keep the TPU busy. The tutorials I looked at oversimplified the whole matter; the same goes for pytorch-lightning, which implies that switching to TPU is as easy as changing a single parameter. Furthermore, none of the tutorials I saw (even after specifically searching for that) went into detail about why and how to set up a GCS bucket for data loading.
  • How to train large deep learning models as a startup
    5 projects | news.ycombinator.com | 7 Oct 2021
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    XLA on TPUs via pytorch/xla.
  • [P] PyTorch for TensorFlow Users - A Minimal Diff
    1 project | /r/MachineLearning | 9 Mar 2021
    I don't know of any such trick except for using TensorFlow. In fact, I benchmarked PyTorch XLA vs TensorFlow and found that the former's performance was quite abysmal: PyTorch XLA is very slow on Google Colab. The developers' explanation, as I understood it, was that TF was using features not available to the PyTorch XLA developers and that they therefore could not compete on performance. The situation may be different today, I don't know really.
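
Several of the posts above treat pytorch/xla as the bridge between PyTorch and TPUs. As a rough, hedged sketch (not taken from any of the quoted posts), here is roughly what a single training step looks like with the torch_xla API; it assumes the torch_xla package is installed and an XLA device is reachable, and the model and tensor shapes are placeholders.

    # Minimal single-training-step sketch on an XLA device via pytorch/xla.
    import torch
    import torch.nn as nn
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()                 # TPU core if available, else an XLA CPU/GPU device
    model = nn.Linear(128, 10).to(device)    # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(32, 128, device=device)  # placeholder batch
    y = torch.randint(0, 10, (32,), device=device)

    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    xm.optimizer_step(optimizer)             # reduces gradients across cores, then steps the optimizer
    xm.mark_step()                           # cuts the lazy trace so XLA compiles and executes it
    print(loss.item())

The mark_step() call is the part tutorials tend to gloss over: operations are traced lazily and only compiled and executed when the step is cut, which is one reason naive ports can look slow, as in the Colab benchmark mentioned above.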

What are some alternatives?

When comparing NCCL and xla you can also consider the following projects:

gloo - Collective communications library with various primitives for multi-machine training.

pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]

C++ Actor Framework - An Open Source Implementation of the Actor Model in C++

pocketsphinx - A small speech recognizer

Thrust - [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

why-ignite - Why should we use PyTorch-Ignite?

HPX - The C++ Standard Library for Parallelism and Concurrency

ignite - High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

Easy Creation of GnuPlot Scripts from C++ - A simple C++17 lib that helps you to quickly plot your data with GnuPlot

ompi - Open MPI main development repository

Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System