ompi VS xla

Compare ompi and xla to see how they differ.

ompi

Open MPI main development repository (by open-mpi)

xla

Enabling PyTorch on XLA Devices (e.g. Google TPU) (by pytorch)
                ompi                                        xla
Mentions        10                                          8
Stars           2,016                                       2,291
Growth          3.3%                                        2.7%
Activity        9.7                                         9.9
Latest commit   1 day ago                                   5 days ago
Language        C                                           C++
License         GNU General Public License v3.0 or later    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ompi

Posts with mentions or reviews of ompi. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-09.
  • Ask HN: Does anyone care about OpenPOWER?
    2 projects | news.ycombinator.com | 9 Feb 2024
    The commercial Linux world (see https://github.com/open-mpi/ompi/issues/4349) and other open source OSes (eg FreeBSD) seem to have lined up behind little-endian PowerPC. IBM still has a big-endian problem with AIX, IBM i, and Linux on Z.
  • Announcing Chapel 1.32
    6 projects | news.ycombinator.com | 9 Oct 2023
    Roughly, the sets of computational problems that people used (use?) MPI for. Things like numerical solvers for sparse matrices that are so big that you need to split them across your entire cluster. These still require a lot of node-to-node communication, and on top of it, the pattern is dependent on each problem (so easy solutions like map-reduce are effectively out). See eg https://www.open-mpi.org/, and https://courses.csail.mit.edu/18.337/2005/book/Lecture_08-Do... for the prototypical use case.
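
To make that communication pattern concrete, here is a minimal sketch of the nearest-neighbour boundary exchange such solvers perform on every iteration. It uses Python's mpi4py bindings (an assumption; the same pattern in C uses MPI_Sendrecv) on top of an MPI implementation like Open MPI:

    # Halo-exchange sketch with mpi4py; run with e.g. `mpirun -n 4 python halo.py`.
    # Each rank owns one slab of the global vector and must swap boundary
    # values with its neighbours before every solver iteration.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    local = np.full(10, float(rank))  # this rank's slab of the domain

    # sendrecv pairs each send with the matching receive, so neighbouring
    # ranks cannot deadlock waiting on one another.
    if rank > 0:
        left_ghost = comm.sendrecv(local[0], dest=rank - 1, source=rank - 1)
    if rank < size - 1:
        right_ghost = comm.sendrecv(local[-1], dest=rank + 1, source=rank + 1)
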
  • How much are you meant to comment on a code?
    1 project | /r/AskProgramming | 11 May 2023
    One of the guys at the local LUG is one of the lead maintainers of Open MPI. He told us about a comment that ran into the hundreds of lines, all for a one-line change in the code.
  • Which license to choose when you want credit
    1 project | /r/github | 12 Mar 2023
    But it would be very inconvenient to have to keep crediting everyone who's ever worked on it. If you look at old projects, their licenses can have like 10-20 of those lines (here's one I was recently looking into).
  • First True Exascale Supercomputer
    2 projects | news.ycombinator.com | 6 Jul 2022
    I have a bit of experience programming for a highly-parallel supercomputer, specifically in my case an IBM BlueGene/Q. In that case, the answer is a lot of message passing (we used Open MPI [0]). Since the nodes are discrete and don't have any shared memory, you end up with something kinda reminiscent of the actor model as popularized by Erlang and co -- but in C for number-crunching performance.

    That said, each of the nodes is itself composed of multiple cores with shared memory. So in cases where you really want to grind out performance, you actually end up using message passing to divvy up chunks of work, and then use classic pthreads to parallelize things further, with lower latency.

    Debugging is a bit of a nightmare, though, since some bugs inevitably only come up once you have a large number of nodes running the algorithm in parallel. But you'll probably be in a mainframe-style time-sharing setup, so you may have to wait hours or more to rerun things.

    This applies less to some of the newer supercomputers, which are more or less clusters of GPUs instead of clusters of CPUs. I imagine there's some commonality, but I haven't worked with any of them so I can't really say.

    [0] https://www.open-mpi.org/
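
A rough sketch of that hybrid "message passing between nodes, shared memory within a node" pattern, again with mpi4py (in C this would be MPI plus pthreads or OpenMP; here a thread pool stands in, since NumPy releases the GIL during computation):

    # Coarse decomposition via MPI, fine decomposition via threads.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    chunk = np.random.rand(1_000_000) + rank  # this rank's share of the work

    def partial_sum(sub):
        return float(np.sum(sub))  # NumPy drops the GIL here, so threads overlap

    with ThreadPoolExecutor(max_workers=4) as pool:
        local_total = sum(pool.map(partial_sum, np.array_split(chunk, 4)))

    # Combine the per-node results into a global answer on rank 0.
    total = comm.reduce(local_total, op=MPI.SUM, root=0)
    if rank == 0:
        print("global sum:", total)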

  • Managing parallelism by process vs by machine
    1 project | /r/ExperiencedDevs | 30 May 2022
  • MPI + CUDA Program for thermal conductivity problem
    2 projects | /r/CUDA | 4 May 2022
    I would suggest using OpenMPI because it's pretty easy to get started with. You can build OpenMPI with CUDA support, then you can pass device pointers directly to MPI_Send and MPI_Recv. Then you don't have to deal with transfers and synchronization issues.
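
As a hedged illustration of that workflow from Python: with an Open MPI build that has CUDA support (and an mpi4py compiled against it; both are assumptions here), GPU buffers such as CuPy arrays can be handed straight to Send/Recv, with no explicit host staging:

    # CUDA-aware MPI sketch; run with e.g. `mpirun -n 2 python cuda_mpi.py`.
    import cupy as cp
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        buf = cp.arange(1024, dtype=cp.float32)     # allocated in GPU memory
        cp.cuda.get_current_stream().synchronize()  # ensure the data is ready
        comm.Send(buf, dest=1, tag=7)               # device pointer goes to MPI directly
    elif rank == 1:
        buf = cp.empty(1024, dtype=cp.float32)
        comm.Recv(buf, source=0, tag=7)             # lands directly in GPU memory
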
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    backends from native torch distributed configuration: nccl, gloo, mpi.
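
For reference, selecting one of those backends in native torch.distributed is a one-line choice; a minimal sketch (the "mpi" backend additionally requires PyTorch to have been built against an MPI library such as Open MPI):

    # Usually launched via torchrun, which sets MASTER_ADDR, MASTER_PORT,
    # RANK, and WORLD_SIZE in the environment for the rendezvous.
    import torch.distributed as dist

    dist.init_process_group(backend="gloo")  # or "nccl" for GPUs, "mpi" if available

    print(f"rank {dist.get_rank()} of {dist.get_world_size()}")

    dist.destroy_process_group()
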
  • FEA computer simulation question
    1 project | /r/buildapc | 23 Apr 2021
I use a Linux Ubuntu machine with MPI (https://www.open-mpi.org/). I had a question about making my computer simulations faster. Would it be better to get an older AMD 9590 machine clocked at 4.7 GHz or continue using my Ryzen 7 1700 machine clocked at something like 3.5 GHz?
  • C Deep
    80 projects | dev.to | 27 Feb 2021
    OpenMPI - Message passing interface implementation. BSD-3-Clause

xla

Posts with mentions or reviews of xla. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-26.
  • Who uses Google TPUs for inference in production?
    1 project | news.ycombinator.com | 11 Mar 2024
    > The PyTorch/XLA Team at Google

    Meanwhile you have an issue from 5 years ago with 0 support

    https://github.com/pytorch/xla/issues/202

  • Google TPU v5p beats Nvidia H100
    2 projects | news.ycombinator.com | 26 Jan 2024
    PyTorch has had an XLA backend for years. I don't know how performant it is though. https://pytorch.org/xla
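
For what that backend looks like in user code, a minimal sketch, assuming the torch_xla package is installed (e.g. on a TPU VM):

    # Operations on XLA-device tensors are recorded into a graph, which
    # mark_step() then compiles and executes through XLA.
    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()  # the TPU (or another XLA device)
    x = torch.randn(4, 4, device=device)
    y = (x @ x.t()).sum()
    xm.mark_step()            # flush the pending graph through the compiler
    print(y.item())
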
  • Why Did Google Brain Exist?
    2 projects | news.ycombinator.com | 26 Apr 2023
It's curtains for XLA, to be precise. And PyTorch officially supports an XLA backend nowadays too ([1]), which kind of puts JAX and PyTorch on the same foundation.

    1. https://github.com/pytorch/xla

  • Accelerating AI inference?
    4 projects | /r/tensorflow | 2 Mar 2023
Pytorch supports other kinds of accelerators (e.g. FPGA, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both pytorch and tensorflow have hardware abstractions that will compile down to device code. (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimizations. Considering the cost of training LLMs, it is time well spent.
  • [D] Colab TPU low performance
    2 projects | /r/MachineLearning | 18 Nov 2021
    While apparently TPUs can theoretically achieve great speedups, getting to the point where they beat a single GPU requires a lot of fiddling around and debugging. A specific setup is required to make it work properly. E.g., here it says that to exploit TPUs you might need a better CPU to keep the TPU busy, than the one in colab. The tutorials I looked at oversimplified the whole matter, the same goes for pytorch-lightning which implies switching to TPU is as easy as changing a single parameter. Furthermore, none of the tutorials I saw (even after specifically searching for that) went into detail about why and how to set up a GCS bucket for data loading.
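
One of those fiddly pieces is keeping the TPU fed from the host. A sketch of the usual remedy in pytorch/xla, its MpDeviceLoader wrapper, which prefetches batches onto the device in the background (the toy TensorDataset is a stand-in for real data):

    import torch
    import torch_xla.core.xla_model as xm
    import torch_xla.distributed.parallel_loader as pl

    device = xm.xla_device()
    dataset = torch.utils.data.TensorDataset(torch.randn(1024, 10))
    loader = torch.utils.data.DataLoader(dataset, batch_size=128, num_workers=4)
    device_loader = pl.MpDeviceLoader(loader, device)  # prefetch onto the TPU

    for (batch,) in device_loader:
        loss = batch.sum()  # stand-in for a real training step
        xm.mark_step()
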
  • How to train large deep learning models as a startup
    5 projects | news.ycombinator.com | 7 Oct 2021
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    XLA on TPUs via pytorch/xla.
  • [P] PyTorch for TensorFlow Users - A Minimal Diff
    1 project | /r/MachineLearning | 9 Mar 2021
    I don't know of any such trick except for using TensorFlow. In fact, I benchmarked PyTorch XLA vs TensorFlow and found that the former's performance was quite abysmal: PyTorch XLA is very slow on Google Colab. The developers' explanation, as I understood it, was that TF was using features not available to the PyTorch XLA developers and that they therefore could not compete on performance. The situation may be different today, I don't know really.

What are some alternatives?

When comparing ompi and xla you can also consider the following projects:

gloo - Collective communications library with various primitives for multi-machine training.

NCCL - Optimized primitives for collective multi-GPU communication

Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.

pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]

why-ignite - Why should we use PyTorch-Ignite?

FlatBuffers - FlatBuffers: Memory Efficient Serialization Library

pocketsphinx - A small speech recognizer

libvips - A fast image processing library with low memory needs.

ignite - High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

SWIFT - Modern astrophysics and cosmology particle-based code. Mirror of gitlab developments at https://gitlab.cosma.dur.ac.uk/swift/swiftsim