xla VS mojo.vim

Compare xla vs mojo.vim and see what their differences are.

xla

Enabling PyTorch on XLA Devices (e.g. Google TPU) (by pytorch)

mojo.vim

Vim Syntax Highlighting for the Mojo programming language (by czheo)
                 xla                                        mojo.vim
Mentions         8                                          3
Stars            2,296                                      23
Growth           1.7%                                       -
Activity         9.9                                        4.5
Latest commit    5 days ago                                 26 days ago
Language         C++                                        Vim Script
License          GNU General Public License v3.0 or later   -
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

xla

Posts with mentions or reviews of xla. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.
  • Who uses Google TPUs for inference in production?
    1 project | news.ycombinator.com | 11 Mar 2024
    > The PyTorch/XLA Team at Google

    Meanwhile you have an issue from 5 years ago with 0 support

    https://github.com/pytorch/xla/issues/202

  • Google TPU v5p beats Nvidia H100
    2 projects | news.ycombinator.com | 26 Jan 2024
    PyTorch has had an XLA backend for years. I don't know how performant it is, though. https://pytorch.org/xla (a minimal usage sketch appears after this list)
  • Why Did Google Brain Exist?
    2 projects | news.ycombinator.com | 26 Apr 2023
    It's curtains for XLA, to be precise. And PyTorch officially supports the XLA backend nowadays too ([1]), which kind of puts JAX and PyTorch on the same foundation.

    1. https://github.com/pytorch/xla

  • Accelerating AI inference?
    4 projects | /r/tensorflow | 2 Mar 2023
    PyTorch supports other kinds of accelerators (e.g. FPGA, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both PyTorch and TensorFlow have hardware abstractions that will compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimizations. Considering the cost of training LLMs, it is time well spent.
  • [D] Colab TPU low performance
    2 projects | /r/MachineLearning | 18 Nov 2021
    While TPUs can apparently achieve great speedups in theory, getting to the point where they beat a single GPU requires a lot of fiddling around and debugging. A specific setup is required to make it work properly. E.g., here it says that to exploit TPUs you might need a better CPU than the one in Colab to keep the TPU busy. The tutorials I looked at oversimplified the whole matter; the same goes for pytorch-lightning, which implies that switching to TPU is as easy as changing a single parameter. Furthermore, none of the tutorials I saw (even after specifically searching for that) went into detail about why and how to set up a GCS bucket for data loading. (A data-loading sketch along these lines appears after this list.)
  • How to train large deep learning models as a startup
    5 projects | news.ycombinator.com | 7 Oct 2021
  • Distributed Training Made Easy with PyTorch-Ignite
    7 projects | dev.to | 10 Aug 2021
    XLA on TPUs via pytorch/xla.
  • [P] PyTorch for TensorFlow Users - A Minimal Diff
    1 project | /r/MachineLearning | 9 Mar 2021
    I don't know of any such trick except for using TensorFlow. In fact, I benchmarked PyTorch XLA vs TensorFlow and found that the former's performance was quite abysmal: PyTorch XLA is very slow on Google Colab. The developers' explanation, as I understood it, was that TF was using features not available to the PyTorch XLA developers, and that they therefore could not compete on performance. The situation may be different today; I don't really know.
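
A couple of the posts above mention the PyTorch/XLA backend without showing what using it actually looks like. Purely as an illustrative sketch (not code from any of the posts), the snippet below runs a single training step on an XLA device via the torch_xla package; the toy model and tensor shapes are invented for the example, and newer torch_xla releases may expose the device differently (e.g. torch_xla.device()).

    # Illustrative sketch only: one training step on an XLA device.
    # Assumes torch and torch_xla are installed and an XLA device (e.g. a TPU) is reachable;
    # the model and shapes here are made up for the example.
    import torch
    import torch.nn as nn
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()                     # TPU core if one is available

    model = nn.Linear(128, 10).to(device)        # toy model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(64, 128, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # barrier=True forces the lazily built XLA graph to compile and execute at this point
    xm.optimizer_step(optimizer, barrier=True)
    print(loss.item())

Because XLA traces operations lazily and compiles them into a graph, the first step carries the compilation cost and later steps are much faster, which is one reason very short benchmarks on Colab can be misleading.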
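
The Colab TPU thread above largely comes down to the input pipeline: the TPU sits idle unless the host can feed it fast enough. As a rough sketch of the kind of pattern pytorch/xla provides for this (again not code from the post, and the in-memory dataset is a stand-in for real data such as something streamed from a GCS bucket), a plain DataLoader can be wrapped in an MpDeviceLoader so batches are prefetched and moved to the device in the background:

    # Illustrative sketch only: feeding an XLA device through an asynchronous loader.
    # Assumes torch and torch_xla are installed; TensorDataset is a placeholder for a real pipeline.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    import torch_xla.core.xla_model as xm
    import torch_xla.distributed.parallel_loader as pl

    device = xm.xla_device()

    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
    loader = DataLoader(dataset, batch_size=64, num_workers=2, drop_last=True)

    # MpDeviceLoader prefetches batches and transfers them to the XLA device in the
    # background, so the accelerator is not left waiting on the host CPU.
    device_loader = pl.MpDeviceLoader(loader, device)

    for x, y in device_loader:
        # x and y already live on the XLA device at this point
        pass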

mojo.vim

Posts with mentions or reviews of mojo.vim. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-28.
  • Mojo Standard Library Is Open Sourced
    1 project | news.ycombinator.com | 28 Mar 2024
  • Google TPU v5p beats Nvidia H100
    2 projects | news.ycombinator.com | 26 Jan 2024
    Hmm, the creator says (from his podcast with Lex Fridman, when I listened to it) that they are open sourcing it, but that it is a project born out of a private effort at their company and is still being used privately - so the aim is to open source it while taking community input and updating their private code to reflect the evolving design, so that when they release it their internal lang and the open-sourced lang will not diverge.

    Of course that's not ideal, but it's better than "open sourcing" it and refusing every request because it does not work for their codebase. Worse than having it open source from the get-go, of course.

    Assuming that day comes, does it have a competitor in the works? A Python superset, compatible with Python libs, that lets you directly program GPUs and TPUs without CUDA or anything?

    "Never" means you believe it will never be open sourced, or that a competitor will surpass it by the time it is open sourced, or that you believe the premise of the lang is flawed and we don't need such a thing. Which one is it?

    Here is their GitHub, btw: https://github.com/modularml/mojo

    From what I see, they have a pretty active community and there is demand for such a system.

    The GitHub repo says something similar:

    >This repo is the beginning of our Mojo open source effort. We've started with Mojo code examples and documentation, and we'll add the Mojo standard library as soon as we get the necessary infrastructure in place. The challenge is that we use Mojo pervasively inside Modular and we need to make sure that community contributions can proceed smoothly with good build and testing tools that will allow this repo to become the source of truth (right now it is not). We'll progressively add the necessary components, such as continuous integration, build tools, and more source code over time.

  • Vim Syntax Highlighting for the Mojo programming language
    1 project | /r/modular_mojo | 2 Jun 2023

What are some alternatives?

When comparing xla and mojo.vim you can also consider the following projects:

NCCL - Optimized primitives for collective multi-GPU communication

pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]

why-ignite - Why should we use PyTorch-Ignite?

pocketsphinx - A small speech recognizer

ignite - High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

ompi - Open MPI main development repository

gloo - Collective communications library with various primitives for multi-machine training.

pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.

idist-snippets

Megatron-LM - Ongoing research training transformer models at scale

determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.

tensorflow-nanoGPT - Example of how to train GPT-2 (XLA + AMP), export it to a SavedModel, and serve it with TensorFlow Serving