PyTorch on Apple M1 Faster Than TensorFlow-Metal

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • mmperf

    MatMul Performance Benchmarks for a Single CPU Core, comparing both hand-engineered and codegen kernels.

  • Here are the matmul sizes for the MiniLM model used for inference: https://github.com/mmperf/mmperf/blob/main/benchmark_sizes/b...

    These are the matmul sizes for the BERT training workload: https://github.com/mmperf/mmperf/blob/main/benchmark_sizes/b...

    Yes, we use the latest MoltenVK (1.3.204.0) installed on the system.

    I will let @noxa and the other IREE devs chime in on the SPIR-V path, but we do support prefix sums etc. in the GPU path.

    //part of nod.ai team.
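
    For context on what a single-core matmul measurement looks like, here is a minimal PyTorch sketch of the kind of timing mmperf automates; it is a rough illustration only, and the 384x384x384 shape is a hypothetical placeholder rather than one of the MiniLM/BERT sizes from the linked benchmark_sizes files.

        import time
        import torch

        # Pin PyTorch to a single CPU thread, matching mmperf's single-core runs.
        torch.set_num_threads(1)

        # Hypothetical (M, N, K) matmul shape; the real MiniLM/BERT sizes live in
        # the benchmark_sizes files linked above.
        M, N, K = 384, 384, 384
        a = torch.randn(M, K, dtype=torch.float32)
        b = torch.randn(K, N, dtype=torch.float32)

        # Warm up, then time a batch of iterations and report achieved GFLOP/s.
        for _ in range(10):
            a @ b
        iters = 100
        start = time.perf_counter()
        for _ in range(iters):
            c = a @ b
        elapsed = time.perf_counter() - start
        gflops = 2.0 * M * N * K * iters / elapsed / 1e9
        print(f"{elapsed / iters * 1e3:.3f} ms/iter, {gflops:.1f} GFLOP/s")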

  • cutlass

    CUDA Templates for Linear Algebra Subroutines

    So with Tensor Cores you use TF32, which is really more like FP19, and the marketing makes you think you get 8x the performance. But if you want actual FP32 precision you will need something like [1], and then your performance on the Tensor Core path is _only_ 2x faster than the SIMT path.

    I'll leave the prefix sum for other devs who know more :D

    https://github.com/NVIDIA/cutlass/blob/master/examples/27_am...

    //part of nod.ai/shark team
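
    To make the TF32-versus-FP32 tradeoff concrete, here is a hedged PyTorch sketch that flips the public allow_tf32 switch and compares rounding error against an FP64 reference; it assumes an Ampere-or-newer NVIDIA GPU and is not the CUTLASS 3xTF32 example from [1].

        import torch

        # Large square matrices on the GPU; TF32 only affects CUDA matmuls.
        a = torch.randn(4096, 4096, device="cuda")
        b = torch.randn(4096, 4096, device="cuda")

        # FP64 reference for measuring rounding error.
        ref = a.double() @ b.double()

        # Full-FP32 path (no TF32 downconversion).
        torch.backends.cuda.matmul.allow_tf32 = False
        err_fp32 = (a @ b - ref).abs().max().item()

        # TF32 path: ~10-bit mantissa inputs on Tensor Cores - faster, less precise.
        torch.backends.cuda.matmul.allow_tf32 = True
        err_tf32 = (a @ b - ref).abs().max().item()

        print(f"max abs error  fp32: {err_fp32:.3e}  tf32: {err_tf32:.3e}")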

  • iree

    A retargetable MLIR-based machine learning compiler and runtime toolkit.

  • Exactly the kind of thing we've been talking about! It's a fun and challenging tradeoff space, and it's always great to connect with others!

    Ahh linebender - I hadn't connected the name with your github account - piet-gpu is great, as is your blog! Also, for anyone skimming the comments this talk is fantastic and I share it with anyone new to the GPGPU space: https://www.youtube.com/watch?v=DZRn_jNZjbw

    We waffled a bit on the API granularity in the beginning, and it's taken building out most of the rest of the project to nail it down (the big refactor is still pending). The biggest issue is that in simple models we'll end up emitting a single command buffer, but anything with control flow (that we can't predicate), data dependencies (sparsity, thresholding, etc.), or CPU work in the middle (IO, custom user code, etc.) can break that up. We also hit cases where we need to flush work - such as if we run out of usable memory and need to defragment or resize our pools.

    We want to be able to (but aren't yet able to) reuse command buffers (CUDA graphs, etc.), and that requires being able to both cache them and recreate them on demand: if we resize a pool we have to invalidate all cached command buffers using those resources, since update-after-bind is not universally available, and if shapes change there are big ripples.

    Since most models beyond simple vision ones are ~thousands of dispatches, this also lets us better integrate into multithreaded applications like you mention, as apps can record commands for themselves in parallel without synchronization. It still would be nice to have certain operations inlined, though, and for that we want to allow custom hooks that we call into to add commands to the command buffers, turning things inside-out to make small amounts of work - like image transformations between model layers - possible (I'm really hoping we can avoid modeling the entire graphics pipeline in the compiler, and this would be a way around that :).

    We haven't yet started on scheduling across queues, but that's also very interesting, especially in multi-GPU cases (with x4/x8 GPUs being common in datacenters, or NUMA CPU clusters that can be scheduled similarly).

    We're fully open source (https://github.com/google/iree) but have been operating quietly while we get the groundwork in place - it's taken some time but now we're finally starting to stumble into success on certain problem categories (like transformers as in the post). Right now it's mostly just organized as a systems/compiler nerd honeypot for people looking for an ML/number crunching framework that (purposefully) doesn't look like any of the existing ones :)

    Would love to chat more - even if just to commiserate over GPU APIs and such - everyone is welcome on the discord where a bunch of us nerds have gathered or we could grab virtual coffee (realized just now that this hn acct is ancient - I'm [email protected] :)
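
    As a toy illustration of the cache-and-invalidate pattern described above (not IREE's actual code), the sketch below keys recorded "command buffers" by shape and pool generation, re-records them on demand, and drops anything recorded against a pool that has since been resized.

        from dataclasses import dataclass, field

        @dataclass
        class Pool:
            # Bumping `generation` stands in for a resize/defragment that moves resources.
            generation: int = 0

            def resize(self) -> None:
                self.generation += 1

        @dataclass
        class CommandBufferCache:
            pool: Pool
            cache: dict = field(default_factory=dict)

            def get(self, shapes, record_fn):
                # Entries recorded against an older pool generation have stale bindings.
                self.cache = {k: v for k, v in self.cache.items()
                              if k[1] == self.pool.generation}
                # `shapes` must be hashable (e.g. a tuple); new shapes get a new entry.
                key = (shapes, self.pool.generation)
                if key not in self.cache:
                    # Re-record on demand (new shapes or a resized pool).
                    self.cache[key] = record_fn(shapes)
                return self.cache[key]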

  • shark-samples

  • I updated the blog with the reference. Basically, it crashes when compiling the model with https://github.com/NodLabs/shark-samples/blob/main/examples/.... The coremltools converter is very version-specific (like all vendor conversion kits) and is still pinned to a version of TF I couldn't get on conda. Also, it doesn't allow for training and supports only FP16 for inference with the ANE. All our tests were with FP32.

    //part of nod.ai/shark team.
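
    For reference, the coremltools conversion path being discussed typically looks something like the sketch below; the exact arguments vary across coremltools/TF versions (which is the version-pinning problem mentioned above), and the tiny Keras model here is just a stand-in for the real network.

        import coremltools as ct
        import tensorflow as tf

        # Placeholder model; the real workload was a transformer, not this toy MLP.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
            tf.keras.layers.Dense(10),
        ])

        # Convert to an ML Program. compute_precision=FLOAT32 keeps the converted
        # model in FP32, but the ANE itself only executes FP16, so FP32 work falls
        # back to CPU/GPU - and the converted model is inference-only, not trainable.
        mlmodel = ct.convert(
            model,
            convert_to="mlprogram",
            compute_precision=ct.precision.FLOAT32,
        )
        mlmodel.save("model.mlpackage")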

