Run LLMs on my own Mac, fast and efficient. Only 2 MBs

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • llama.cpp

    LLM inference in C/C++

  • I hate this kind of clickbait marketing: it suggests the project delivers 1/100 the size, or 100x-35000x the speed, of other solutions because it uses a different language for a wrapper around the core library, while completely neglecting the tooling and community expertise built around those other solutions.

    First of all, the project is based on llama.cpp [1], which does the heavy work of loading and running multi-GB model files on the GPU/CPU; the inference speed is not limited by the choice of wrapper (there are other wrappers in Go, Python, Node, Rust, etc., or one can use llama.cpp directly, as the sketch below illustrates). The size of the binary is also not that important when common quantized model files are often in the 5-40 GB range and require a beefy GPU or a machine with 16-64 GB of RAM.

    [1] https://github.com/ggerganov/llama.cpp
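
    To make the thin-wrapper point concrete, here is a minimal sketch of how little the wrapper layer has to do, simply shelling out to the llama.cpp CLI from Rust. The binary path, model file name, prompt, and flag values are illustrative assumptions, and real wrappers usually bind the C API via FFI instead, but the division of labor is the same: llama.cpp does all of the heavy lifting.

        use std::process::Command;

        fn main() {
            // Illustrative paths: adjust to wherever llama.cpp was built
            // and wherever the quantized GGUF model file lives.
            let out = Command::new("./llama.cpp/main")
                .args([
                    "-m", "./models/llama-2-7b.Q4_K_M.gguf", // multi-GB model file
                    "-p", "The capital of France is",        // prompt
                    "-n", "32",                              // tokens to generate
                ])
                .output()
                .expect("failed to run llama.cpp");
            // All model loading, quantized inference, and GPU/CPU dispatch
            // happened inside llama.cpp; the wrapper contributed none of it.
            print!("{}", String::from_utf8_lossy(&out.stdout));
        }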

  • whisper-turbo

    Cross-Platform, GPU Accelerated Whisper 🏎️

  • wasi-nn

    Neural Network proposal for WASI

  • Mmm…

    The wasi-nn that this relies on (https://github.com/WebAssembly/wasi-nn) is a proposal that relies on arbitrary plugin backends sending arbitrary chunks of data to some vendor implementation. The API is literally just: set input, compute, get output.

    …and that is totally non-portable.

    The reason this works is that it relies on the abstraction already implemented in llama.cpp, which takes a GGUF model and maps it to multiple hardware targets, and which you can see has been lifted here: https://github.com/WasmEdge/WasmEdge/tree/master/plugins/was...

    So…

    > Developers can refer to this project to write their machine learning application in a high-level language using the bindings, compile it to WebAssembly, and run it with a WebAssembly runtime that supports the wasi-nn proposal, such as WasmEdge.

    …is total rubbish; no, you can’t.

    This isn’t portable.

    It’s not sandboxed.

    If you have a wasm binary you might be able to run it if the version of the runtime you’re using happens to implement the specific ggml backend you need, which it probably doesn’t… because there’s literally no requirement for it to do so.

    There’s a lot of “so portable” talk in this article which really seems misplaced.
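
    For concreteness, here is roughly the entire guest-side surface of the proposal, sketched with the unsafe pre-1.0 Rust wasi-nn bindings. The file name, buffer size, and the idea of feeding a raw prompt in as a u8 tensor are assumptions that happen to match the ggml-style plugins, not anything the proposal itself guarantees:

        use std::fs;

        fn main() {
            // The model is opaque bytes as far as the API is concerned; only a
            // host backend that understands this particular format can load it.
            let model = fs::read("model.gguf").unwrap(); // hypothetical file name
            let prompt = b"Once upon a time";
            let mut output = vec![0u8; 4096];
            unsafe {
                // load/init: hand arbitrary bytes to whatever vendor plugin exists.
                let graph = wasi_nn::load(
                    &[&model],
                    wasi_nn::GRAPH_ENCODING_OPENVINO, // a GGUF model needs a runtime-specific encoding instead
                    wasi_nn::EXECUTION_TARGET_CPU,
                )
                .unwrap();
                let ctx = wasi_nn::init_execution_context(graph).unwrap();
                // ...and this really is the whole API: set input, compute, get output.
                wasi_nn::set_input(ctx, 0, wasi_nn::Tensor {
                    dimensions: &[1, prompt.len() as u32],
                    r#type: wasi_nn::TENSOR_TYPE_U8,
                    data: prompt,
                })
                .unwrap();
                wasi_nn::compute(ctx).unwrap();
                let n = wasi_nn::get_output(ctx, 0, output.as_mut_ptr(), output.len() as u32).unwrap();
                println!("{}", String::from_utf8_lossy(&output[..n as usize]));
            }
        }

    Compile it with cargo build --target wasm32-wasi and the resulting .wasm runs only on a host whose runtime happens to ship a plugin for that exact encoding, which is precisely the portability gap described above.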

  • SSVM

    WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
