Show HN: Speeding up LLM inference 2x (possibly)

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • effort

    An implementation of bucketMul LLM inference (a rough numpy sketch of the idea follows the project list below)

  • I think it was somewhere around that tag:

    https://github.com/kolinko/effort/releases/tag/5.0-last-mixt...

    I can't easily rerun it anymore, because the underlying model/weight names have changed in the meantime. It doesn't help that Mixtral's published .safetensors files seem broken; I needed to hack together a conversion from PyTorch, which added an extra layer of confusion to the project.

  • cria

    Tiny inference-only implementation of LLaMA (by recmo)

  • It originally started as a fork of Recmo's cria, a pure-numpy LLaMA implementation :)

    https://github.com/recmo/cria

    Took a whole night to compute a few tokens
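
For readers new to the technique, here is a minimal numpy sketch of the bucketMul idea as the project describes it — not the repo's actual implementation (effort's real kernels are written in Metal for Apple Silicon), and the function names, bucket size, and exact bucket-scoring rule below are illustrative assumptions. The gist: weights are pre-sorted by magnitude into fixed-size buckets, and at inference time only the buckets whose estimated contribution clears a cutoff are multiplied, with an effort parameter setting what fraction survives.

    import numpy as np

    # Illustrative sketch only: names and parameters are assumptions,
    # not effort's actual API.

    def preprocess(W, bucket_size=16):
        # Sort each column's weights by magnitude (descending) and group
        # them into fixed-size buckets; keep each bucket's max |w| so the
        # inference-time step can estimate a bucket's impact cheaply.
        d_out, d_in = W.shape
        assert d_out % bucket_size == 0
        order = np.argsort(-np.abs(W), axis=0)        # row indices, per column
        vals = np.take_along_axis(W, order, axis=0)
        nb = d_out // bucket_size
        idx = order.reshape(nb, bucket_size, d_in)
        val = vals.reshape(nb, bucket_size, d_in)
        top = np.abs(val).max(axis=1)                 # (nb, d_in) bucket maxima
        return idx, val, top

    def bucket_mul(x, idx, val, top, effort=0.25):
        # Approximate W @ x by multiplying only the top `effort` fraction
        # of buckets, ranked by the cheap estimate |x_j| * max|w in bucket|.
        nb, bs, d_in = val.shape
        score = np.abs(x)[None, :] * top              # (nb, d_in)
        k = max(1, int(effort * score.size))
        cutoff = np.partition(score.ravel(), -k)[-k]
        y = np.zeros(nb * bs)
        for b, j in zip(*np.where(score >= cutoff)):  # surviving buckets only
            y[idx[b, :, j]] += val[b, :, j] * x[j]
        return y

    W = np.random.randn(512, 512)
    x = np.random.randn(512)
    idx, val, top = preprocess(W)
    y = bucket_mul(x, idx, val, top, effort=0.25)     # ~25% of the multiplications
    # effort=1.0 reproduces W @ x exactly; lower effort trades accuracy
    # for loading and multiplying fewer weights, which is the claimed
    # source of the speedup on memory-bandwidth-bound hardware.
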

