PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA.

  • flash-attention

    Fast and memory-efficient exact attention - Windows wheels (by jllllll)

  • You can install a pre-built wheel of flash-attention; it seems to work fine-ish for me on Windows: https://github.com/jllllll/flash-attention/releases (a quick smoke test is sketched after this list)

  • flash-attention

    Fast and memory-efficient exact attention (by Dao-AILab)

  • Doesn't seem so: https://github.com/Dao-AILab/flash-attention/issues/542 (no updates for a while)

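For anyone trying the Windows wheels linked above, a minimal smoke test along these lines can confirm that the extension imports and runs on the GPU. This is only a sketch, assuming the wheel installs the standard flash_attn package with its flash_attn_func entry point, and that a CUDA-capable GPU and an fp16-capable PyTorch build are available.

    import torch
    from flash_attn import flash_attn_func

    # Tiny random q/k/v tensors: (batch, seqlen, nheads, headdim), fp16 on the GPU.
    q = torch.randn(1, 128, 8, 64, dtype=torch.float16, device="cuda")
    k = torch.randn(1, 128, 8, 64, dtype=torch.float16, device="cuda")
    v = torch.randn(1, 128, 8, 64, dtype=torch.float16, device="cuda")

    # Causal FlashAttention forward pass; output has the same shape as q.
    out = flash_attn_func(q, k, v, causal=True)
    print("flash-attn OK, output shape:", tuple(out.shape))

If this prints the expected shape without an import or CUDA error, the wheel is installed correctly.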
