M2 Ultra can run 128 streams of Llama 2 7B in parallel

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com
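
The headline refers to llama.cpp's batched decoding: a single resident copy of the 7B weights serves many independent generation streams, with the shared context split across the parallel sequences. The sketch below shows roughly how to reproduce this with the llama.cpp server; the binary name, flags, and model path are assumptions and differ between llama.cpp versions (older builds ship the binary as `./server` and include a dedicated `./parallel` example).

```sh
# Sketch: serve one copy of Llama 2 7B with 128 parallel slots via the llama.cpp server.
# Flag and binary names vary between llama.cpp versions; the model path is a placeholder.
#   -m    path to a quantized GGUF model
#   -np   number of parallel sequences (slots) decoded together in one batch
#   -c    total context size, divided evenly across slots (128 slots x 512 tokens here)
#   -ngl  number of layers to offload to the GPU (Metal on Apple Silicon)
./llama-server -m models/llama-2-7b.Q8_0.gguf -np 128 -c 65536 -ngl 99
```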

  • llama.cpp

    LLM inference in C/C++

  • whisper.coreml

    Robust Speech Recognition via Large-Scale Weak Supervision

  • more-ane-transformers

    Run transformers (incl. LLMs) on the Apple Neural Engine.

  • ollama

    Get up and running with Llama 3, Mistral, Gemma, and other large language models.

  • The uncensored Llama 2 variants are very much uncensored; give one a try yourself: `ollama run llama2-uncensored` [0]

    It will be happy to curse, talk about religions, help you cook illicit substances or do all sorts of other stuff.

    [0] https://ollama.ai/
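
Below is a rough sketch of trying that model through Ollama's CLI and its local HTTP API; it assumes Ollama is installed and that the `llama2-uncensored` tag is still published in the model library.

```sh
# Sketch: pull and chat with the uncensored Llama 2 variant mentioned above.
ollama pull llama2-uncensored
ollama run llama2-uncensored

# Ollama also serves a local HTTP API on port 11434:
curl http://localhost:11434/api/generate -d '{
  "model": "llama2-uncensored",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```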

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives, so a higher number indicates a more popular project.

Related posts

  • Is it possible to use ANE (Apple Neural Engine) to run those models?

    1 project | /r/LocalLLaMA | 7 May 2023
  • Anthropic’s $5B, 4-year plan to take on OpenAI

    6 projects | news.ycombinator.com | 11 Apr 2023
  • Ask HN: How do you name software?

    1 project | news.ycombinator.com | 10 Feb 2024
  • Bard is getting better at logic and reasoning

    1 project | news.ycombinator.com | 7 Jun 2023
  • Apple is adding more and more neural engine cores to their products, is there any way to use them for local LLMs?

    2 projects | /r/LocalLLaMA | 7 Jun 2023