Apple Silicon Llama 7B running in docker?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • nitro

    An inference server on top of llama.cpp with an OpenAI-compatible API, request queueing, and scaling. Embed a production-ready, local inference engine in your apps. Powers Jan (by janhq). A hedged request sketch appears after the project list.

  • bionic-gpt

    BionicGPT is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality.

  • I'm the maintainer of https://github.com/bionic-gpt/bionic-gpt. We have a nice install option for Windows and Linux, but nothing for Apple Silicon.

  • llama.cpp

    LLM inference in C/C++

  • ollama

    Get up and running with Llama 3, Mistral, Gemma, and other large language models.

  • How important is using Docker? If the main concern is ease of installation, look into Ollama: the installation process is the same as for any other Mac app (download and drag the .app to Applications). It has options to interact through the terminal or a browser (see the sketch after this list).
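
A hedged request sketch for nitro's OpenAI-compatible API, in Python. The port and the model name below are assumptions used for illustration; check nitro's documentation for the values that match your install.

    # Minimal sketch: a chat request to a local OpenAI-compatible server such as nitro.
    # BASE_URL and the model name are assumptions; adjust them to your setup.
    import requests

    BASE_URL = "http://localhost:3928/v1"  # assumed local port, change if needed

    payload = {
        "model": "llama-2-7b-chat",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize what an inference server does."}
        ],
    }

    resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=120)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])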

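For the Ollama route, besides the terminal and browser options, the app also exposes a local REST API on port 11434. A minimal sketch, assuming Ollama is running and a model such as llama2 has already been pulled (ollama pull llama2):

    # Minimal sketch: query a locally running Ollama instance over its REST API.
    # Assumes the default port 11434 and that the "llama2" model has been pulled.
    import requests

    payload = {
        "model": "llama2",  # Llama 2 7B; swap in whichever model you pulled
        "prompt": "Why run a 7B model locally instead of in the cloud?",
        "stream": False,    # ask for a single JSON response instead of a token stream
    }

    resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
    resp.raise_for_status()
    print(resp.json()["response"])

Both sketches only need the requests library and a server that is already running locally; neither requires Docker.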
