I have tried various methods to install, and none of them work. Can you spoon-feed me how?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • text-generation-webui

    A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

  • git clone https://github.com/oobabooga/text-generation-webui (see the install sketch after this list)

  • GPTQ-for-LLaMa

    4-bit quantization of LLaMA using GPTQ (by oobabooga)

  • git clone https://github.com/oobabooga/GPTQ-for-LLaMa (see the build sketch after this list)

  • mlc-llm

    Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

  • Have you tried https://mlc.ai/mlc-llm/? It uses Vulkan instead of CUDA, so running it is much easier, but only models compiled for MLC will work with it. (See the MLC sketch after this list.)

  • koboldcpp

    A simple one-file way to run various GGML and GGUF models with KoboldAI's UI (see the run sketch after this list)
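text-generation-webui: a minimal manual install sketch, assuming Linux, an NVIDIA GPU, and a working conda setup. The repository also ships one-click start scripts (start_linux.sh, start_windows.bat, start_macos.sh) that automate roughly the same steps, so try those first if the manual path keeps failing.

    # Manual install sketch for text-generation-webui (assumes conda and an NVIDIA GPU).
    git clone https://github.com/oobabooga/text-generation-webui
    cd text-generation-webui

    # Isolated Python environment so system packages don't interfere.
    conda create -n textgen python=3.10 -y
    conda activate textgen

    # Install a CUDA build of PyTorch first (pick the index matching your CUDA version),
    # then the webui's own requirements.
    pip install torch --index-url https://download.pytorch.org/whl/cu118
    pip install -r requirements.txt

    # Launch the Gradio UI; by default it is served at http://localhost:7860.
    python server.py

Models go into the models/ folder: a GGUF file can be dropped in as a single file, while a transformers-format model needs its whole folder.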

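GPTQ-for-LLaMa: the webui used to load oobabooga's fork of GPTQ-for-LLaMa from a repositories/ folder inside the webui checkout and build its CUDA kernel there. A hedged sketch, assuming the textgen environment from the previous sketch and a working CUDA toolchain; the build step (setup_cuda.py) may differ between versions of the fork, and newer webui releases bundle GPTQ support so this may not be needed at all.

    # Hedged sketch: building oobabooga's GPTQ-for-LLaMa fork for the webui.
    cd text-generation-webui
    mkdir -p repositories && cd repositories
    git clone https://github.com/oobabooga/GPTQ-for-LLaMa
    cd GPTQ-for-LLaMa
    pip install -r requirements.txt
    # Compile the 4-bit CUDA kernel (assumes nvcc is available in the environment).
    python setup_cuda.py install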

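mlc-llm: as mentioned above, this avoids CUDA entirely by running on Vulkan, but only models compiled for MLC work with it. A loose sketch of the CLI route based on the MLC LLM docs from around the time of the post; the conda package name and the --local-id flag are assumptions that may have changed since, so check https://mlc.ai/mlc-llm/ for current instructions.

    # Hedged sketch of the MLC LLM chat CLI (Vulkan, no CUDA toolchain required).
    # Package and flag names follow older MLC docs and may have changed.
    conda create -n mlc-chat -c mlc-ai -c conda-forge mlc-chat-cli-nightly -y
    conda activate mlc-chat
    # Download an MLC-compiled model into ./dist (prebuilt weights are published
    # under https://huggingface.co/mlc-ai), then point the CLI at it.
    mlc_chat_cli --local-id <model-id-compiled-for-mlc>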

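koboldcpp: probably the closest thing to a spoon-fed option, since it is a single file that bundles a llama.cpp backend with KoboldAI's UI. On Windows you can simply download koboldcpp.exe from the GitHub releases page and run it; the sketch below assumes building from source on Linux, with the model path as a placeholder.

    # Sketch: running a GGUF model with koboldcpp built from source on Linux.
    git clone https://github.com/LostRuins/koboldcpp
    cd koboldcpp
    make                      # add LLAMA_CUBLAS=1 to the make line for CUDA offload
    python koboldcpp.py --model /path/to/model.gguf --contextsize 4096
    # The KoboldAI UI is then served at http://localhost:5001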