LLMs up to 4x faster with the latest Nvidia drivers on Windows

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com.

  • code-llama-for-vscode

    Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
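
    As a rough sketch of the moving parts: Continue talks to a local HTTP endpoint, which a bridge like this forwards to a running llama.cpp server. The snippet below only illustrates the llama.cpp side; the port, route, and field names assume llama.cpp's built-in server defaults and may differ in your setup.

      # Hypothetical sketch: query a local llama.cpp server the way a
      # Continue-to-llama.cpp bridge would. Assumes `llama-server` is
      # already running on port 8080 with a Code Llama GGUF model loaded.
      import requests

      def complete(prompt: str, n_predict: int = 128) -> str:
          # /completion is llama.cpp's native (non-OpenAI) endpoint.
          resp = requests.post(
              "http://127.0.0.1:8080/completion",
              json={"prompt": prompt, "n_predict": n_predict},
              timeout=120,
          )
          resp.raise_for_status()
          return resp.json()["content"]

      print(complete("Write a Python function that reverses a string."))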

  • Do you use https://github.com/xNul/code-llama-for-vscode or something else?

    I haven’t found any good setup instructions for Linux, or my Google skills are failing me.

  • text-generation-webui

    A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
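
    For a sense of how the web UI is typically scripted: when launched with its API enabled (the --api flag), text-generation-webui exposes an OpenAI-compatible endpoint. The port and route below are the usual defaults, but treat them as assumptions for your install.

      # Hedged sketch: call text-generation-webui's OpenAI-compatible API.
      # Assumes the UI was started with `--api` and a model is loaded;
      # 127.0.0.1:5000 is the common default, adjust if yours differs.
      import requests

      payload = {
          "messages": [{"role": "user", "content": "Summarize GGUF in one sentence."}],
          "max_tokens": 100,
      }
      resp = requests.post("http://127.0.0.1:5000/v1/chat/completions",
                           json=payload, timeout=120)
      resp.raise_for_status()
      print(resp.json()["choices"][0]["message"]["content"])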

  • llama.cpp

    LLM inference in C/C++
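
    To show how llama.cpp is commonly driven from Python, here is a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder, and the GPU offload setting assumes a GPU-enabled build.

      # Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
      # The model path is a placeholder; n_gpu_layers=-1 offloads every
      # layer to the GPU and only helps on a GPU-enabled build.
      from llama_cpp import Llama

      llm = Llama(
          model_path="models/codellama-7b.Q4_K_M.gguf",  # placeholder path
          n_ctx=4096,       # context window
          n_gpu_layers=-1,  # offload all layers if built with GPU support
      )
      out = llm("Q: What does GGUF stand for? A:", max_tokens=64, stop=["\n"])
      print(out["choices"][0]["text"])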

  • I don’t understand, are you claiming non-Apple devices cannot run LLMs?

    https://github.com/ggerganov/llama.cpp/issues/34

    If you meant eGPU support, IIRC that is beta for everyone right now.

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives. Hence, a higher number generally means a more popular project.
