Local GPT (completely offline and no OpenAI!)

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • local_llama

    This repo showcases how to run a model locally and offline, free of OpenAI dependencies.

  • LocalAI

    The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.

  • It's also worth checking out https://github.com/go-skynet/LocalAI, a local LLM runner that has an OpenAI-compatible API. I've gotten several apps working against it that would otherwise require paid OpenAI access. It was a bit tricky to get it working with my GPU (it uses llama.cpp and its cuBLAS implementation), but once I did, it's been working well.
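    Because LocalAI exposes an OpenAI-compatible API, existing OpenAI client code can usually be pointed at it just by changing the base URL. A minimal sketch using only the Python standard library (the port and the model name below are assumptions; match them to your LocalAI configuration):

    ```python
    import json
    import urllib.request

    # Assumed LocalAI endpoint; adjust host/port to your deployment.
    BASE_URL = "http://localhost:8080/v1"

    def build_chat_request(prompt, model="ggml-gpt4all-j"):
        """Build an OpenAI-style chat-completion request aimed at LocalAI."""
        payload = {
            "model": model,  # must match a model name configured in LocalAI
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        }
        return urllib.request.Request(
            f"{BASE_URL}/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

    req = build_chat_request("Name one advantage of running an LLM locally.")
    # With a LocalAI server running, send it with:
    #   with urllib.request.urlopen(req) as resp:
    #       print(json.load(resp)["choices"][0]["message"]["content"])
    ```

    The request shape is the same one the official OpenAI API accepts, which is why apps built against OpenAI can be redirected to LocalAI without code changes beyond the base URL.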

  • private-gpt

    Interact with your documents using the power of GPT, 100% privately, no data leaks

  • Just spent the morning setting up imartinez/privateGPT ("Interact privately with your documents using the power of GPT, 100% privately, no data leaks", github.com) on my machine. It's pretty good, but it desperately needs GPU support (which is coming).

  • llama.cpp

    LLM inference in C/C++
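    llama.cpp ships a command-line example that can run a quantized GGUF model directly. A minimal sketch (the model path is an assumption; `-ngl` offloads layers to the GPU when the binary is built with cuBLAS support, which is what the LocalAI comment above is referring to):

    ```shell
    # Assumed model path: download any GGUF model and point MODEL at it.
    MODEL=./models/llama-2-7b.Q4_K_M.gguf
    PROMPT="Explain in one sentence why local inference avoids data leaks."

    if [ -x ./main ]; then
      # -n caps generated tokens; -ngl offloads layers to the GPU (cuBLAS build)
      ./main -m "$MODEL" -p "$PROMPT" -n 128 -ngl 32
    else
      echo "Build llama.cpp first (e.g. 'make' or 'make LLAMA_CUBLAS=1')"
    fi
    ```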

  • Yeah, someone mentioned that on my other post about my project; looks like I'm a day late and a dollar short. You can use a GPU with mine by following the instructions here, if you care to get into it.

  • h2ogpt

    Private chat with a local GPT over documents, images, video, etc. 100% private, Apache 2.0. Supports Ollama, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/

  • This has GPU support and does the same as privateGPT. https://github.com/h2oai/h2ogpt

NOTE: The number of mentions on this list counts mentions across common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • AgentCloud vs Google Cloud Agents

    1 project | dev.to | 29 Apr 2024
  • Insights from Finetuning LLMs for Classification Tasks

    1 project | news.ycombinator.com | 28 Apr 2024
  • A suite of tools designed to streamline the development cycle of LLM-based apps

    1 project | news.ycombinator.com | 12 Apr 2024
  • Agent Cloud VS OpenAI

    1 project | dev.to | 11 Apr 2024
  • Agent Cloud vs CrewAI

    1 project | dev.to | 5 Apr 2024