Private-gpt Alternatives
Similar projects and alternatives to private-gpt
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- ollama: Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
- LocalAI: The free, open-source alternative to OpenAI, Claude, and others. Self-hosted and local-first, it is a drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs gguf, transformers, diffusers, and many other model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference.
- guidance (discontinued): A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance] (by microsoft)
- vault-ai: OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc.) using a simple React frontend.
- SillyTavern (discontinued): LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern] (by Cohee1207)
- localGPT: Chat with your documents on your local device using GPT models. No data leaves your device; 100% private.
- h2ogpt: Private chat with a local GPT over documents, images, video, and more. 100% private, Apache 2.0. Supports Ollama, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
- AGiXT: A dynamic AI agent automation platform that orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
- local_llama: Showcases how to run a model locally and offline, free of OpenAI dependencies.
private-gpt discussion
private-gpt reviews and mentions
- Universal Personal Assistant with LLMs
Specialized projects that facilitate automatic document indexing and LLM invocation with the document content are gaining traction, for example PrivateGPT, QAnything, and LazyLLM. Another novelty is the integration of LLMs into applications and tools: The Semantic Kernel project aims to integrate LLM invocation during programming and inside the code itself.
- Ask HN: How would I make an AI model that is trained on 2000 text documents?
Perhaps try privateGPT
https://github.com/zylon-ai/private-gpt
- Ask HN: Has Anyone Trained a personal LLM using their personal notes?
PrivateGPT is a nice tool for this. It's not exactly what you're asking for, but it gets part of the way there.
https://github.com/zylon-ai/private-gpt
- PrivateGPT exploring the Documentation
Further details available at: https://docs.privategpt.dev/api-reference/api-reference/ingestion
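In case a quick look at that ingestion flow helps, below is a minimal Python sketch of uploading a document to a locally running PrivateGPT server. The base URL (http://localhost:8001), the /v1/ingest/file and /v1/ingest/list routes, and the response shape are assumptions drawn from the linked API reference, so verify them there before relying on them.
```python
# Minimal sketch: upload a document to a locally running PrivateGPT server.
# Assumes the server listens on http://localhost:8001 and exposes the
# /v1/ingest/file and /v1/ingest/list routes described in the linked docs;
# verify the paths and response fields against the API reference.
import requests

BASE_URL = "http://localhost:8001"

def ingest_file(path: str) -> list[str]:
    """Upload one file and return the ids of the ingested document chunks."""
    with open(path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/v1/ingest/file", files={"file": f})
    resp.raise_for_status()
    # The response is expected to list the ingested documents under "data".
    return [doc["doc_id"] for doc in resp.json().get("data", [])]

def list_ingested() -> list[str]:
    """List the doc_ids the server currently knows about."""
    resp = requests.get(f"{BASE_URL}/v1/ingest/list")
    resp.raise_for_status()
    return [doc["doc_id"] for doc in resp.json().get("data", [])]

if __name__ == "__main__":
    print(ingest_file("manual.pdf"))
    print(list_ingested())
```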
- Show HN: I made an app to use local AI as daily driver
- privateGPT VS quivr - a user suggested alternative
2 projects | 12 Jan 2024
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Run https://github.com/imartinez/privateGPT
Then
make ingest /path/to/folder/with/files
Then chat to the LLM.
Done.
Docs: https://docs.privategpt.dev/overview/welcome/quickstart
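As a companion to that quickstart, here is a hedged Python sketch of the "chat to the LLM" step over PrivateGPT's HTTP API rather than the UI. The default port, the /v1/chat/completions route, and the use_context/include_sources flags are assumptions to confirm against the linked docs.
```python
# Sketch of the "chat to the LLM" step over PrivateGPT's HTTP API.
# Assumes http://localhost:8001 and an OpenAI-style /v1/chat/completions
# route that accepts a use_context flag to answer from ingested files;
# confirm both against https://docs.privategpt.dev before relying on them.
import requests

def ask(question: str) -> str:
    resp = requests.post(
        "http://localhost:8001/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": question}],
            "use_context": True,       # answer from the ingested documents
            "include_sources": True,   # ask for the chunks that were used
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the documents I ingested."))
```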
- Mozilla "MemoryCache" Local AI
PrivateGPT repository in case anyone's interested: https://github.com/imartinez/privateGPT . It doesn't seem to be linked from their official website.
- What Is Retrieval-Augmented Generation a.k.a. RAG
I’m preparing a small internal tool for my work to search documents and provide answers (with references), I’m thinking of using GPT4All [0], Danswer [1] and/or privateGPT [2].
The RAG technique is very close to what I have in mind, but I don’t want the LLM to “hallucinate” and generate answers on its own by synthesizing the source documents. As stated by many others, we’re living in interesting times.
[0] https://gpt4all.io/index.html
[1] https://www.danswer.ai/
[2] https://github.com/imartinez/privateGPT
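To make the retrieve-then-generate idea in that comment concrete, here is a small backend-agnostic sketch of a RAG answer flow: embed the document chunks, pick the chunks closest to the question, and prompt the model to answer only from those chunks and cite them, which is also the usual mitigation for the hallucination concern raised above. The embed() and complete() helpers are hypothetical placeholders for whichever local stack (GPT4All, Danswer, privateGPT, etc.) ends up being used.
```python
# Toy sketch of retrieval-augmented generation (RAG) with source citations.
# embed() and complete() are hypothetical placeholders for the embedding
# model and LLM of whatever local stack is chosen (GPT4All, privateGPT, ...).
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(question: str, chunks: list[str], embed, top_k: int = 3) -> list[str]:
    """Rank document chunks by embedding similarity to the question."""
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    return ranked[:top_k]

def answer(question: str, chunks: list[str], embed, complete) -> str:
    """Build a grounded prompt so the model answers only from the sources."""
    sources = retrieve(question, chunks, embed)
    context = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(sources, 1))
    prompt = (
        "Answer using ONLY the numbered sources below and cite them like [1].\n"
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return complete(prompt)
```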
- LM Studio – Discover, download, and run local LLMs
Stats
zylon-ai/private-gpt is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of private-gpt is Python.