Deepstream vs private-gpt

| | Deepstream | private-gpt |
|---|---|---|
| Mentions | 1 | 131 |
| Stars | 84 | 52,027 |
| Growth | - | 2.9% |
| Activity | 7.6 | 9.2 |
| Latest Commit | 5 months ago | 2 days ago |
| Language | Jupyter Notebook | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- Ask HN: Has Anyone Trained a personal LLM using their personal notes?
PrivateGPT is a nice tool for this. It's not exactly what you're asking for, but it gets part of the way there.
https://github.com/zylon-ai/private-gpt
- PrivateGPT exploring the Documentation
Further details available at: https://docs.privategpt.dev/api-reference/api-reference/ingestion
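For reference, a minimal sketch of what calling that ingestion endpoint can look like, assuming a privateGPT server running locally on the default port 8001 and a /v1/ingest/file route as described in the linked API reference (both are assumptions; check the docs for your version):

```python
# Hedged sketch of uploading a document to a locally running privateGPT server.
# The port (8001) and endpoint path (/v1/ingest/file) are assumptions taken
# from a default local setup; consult the linked API reference for your version.
import requests

def ingest_file(path: str, base_url: str = "http://localhost:8001") -> dict:
    """Upload a single document so it can be queried later."""
    with open(path, "rb") as f:
        resp = requests.post(f"{base_url}/v1/ingest/file", files={"file": f})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(ingest_file("notes.pdf"))
```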
- Show HN: I made an app to use local AI as daily driver
- privateGPT VS quivr - a user suggested alternative
2 projects | 12 Jan 2024
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Run https://github.com/imartinez/privateGPT
Then
`make ingest /path/to/folder/with/files`
Then chat to the LLM.
Done.
Docs: https://docs.privategpt.dev/overview/welcome/quickstart
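To round out the "then chat to the LLM" step, here is a hedged sketch of querying the ingested documents over HTTP, assuming the local server exposes an OpenAI-style /v1/chat/completions endpoint on port 8001 (assumptions based on the quickstart linked above, not a verified recipe):

```python
# Hedged sketch of chatting with documents ingested via `make ingest`.
# Endpoint path, port, and the use_context flag are assumptions drawn from
# the quickstart docs linked above; adjust to your installed version.
import requests

def ask(question: str, base_url: str = "http://localhost:8001") -> str:
    payload = {
        "messages": [{"role": "user", "content": question}],
        "use_context": True,  # answer from the ingested documents, not just the base model
    }
    resp = requests.post(f"{base_url}/v1/chat/completions", json=payload)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize the files ingested from /path/to/folder/with/files"))
```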
- Mozilla "MemoryCache" Local AI
PrivateGPT repository in case anyone's interested: https://github.com/imartinez/privateGPT. It doesn't seem to be linked from their official website.
- What Is Retrieval-Augmented Generation a.k.a. RAG
I’m preparing a small internal tool for my work to search documents and provide answers (with references), I’m thinking of using GPT4All [0], Danswer [1] and/or privateGPT [2].
The RAG technique is very close to what I have in mind, but I don’t want the LLM to “hallucinate” and generate answers on its own instead of synthesizing them from the source documents. As stated by many others, we’re living in interesting times.
[0] https://gpt4all.io/index.html
[1] https://www.danswer.ai/
[2] https://github.com/imartinez/privateGPT
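As a rough illustration of the RAG shape described in that comment (retrieve the most relevant passages, then ask the model to answer only from them, with citations), here is a toy sketch. It uses plain TF-IDF retrieval via scikit-learn just to stay self-contained; it is not the implementation of GPT4All, Danswer, or privateGPT, only the general pattern:

```python
# Toy RAG sketch: retrieve relevant source passages, then build a prompt that
# instructs an LLM to answer only from those passages and cite them by name.
# Real tools typically use vector embeddings instead of TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "policy.md": "Employees may work remotely up to three days per week.",
    "handbook.md": "Expense reports are due by the fifth of each month.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    names = list(documents)
    vec = TfidfVectorizer().fit(list(documents.values()) + [question])
    doc_matrix = vec.transform(documents.values())
    query_vec = vec.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    return [names[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    sources = retrieve(question)
    context = "\n".join(f"[{name}] {documents[name]}" for name in sources)
    return (
        "Answer using ONLY the sources below and cite them by name.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt would be sent to a local LLM (GPT4All, privateGPT, etc.).
print(build_prompt("How many remote days are allowed?"))
```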
- LM Studio – Discover, download, and run local LLMs
- Ask HN: Local LLM Recommendation?
https://www.reddit.com/r/LocalLLaMA/comments/14niv66/using_a...
https://github.com/imartinez/privateGPT
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
I've been playing around with https://github.com/imartinez/privateGPT and https://github.com/simonw/llm and wanted to create a simple Python package that made it easier to run ChatGPT-like LLMs on your own machine, use them with non-public data, and integrate them into practical applications.
This resulted in a Python package I call OnPrem.LLM.
In the documentation, there are examples for how to use it for information extraction, text generation, retrieval-augmented generation (i.e., chatting with documents on your computer), and text-to-code generation: https://amaiya.github.io/onprem/
Enjoy!
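Based on the linked OnPrem.LLM documentation, the basic usage looks roughly like the following; the class name and defaults are taken from those docs, so treat this as a sketch rather than the canonical API:

```python
# Sketch of the "3 lines of code" claim, based on https://amaiya.github.io/onprem/.
# Model download behavior and defaults are assumptions; check the docs.
from onprem import LLM

llm = LLM()  # downloads a default local model on first use
print(llm.prompt("List three applications of retrieval-augmented generation."))
```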
What are some alternatives?
TensorRT-For-YOLO-Series - TensorRT for the YOLO series (YOLOv8, YOLOv7, YOLOv6, YOLOv5), with NMS plugin support
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
fastgron - High-performance JSON to GRON (greppable, flattened JSON) converter
gpt4all - Run open-source LLMs anywhere.
scikit-llm - Seamlessly integrate LLMs into scikit-learn.
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
kafka-native - Kafka broker compiled to native using Quarkus and GraalVM.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
Propan - A powerful and easy-to-use Python framework for building event-driven applications that interact with any MQ broker
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
AltStore - AltStore is an alternative app store for non-jailbroken iOS devices.
llama.cpp - LLM inference in C/C++