- localGPT: Chat with your documents on your local device using GPT models. No data leaves your device; it is 100% private.
- local_llama: This repo showcases how to run a model locally and offline, free of OpenAI dependencies.
- EmbedAI: An app to interact with your documents using the power of GPT, 100% privately, with no data leaks.
The best implementation for you is likely the one discussed here: https://github.com/PromtEngineer/localGPT. It runs on GPU and produces output quickly, and the project page has instructions on how to install and run it. If you want an easier install without fiddling with requirements, GPT4All is free, installs with one click, and lets you pass in some kinds of documents. If I recall correctly it used to be text only. Beyond the easy install, its other advantage is a decent selection of LLMs to load and use. https://gpt4all.io/index.html
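Roughly speaking, these document-chat tools all share the same retrieval step: split the documents into chunks, embed them, and fetch the chunks closest to the question before handing them to the local LLM. Here is a toy, stdlib-only sketch of that step; the function names (`vectorize`, `cosine`, `retrieve`) and the bag-of-words scoring are my own illustration, standing in for the real embedding models these projects use:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts (a stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, chunks, k=1):
    """Return the k chunks most similar to the question."""
    q = vectorize(question)
    return sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)[:k]

# Toy "document" already split into chunks.
chunks = [
    "Invoices are due within 30 days of receipt.",
    "The office is closed on public holidays.",
    "Refunds are processed within 5 business days.",
]
print(retrieve("When are invoices due?", chunks))
# -> ['Invoices are due within 30 days of receipt.']
```

A real pipeline would swap the bag-of-words vectors for sentence embeddings and a vector store, but the control flow is the same.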
I posted the speed of mine in the readme https://github.com/jlonge4/local_llama
Also note that the UI of this one is command-line only. https://github.com/SamurAIGPT/privateGPT has a web UI, which is missing here, and it supports more file types. This needs a few more iterations before it is useful in practice, with good enough open-source models and GPU support...
Sounds like a great project, and I really like the YouTube tutorials. I haven't been able to get it to work inside WSL, though. I tried another project that has a UI, and it works: https://github.com/marella/chatdocs