-
IncarnaMind
Connect and chat with multiple documents (PDF and TXT) through GPT-3.5, GPT-4 Turbo, Claude, and local open-source LLMs
-
trieve
All-in-one infrastructure for building search, recommendations, and RAG. Trieve combines search and language models with tools for tuning ranking and relevance.
-
localGPT
Chat with your documents on your local device using GPT models. No data leaves your device, and it's 100% private.
Can we talk about how dynamic chunking works by any chance? That is the most interesting piece imo.
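Until the author chimes in, here is a minimal sketch of what dynamic (sliding-window) chunking could look like in practice, assuming it means sentence-aware windows grown to a word budget with overlap between chunks. Every name and default below is hypothetical, not IncarnaMind's actual code.

```python
import re

def dynamic_chunks(text, max_words=200, overlap_sentences=1):
    """Sentence-aware sliding-window chunking (illustrative only).

    Grows a window sentence by sentence until a word budget is hit,
    then flushes it, carrying a few trailing sentences forward so
    retrieval doesn't lose context at chunk boundaries.
    """
    # Naive sentence split; a real tokenizer handles abbreviations etc.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, window, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if window and count + n > max_words:
            chunks.append(" ".join(window))
            # Keep the last sentence(s) as overlap between chunks.
            window = window[-overlap_sentences:]
            count = sum(len(s.split()) for s in window)
        window.append(sent)
        count += n
    if window:
        chunks.append(" ".join(window))
    return chunks
```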
We have a similar thing (w/ UIs for search/chat) at https://github.com/arguflow/arguflow .
We built https://github.com/trypromptly/LLMStack to serve exactly this persona. A low-code platform to quickly build RAG pipelines and other LLM applications.
I think local LLMs are great for tinkerers, and with quantization they can run on most modern PCs. I am not comfortable sending my personal data over to OpenAI/Anthropic, so I've been playing around with https://github.com/PromtEngineer/localGPT/, GPT4All, etc., which keep the data all local.
Sliding-window chunking, RAG, etc. seem more sophisticated than what the other document-LLM tools offer, so I would love to try this out if you ever add the ability to run LLMs locally!
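For anyone curious how little code local inference takes once a RAG pipeline hands you retrieved chunks, here is a minimal sketch using llama-cpp-python with a quantized GGUF model. The model path, prompt template, and parameters are assumptions for illustration, not localGPT's or IncarnaMind's actual wiring.

```python
from llama_cpp import Llama

# Load a quantized model from disk; nothing leaves the machine.
# The path and context size are illustrative, not prescribed.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096)

def answer(question, retrieved_chunks):
    """Stuff retrieved chunks into the prompt and generate locally."""
    context = "\n\n".join(retrieved_chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    out = llm(prompt, max_tokens=256, stop=["\n\nQuestion:"])
    return out["choices"][0]["text"].strip()
```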
Related posts
-
We built a self-hosted low-code platform to build LLM apps locally and open-sourced it
-
LLMStack: self-hosted low-code platform to build LLM apps locally with LocalAI support
-
LLMStack: a self-hosted low-code platform to build LLM apps locally
-
Teaching with AI
-
Show HN: LLMStack – Self-Hosted, Low-Code Platform to Build AI Experiences