Show HN: IncarnaMind - Chat with your multiple docs using LLMs

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • IncarnaMind

    Connect and chat with your multiple documents (PDF and TXT) through GPT-3.5, GPT-4 Turbo, Claude, and local open-source LLMs

  • trieve

    All-in-one infrastructure for building search, recommendations, and RAG. Trieve combines search language models with tools for tuning ranking and relevance.

  • Can we talk about how dynamic chunking works by any chance? That is the most interesting piece imo.

    We have a similar thing (w/ UIs for search/chat) at https://github.com/arguflow/arguflow .

    - [email protected]
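
The dynamic chunking the commenter asks about is not described on this page. As a rough illustration of the general idea behind window-based chunking, the sketch below splits a document into overlapping fixed-size windows in Python; the window and overlap sizes are arbitrary, and this is not IncarnaMind's or Arguflow/Trieve's actual implementation, whose "dynamic" variants adjust chunk sizes and boundaries rather than fixing them up front.

```python
# Minimal sketch of sliding-window chunking (illustrative only).
# window/overlap are counted in words and chosen arbitrarily for the example.
def sliding_window_chunks(text: str, window: int = 200, overlap: int = 50) -> list[str]:
    words = text.split()
    step = window - overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + window]
        if piece:
            chunks.append(" ".join(piece))
        if start + window >= len(words):
            break  # the last window already reaches the end of the document
    return chunks

# Each overlapping chunk can then be embedded and indexed for retrieval.
chunks = sliding_window_chunks(open("document.txt", encoding="utf-8").read())
```

The overlap keeps sentences that straddle a boundary retrievable from at least one chunk; the adaptive sizing that makes chunking "dynamic" is the part the commenter is asking about and is not shown here.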

  • LLMStack

    No-code platform to build LLM Agents, workflows and applications with your data

  • We built https://github.com/trypromptly/LLMStack to serve exactly this persona: a low-code platform to quickly build RAG pipelines and other LLM applications.

  • localGPT

    Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.

  • I think local LLMs are great for tinkerers, and with quantization they can run on most modern PCs. I am not comfortable sending my personal data over to OpenAI/Anthropic, so I've been playing around with https://github.com/PromtEngineer/localGPT/, GPT4All, etc., which keep all the data local.

    The sliding-window chunking, RAG, etc. seem more sophisticated than what the other document LLM tools offer, so I would love to try this out if you ever add the ability to run LLMs locally!
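
To make the local, chunk-and-retrieve setup the commenter describes more concrete, here is a minimal retrieval-augmented sketch that pairs a local embedding model (sentence-transformers) with a quantized local LLM loaded through GPT4All. The model names, file name, chunking scheme, and prompt format are placeholders chosen for illustration; this is not how localGPT or IncarnaMind are actually wired up.

```python
# Illustrative local RAG loop: naive chunking, cosine-similarity retrieval,
# and generation with a quantized local model. Everything runs on-device.
import numpy as np
from sentence_transformers import SentenceTransformer   # small local embedding model
from gpt4all import GPT4All                              # runs quantized GGUF models locally

embedder = SentenceTransformer("all-MiniLM-L6-v2")
llm = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")      # placeholder quantized model file

text = open("notes.txt", encoding="utf-8").read()
chunks = [text[i:i + 1000] for i in range(0, len(text), 800)]  # crude overlapping chunks
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def answer(question: str, k: int = 4) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q_vec)[-k:]            # indices of the k most similar chunks
    context = "\n\n".join(chunks[i] for i in top)
    prompt = (f"Answer the question using only this context:\n{context}\n\n"
              f"Question: {question}\nAnswer:")
    return llm.generate(prompt, max_tokens=256)

print(answer("What does the document say about chunking?"))
```

Tools like localGPT and GPT4All wrap this kind of loop with persistent vector stores and chat UIs; the sketch above only shows the basic data flow, with no data ever leaving the machine.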

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • We built a self-hosted low-code platform to build LLM apps locally and open-sourced it

    1 project | /r/OpenAI | 3 Sep 2023
  • LLMStack: self-hosted low-code platform to build LLM apps locally with LocalAI support

    1 project | /r/selfhosted | 3 Sep 2023
  • LLMStack: a self-hosted low-code platform to build LLM apps locally

    1 project | /r/programming | 1 Sep 2023
  • Teaching with AI

    2 projects | news.ycombinator.com | 31 Aug 2023
  • Show HN: LLMStack – Self-Hosted, Low-Code Platform to Build AI Experiences

    1 project | news.ycombinator.com | 31 Aug 2023