Best commercially viable method to ask questions against a set of ~30 PDFs?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA.

  • chatdocs

    Chat with your documents offline using AI.

  • See here: https://github.com/marella/chatdocs#configuration (the chatdocs.yml file, in particular the context_length setting); a hedged configuration sketch follows below.
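A minimal, hypothetical chatdocs.yml sketch based on the configuration section linked above; only the context-length setting is shown, and key names should be verified against the chatdocs README before use:

```yaml
# Hypothetical chatdocs.yml sketch -- the ctransformers/config/context_length
# path follows the configuration docs linked above; verify against the README.
ctransformers:
  config:
    context_length: 2048   # enlarge the context window so longer PDF chunks fit in one prompt
```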

  • localGPT

    Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.

  • You can try localGPT. It's a fork of privateGPT that uses Hugging Face models instead of llama.cpp. By default it loads TheBloke/vicuna-7B-1.1-HF, which is not commercially viable, but you can quite easily change the code to use something like mosaicml/mpt-7b-instruct or even mosaicml/mpt-30b-instruct, which fit the bill; a sketch of that swap follows below.
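As a hedged illustration of the model swap described above, the sketch below loads mosaicml/mpt-7b-instruct through Hugging Face transformers and wraps it in a LangChain pipeline, which matches the general pattern localGPT follows; variable names here are illustrative rather than copied from the localGPT source:

```python
# Hypothetical sketch: replacing the non-commercial Vicuna checkpoint with an
# MPT instruct model. Names are illustrative; adapt to the actual localGPT code.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

# was: model_id = "TheBloke/vicuna-7B-1.1-HF"   (not commercially viable)
model_id = "mosaicml/mpt-7b-instruct"            # commercially usable, per the thread

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # MPT models ship custom modelling code on the Hub
    device_map="auto",        # spread weights across available GPU/CPU memory
)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
)
llm = HuggingFacePipeline(pipeline=pipe)   # drop-in LLM for the rest of the retrieval chain
```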

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Struggling with Local LLMs

    2 projects | /r/artificial | 4 Jul 2023
  • Local LLMs GPUs

    2 projects | /r/LocalLLaMA | 4 Jul 2023
  • Document digest & oobabooga

    2 projects | /r/oobaboogazz | 27 Jun 2023
  • What is the best way to create a knowledge-base-specific LLM chatbot?

    4 projects | /r/LocalLLaMA | 26 Jun 2023
  • Need help finding local LLM

    4 projects | /r/LocalLLaMA | 22 Jun 2023