Need help finding local LLM

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • LLMsPracticalGuide

    A curated list of practical guide resources of LLMs (LLMs Tree, Examples, Papers)

  • Already checked, e.g.:
    - https://medium.com/geekculture/list-of-open-sourced-fine-tuned-large-language-models-llm-8d95a2e0dc76
    - https://github.com/Mooler0410/LLMsPracticalGuide
    - https://www.reddit.com/r/LocalLLaMA/comments/12r552r/creating_an_ai_agent_with_vicuna_7b_and_langchain/
    - https://www.youtube.com/watch?v=9ISVjh8mdlA

  • text-generation-webui

    A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

  • What we're looking for (see the ingestion sketch after this list):
    - runs fully locally
    - no API-key requirements
    - no data exposed to external services
    - GPUs available for testing: a GTX 1060 and an RTX 3080
    - must run on Windows 10 and Windows 11 (compliance requirement)
    - for testing we would like to use https://github.com/oobabooga/text-generation-webui
    - should be able to read PDF (ideally without preprocessing), Markdown, HTML, .py and .go; any other format we can convert with pandoc; currently ~30 GB in total
    - content: production process information (not only data, a lot of text), user manuals for people working in production, and source code
    - long-term goal (~6 months): using it with LangChain (Python, not JS)

  • localGPT

    Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.

  • This might be worth looking at: https://github.com/PromtEngineer/localGPT

  • chatdocs

    Chat with your documents offline using AI.

  • and: https://github.com/marella/chatdocs
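
A minimal ingestion sketch for the requirements quoted above, assuming a Python/LangChain stack (the post's stated long-term goal): formats pandoc can read (e.g. .docx manuals) are converted to Markdown first, while PDF, Markdown, HTML, .py and .go files are loaded directly and split into chunks for local retrieval. The file layout, globs and chunk sizes are illustrative assumptions, not details from the original post.

```python
# Hedged sketch: requires pandoc on PATH plus the langchain and pypdf packages.
import subprocess
from pathlib import Path

from langchain.document_loaders import DirectoryLoader, PyPDFLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

DOCS = Path("docs")            # root of the ~30 GB corpus (assumed layout)
CONVERTED = Path("converted")  # pandoc output lands here
CONVERTED.mkdir(exist_ok=True)

# Anything pandoc understands (e.g. .docx) gets converted to Markdown first.
for src in DOCS.rglob("*.docx"):
    out = CONVERTED / (src.stem + ".md")
    subprocess.run(["pandoc", str(src), "-t", "markdown", "-o", str(out)], check=True)

documents = []

# PDFs are read directly, without pandoc preprocessing (needs pypdf installed).
for pdf in DOCS.rglob("*.pdf"):
    documents.extend(PyPDFLoader(str(pdf)).load())

# Markdown, HTML, .py and .go are treated as plain text by the loader.
for pattern in ("*.md", "*.html", "*.py", "*.go"):
    loader = DirectoryLoader(str(DOCS), glob=f"**/{pattern}", loader_cls=TextLoader)
    documents.extend(loader.load())
documents.extend(
    DirectoryLoader(str(CONVERTED), glob="**/*.md", loader_cls=TextLoader).load()
)

# Chunk for retrieval; chunk sizes are illustrative, not tuned.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)
print(f"{len(documents)} documents -> {len(chunks)} chunks")
```

The resulting chunks could then be embedded into a local vector store and queried through LangChain or one of the projects above (localGPT and chatdocs follow this pattern), keeping all data on the machine.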

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives; a higher number therefore indicates a more popular project.


Related posts

  • Struggling with Local LLMs

    2 projects | /r/artificial | 4 Jul 2023
  • Local LLMs GPUs

    2 projects | /r/LocalLLaMA | 4 Jul 2023
  • Best commercially viable method to ask questions against a set of 30~ PDFs?

    3 projects | /r/LocalLLaMA | 28 Jun 2023
  • Document digest & oobabooga

    2 projects | /r/oobaboogazz | 27 Jun 2023
  • What is the best way to create a knowledge-base specific LLM chatbot ?

    4 projects | /r/LocalLLaMA | 26 Jun 2023