How can I overcome the ~4,000-token-per-input limit when summarizing documents?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • llama_farm

    Use a local llama LLM or OpenAI to chat with, discuss, or summarize your documents, YouTube videos, and so on.

  • I do it recursively: https://github.com/atisharma/llama_farm/blob/main/llama_farm/summaries.hy (a rough sketch of the recursive approach follows this list).

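The linked summaries.hy implements the recursive approach in Hy; as a rough illustration only, here is a minimal Python sketch of the same chunk-and-recurse idea. The token estimate, chunk budget, and the call_llm helper are hypothetical placeholders, not llama_farm's actual API.

```python
# Minimal sketch of recursive (map-reduce style) summarization for a
# ~4,000-token context window. CHUNK_TOKENS, estimate_tokens, and
# call_llm are illustrative assumptions, not llama_farm's real code.

MAX_TOKENS = 4000          # model context budget for a single request
CHUNK_TOKENS = 3000        # leave headroom for the prompt and the summary


def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4


def call_llm(prompt: str) -> str:
    # Placeholder for a real completion call (local llama.cpp, OpenAI, etc.).
    raise NotImplementedError


def split_into_chunks(text: str, chunk_tokens: int = CHUNK_TOKENS) -> list[str]:
    # Pack paragraphs greedily until the chunk budget is reached.
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Hard-split any single paragraph that is itself over budget.
        while estimate_tokens(para) > chunk_tokens:
            cut = chunk_tokens * 4
            chunks.append(para[:cut])
            para = para[cut:]
        if current and estimate_tokens(current + para) > chunk_tokens:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def summarize(text: str) -> str:
    # Base case: the text already fits, summarize it in one pass.
    if estimate_tokens(text) <= CHUNK_TOKENS:
        return call_llm(f"Summarize the following text:\n\n{text}")
    # Recursive case: summarize each chunk, then summarize the summaries.
    # Assuming each summary is shorter than its input, this terminates.
    partial = [summarize(chunk) for chunk in split_into_chunks(text)]
    return summarize("\n\n".join(partial))
```

The map-reduce structure is the usual workaround for the window limit: each chunk fits on its own, and summarizing the concatenated summaries keeps shrinking the text until a single pass fits.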
NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • Show HN: A Flask-Based Internet Radio Player Built in Hylang

    1 project | news.ycombinator.com | 16 Feb 2024
  • Demoscene and Video Game Music Streaming Radio Links

    1 project | news.ycombinator.com | 10 Feb 2024
  • Langchain Youtube Summarizer with Oooba api Custom LLM wrapper (and kobold)

    1 project | /r/oobaboogazz | 9 Jul 2023
  • Request for comment / contribution - local AI tool (Hy)

    1 project | /r/lisp | 20 Jun 2023
  • balacoon_tts: Fastest neural TTS on Raspberry

    1 project | /r/raspberry_pi | 16 Jun 2023