| | primeqa | llmware |
|---|---|---|
| Mentions | 5 | 9 |
| Stars | 702 | 3,173 |
| Growth | 0.4% | 6.7% |
| Activity | 8.2 | 9.8 |
| Latest Commit | 1 day ago | 1 day ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
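The exact activity formula isn't given here, only that recent commits are weighted more heavily than older ones. A minimal sketch of one plausible scheme, recency weighting with an exponential half-life (the `half_life_days` parameter and the whole formula are assumptions for illustration, not the site's actual metric):

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so a commit made today
    counts fully and older commits decay toward zero.
    Illustrative only; the site's real formula is not published here."""
    score = 0.0
    for d in commit_dates:
        age = (today - d).days
        score += 0.5 ** (age / half_life_days)
    return score

today = date(2024, 3, 1)
recent = [today - timedelta(days=n) for n in (1, 2, 3)]
old = [today - timedelta(days=n) for n in (300, 310, 320)]
# Three recent commits outweigh three old ones under this weighting.
assert activity_score(recent, today) > activity_score(old, today)
```

A relative score like the 8.2 vs 9.8 above would then come from ranking these raw scores across all tracked projects.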
primeqa
- State-of-the-Art Multilingual Question Answering
- ML tool to read PDF file and answer questions from its content
Check out this project; it might be of some help: primeqa.
- Natural language, chat-based, AI-assisted search for Gmail
Look into primeqa (github/primeqa). With some basic Python programming you can do a lot of things!
- PrimeQA
- With Just ~20 Lines of Python Code, You can Do ‘Retrieval Augmented GPT Based QA’ Using This Open Source Repository Called PrimeQA
Quick Read: https://www.marktechpost.com/2023/03/03/with-just-20-lines-of-python-code-you-can-do-retrieval-augmented-gpt-based-qa-using-this-open-source-repository-called-primeqa/ Paper: https://arxiv.org/pdf/2301.09715.pdf Github: https://github.com/primeqa/primeqa
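The pattern behind that headline, retrieve a relevant passage, then generate an answer grounded in it, can be sketched in a few lines. This is a toy illustration of the pattern only, not PrimeQA's actual API (see its docs for that); `retrieve` is a naive word-overlap ranker and `generate` is a stand-in for the GPT call the repo would make:

```python
# Toy retrieval-augmented QA: pick the passage that shares the most
# words with the question, then hand it to a generator prompt.

def retrieve(question, passages):
    """Return the passage with the largest word overlap with the question."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

def generate(prompt):
    # Stand-in for an LLM call (a real pipeline would call a GPT model here).
    return f"[answer grounded in: {prompt}]"

passages = [
    "PrimeQA is a repository for state-of-the-art multilingual QA.",
    "The Eiffel Tower is located in Paris.",
]
context = retrieve("Where is the Eiffel Tower?", passages)
answer = generate(f"Context: {context}\nQuestion: Where is the Eiffel Tower?")
```

Swapping the toy ranker for a neural retriever and the stub for a real model call is where a framework like PrimeQA earns its keep.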
llmware
- More Agents Is All You Need: LLMs performance scales with the number of agents
I couldn't agree more. You should check out LLMWare's SLIM agents (https://github.com/llmware-ai/llmware/tree/main/examples/SLI...). It focuses on pretty much exactly this: chaining multiple local LLMs together.
A really good topic that ties in with this is the need for deterministic sampling (I may have the terminology a bit wrong) depending on what the model is intended for. The LLMWare team did a good two-part video on this here as well (https://www.youtube.com/watch?v=7oMTGhSKuNY).
I think dedicated miniature LLMs are the way forward.
Disclaimer - Not affiliated with them in any way, just think it's a really cool project.
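The sampling-and-voting idea from that paper title is simple enough to sketch: ask several (possibly small) models the same question independently and keep the majority answer. The `agents` here are stub functions, not real model calls:

```python
from collections import Counter

def majority_vote(agents, question):
    """Query each agent independently and return the most common answer,
    per the sampling-and-voting scheme of 'More Agents Is All You Need'."""
    answers = [agent(question) for agent in agents]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Stub agents: two answer correctly, one hallucinates.
agents = [lambda q: "4", lambda q: "4", lambda q: "5"]
majority_vote(agents, "What is 2 + 2?")  # -> "4"
```

The paper's claim is that scaling the number of sampled agents improves accuracy; the voting step is what makes one agent's occasional mistake recoverable.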
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: LLMWare – Small Specialized Function Calling 1B LLMs for Multi-Step RAG
I've been building upon the LLMWare project - https://github.com/llmware-ai/llmware - for the past 3 months. The ability to run these models locally on standard consumer CPUs, along with the abstraction provided to chop and change between models and different processes is really cool.
I think these SLIM models are the start of something powerful for automating internal business processes and enhancing the use case of LLMs. Still kinda blows my mind that this is all running on my 3900X and also runs on a bog standard Hetzner server with no GPU.
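The multi-step pattern that comment describes, a small "function calling" model emitting structured output that a pipeline can route on instead of free-form prose, can be sketched like this. `sentiment_model` is a stub standing in for a SLIM-style model, not llmware's real API:

```python
import json

def sentiment_model(text):
    """Stub for a small function-calling model: returns structured JSON
    rather than prose, so downstream code can branch on it reliably."""
    label = "positive" if "cool" in text.lower() else "neutral"
    return json.dumps({"sentiment": label})

def dispatch(text, handlers):
    """Parse the model's structured output and route to the next step."""
    result = json.loads(sentiment_model(text))
    return handlers[result["sentiment"]](text)

handlers = {
    "positive": lambda t: "escalate-to-marketing",
    "neutral": lambda t: "archive",
}
dispatch("This project is really cool", handlers)  # -> "escalate-to-marketing"
```

Structured output is what makes chaining several small models viable: each step consumes JSON, not prose it would have to re-parse.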
- Show HN: LLMWare – Integrated Solution for RAG in Finance and Legal
- Llmware.ai – AI Tools for Financial, Legal and Compliance
- Open Source Advent Fun Wraps Up!
16. LLMWare by AI Bloks | GitHub | tutorial
- FLaNK Stack Weekly 16 October 2023
- Strategy for PDF data extraction and Display
What are some alternatives?
question_extractor - Generate question/answer training pairs out of raw text.
llm-client-sdk - SDK for using LLM
cherche - Neural Search
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
google-local-results-ai-server - A server code for serving BERT-based models for text classification. It is designed by SerpApi for heavy-load prototyping and production tasks, specifically for the implementation of the google-local-results-ai-parser gem.
inference - A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
extreme-bert - ExtremeBERT is a toolkit that accelerates the pretraining of customized language models on customized datasets, described in the paper “ExtremeBERT: A Toolkit for Accelerating Pretraining of Customized BERT”.
openstatus - 🏓 The open-source synthetic & real user monitoring platform 🏓
SquadCalc - A Minimalist Squad Mortar Calculator
SimplyRetrieve - Lightweight chat AI platform featuring custom knowledge, open-source LLMs, prompt-engineering, retrieval analysis. Highly customizable. For Retrieval-Centric & Retrieval-Augmented Generation.
MAX-Toxic-Comment-Classifier - Detect 6 types of toxicity in user comments.
Wails - Create beautiful applications using Go