| | recon-ng | llmware |
|---|---|---|
| Mentions | 4 | 9 |
| Stars | 3,456 | 3,839 |
| Growth | - | 22.9% |
| Activity | 0.0 | 9.8 |
| Latest Commit | almost 2 years ago | 4 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
recon-ng
- FLaNK Stack Weekly 19 Feb 2024
- lanmaster53/recon-ng: Open Source Intelligence gathering tool aimed at reducing the time spent harvesting information from open sources.
- Google User investigation [help requested, and a tool I found useful]
git clone https://github.com/lanmaster53/recon-ng.git
- Can someone help me install a source package on ubuntu
Why are you executing make configure? Where does it say to do that? Looking at the instructions it doesn't seem to tell you to do that anywhere.
llmware
- More Agents Is All You Need: LLMs performance scales with the number of agents
I couldn't agree more. You should check out LLMWare's SLIM agents (https://github.com/llmware-ai/llmware/tree/main/examples/SLI...). It focuses on pretty much exactly this: chaining multiple local LLMs together.
A really good topic that ties in with this is the need for deterministic sampling (I may have the terminology a bit incorrect) depending on what the model is intended for. The LLMWare team did a good two-part video on this here as well (https://www.youtube.com/watch?v=7oMTGhSKuNY).
I think dedicated miniature LLMs are the way forward.
Disclaimer - Not affiliated with them in any way, just think it's a really cool project.
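To make the idea in the comment above concrete, here is a minimal Python sketch of chaining small, specialized models where each step stays deterministic and returns structured output. It assumes a purely hypothetical interface (`SpecializedModel`, `run_pipeline`, and the placeholder classifiers are invented for illustration), not llmware's real API.

```python
# Hypothetical sketch (not llmware's real API): chain several small, specialized
# "models" and keep each step deterministic so the pipeline output is repeatable.
from dataclasses import dataclass
from typing import Callable


@dataclass
class SpecializedModel:
    """Stand-in for a small local model dedicated to one narrow task."""
    name: str
    run: Callable[[str], dict]  # text in, structured dict out


def classify_sentiment(text: str) -> dict:
    # Placeholder logic; a real SLIM-style model would be run with temperature 0
    # (greedy decoding) here so it always returns the same structured answer.
    return {"sentiment": "positive" if "good" in text.lower() else "neutral"}


def extract_entities(text: str) -> dict:
    # Second, independent placeholder "model" with its own narrow job.
    return {"entities": [w for w in text.split() if w.istitle()]}


def run_pipeline(text: str, steps: list[SpecializedModel]) -> dict:
    """Run each specialized step in order and collect its structured output."""
    results: dict = {}
    for step in steps:
        results[step.name] = step.run(text)
    return results


if __name__ == "__main__":
    pipeline = [
        SpecializedModel("sentiment", classify_sentiment),
        SpecializedModel("ner", extract_entities),
    ]
    print(run_pipeline("The SLIM examples from LLMWare were Good to test", pipeline))
```

Because every step returns a plain dict rather than free-form text, the chain can be validated and branched on programmatically, which is where deterministic decoding matters.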
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: LLMWare – Small Specialized Function Calling 1B LLMs for Multi-Step RAG
I've been building upon the LLMWare project - https://github.com/llmware-ai/llmware - for the past 3 months. The ability to run these models locally on standard consumer CPUs, along with the abstraction provided to chop and change between models and different processes, is really cool.
I think these SLIM models are the start of something powerful for automating internal business processes and enhancing the use case of LLMs. Still kinda blows my mind that this is all running on my 3900X and also runs on a bog standard Hetzner server with no GPU.
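For the "chop and change between models" point, here is a minimal sketch of swapping local models behind a common interface. The protocol and both model classes are hypothetical placeholders for illustration, not llmware's actual abstraction.

```python
# Hypothetical sketch (not llmware's actual abstraction): a thin interface that
# lets you swap which local, CPU-friendly model answers a RAG-style prompt
# without touching the surrounding logic. Both classes are placeholder stand-ins.
from typing import Protocol


class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str: ...


class TinyLocalModel:
    """Stand-in for a small quantized model running on a consumer CPU."""

    def generate(self, prompt: str) -> str:
        return f"[tiny model] answer based on: {prompt[:40]}..."


class LargerLocalModel:
    """Stand-in for a bigger model; same interface, so callers never change."""

    def generate(self, prompt: str) -> str:
        return f"[larger model] answer based on: {prompt[:40]}..."


def answer_question(model: TextGenerator, question: str, context: str) -> str:
    # Prompt assembly (the RAG part) is independent of which model is plugged in.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return model.generate(prompt)


if __name__ == "__main__":
    context = "LLMWare is a framework for RAG with small local models."
    for model in (TinyLocalModel(), LargerLocalModel()):
        print(answer_question(model, "What does LLMWare provide?", context))
```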
- Show HN: LLMWare – Integrated Solution for RAG in Finance and Legal
- Llmware.ai – AI Tools for Financial, Legal and Compliance
- Open Source Advent Fun Wraps Up!
16. LLMWare by Ai Bloks | Github | tutorial
- FLaNK Stack Weekly 16 October 2023
- Strategy for PDF data extraction and Display
What are some alternatives?
reor - Private & local AI personal knowledge management app.
llm-client-sdk - SDK for using LLMs
gnn - TensorFlow GNN is a library to build Graph Neural Networks on the TensorFlow platform.
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
fastembed - Fast, Accurate, Lightweight Python library to make State of the Art Embeddings
inference - A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
openstatus - 🏓 The open-source synthetic & real user monitoring platform 🏓
megabots - 🤖 State-of-the-art, production ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵
SimplyRetrieve - Lightweight chat AI platform featuring custom knowledge, open-source LLMs, prompt-engineering, retrieval analysis. Highly customizable. For Retrieval-Centric & Retrieval-Augmented Generation.
obsidian-copilot - 🤖 A prototype assistant for writing and thinking
Wails - Create beautiful applications using Go
vectorflow - VectorFlow is a high volume vector embedding pipeline that ingests raw data, transforms it into vectors and writes it to a vector DB of your choice.