| | langroid | txtai |
|---|---|---|
| Mentions | 15 | 356 |
| Stars | 1,698 | 7,111 |
| Growth | 21.4% | 4.2% |
| Activity | 9.8 | 9.3 |
| Latest commit | 1 day ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langroid
-
OpenAI: Streaming is now available in the Assistants API
This was indeed true in the beginning, and I don’t know if this has changed. Inserting messages with the assistant role is crucial for many reasons: for example, if you want to implement caching, or otherwise edit/compress a previous assistant response to reduce cost.
At the time I implemented a work-around in Langroid[1]: since you can only insert a “user” role message, prepend the content with ASSISTANT: whenever you want it to be treated as an assistant-role message. This actually works as expected, and I was able to implement caching. I explained it in this forum thread:
https://community.openai.com/t/add-custom-roles-to-messages-...
[1] the Langroid code that adds a message with a given role, using this above “assistant spoofing trick”:
https://github.com/langroid/langroid/blob/main/langroid/agen...
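The trick reduces to how the inserted message is built; a minimal sketch, assuming a hypothetical helper name (`as_assistant` is not the actual Langroid function):

```python
def as_assistant(content: str) -> dict:
    # The Assistants API only accepts role="user" on message insertion,
    # so prepend a marker the model treats as an assistant turn.
    return {"role": "user", "content": f"ASSISTANT: {content}"}

# The resulting dict can be passed wherever a thread message is created.
msg = as_assistant("Here is the cached answer from before.")
```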
- FLaNK Stack 29 Jan 2024
-
Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally. But if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching etc.), it would be nice to simply switch the API client to Ollama, without maintaining a whole separate branch of code that handles Ollama API responses. One way to do an easy switch is to use the litellm library as a go-between, but it’s not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI compatible API my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba’s Mistral formatting has issues[1] so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
Related question - I assume ollama auto detects and applies the right chat formatting template for a model?
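The kind of output `tokenizer.apply_chat_template` produces for Mistral-instruct models can be approximated in a few lines; this is a hand-rolled sketch of the template, not the HuggingFace implementation, and `mistral_format` is a hypothetical helper:

```python
def mistral_format(messages: list[dict]) -> str:
    # Approximation of the Mistral instruct chat template:
    # <s>[INST] user [/INST] assistant</s>[INST] user [/INST] ...
    out = "<s>"
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            out += f" {m['content']}</s>"
    return out
```

Getting these templates wrong (extra whitespace, missing `</s>`) measurably degrades model output, which is why delegating to the tokenizer's own template is the safer route.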
-
Pushing ChatGPT's Structured Data Support to Its Limits
We (like simpleaichat from OP) leverage Pydantic to specify the desired structured output; under the hood, Langroid translates it into either the OpenAI function-calling params or, for LLMs that don’t natively support function-calling, auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:
https://github.com/langroid/langroid/blob/main/langroid/agen...
We take this idea much further — you can define a method in a ChatAgent to “handle” the tool and attach the tool to the agent. For stateless tools you can define a “handle” method in the tool itself and it gets patched into the ChatAgent as the handler for the tool.
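The fallback path for LLMs without native function-calling can be sketched as injecting the tool's JSON schema into the system prompt; `tool_prompt` is a hypothetical helper, not Langroid's actual ToolMessage code:

```python
import json

def tool_prompt(name: str, description: str, parameters: dict) -> str:
    # For LLMs without native function calling, describe the tool's
    # JSON schema in the system prompt and require JSON-only output.
    schema = {"name": name, "description": description, "parameters": parameters}
    return (
        "You have access to the following tool. To use it, respond ONLY "
        "with a JSON object matching this schema:\n"
        + json.dumps(schema, indent=2)
    )
```

The model's JSON reply can then be validated against the same Pydantic model that generated the schema, so both code paths share one source of truth.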
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Many services/platforms are careless or disingenuous when they claim they “train” on your documents, when what they actually do is RAG.
An under-appreciated benefit of RAG is the ability to have the LLM cite sources for its answers (which are, in principle, automatically or manually verifiable). You lose this citation ability when you fine-tune on your documents.
In Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers) https://github.com/langroid/langroid
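The citation mechanism comes down to numbering the retrieved passages in the prompt so the LLM can refer back to them; a minimal sketch with a hypothetical `rag_prompt` helper (not Langroid's implementation):

```python
def rag_prompt(question: str, passages: list[str]) -> str:
    # Number each retrieved passage so the LLM can cite it as [n],
    # making every claim in the answer traceable to a source chunk.
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return (
        "Answer using ONLY the passages below, citing each claim as [n].\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )
```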
-
Build a search engine, not a vector DB
This resonates with the approach we’ve taken in Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers): our DocChatAgent uses a combination of lexical and semantic retrieval, reranking and relevance extraction to improve precision and recall:
https://github.com/langroid/langroid/blob/main/langroid/agen...
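One common way to combine lexical and semantic result lists is reciprocal rank fusion; a minimal sketch of the general technique, not the DocChatAgent code:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Each ranking is an ordered list of doc ids (e.g. one from BM25,
    # one from vector search). Score each doc by 1/(k + rank) summed
    # across rankings, so docs ranked highly by several retrievers win.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, 1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)
```

Reranking and relevance extraction would then run on this fused list to trim the context passed to the LLM.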
-
HuggingChat – ChatGPT alternative with open source models
In the Langroid library (a multi-agent framework from ex-CMU/UW-Madison researchers) we have these and more. For example here’s a script that combines web search and RAG:
https://github.com/langroid/langroid/blob/main/examples/docq...
-
SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
Thanks, also found Langroid: https://github.com/langroid/langroid/blob/main/README.md
- memory in ConversationalRetrievalChain removed
- [D] github repositories for ai web search agents
txtai
- Show HN: FileKitty – Combine and label text files for LLM prompt contexts
-
What contributing to Open-source is, and what it isn't
I tend to agree with this sentiment. Many junior devs and/or those in college want to contribute. Then they feel entitled to have a PR merged that they worked hard on, often without guidance. I'm all for working with people, but projects have standards and not all ideas make sense. In many cases, especially with commercial open source, the project is the base of a company's identity. So it's not just there for drive-by ideas to pad a resume or finish a school project.
For those who do want to do this, I'd recommend writing an issue and/or reaching out to the developers to engage in a dialogue. This takes work but it will increase the likelihood of a PR being merged.
Disclaimer: I'm the primary developer of txtai (https://github.com/neuml/txtai), an open-source vector database + RAG framework
-
Build knowledge graphs with LLM-driven entity extraction
txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
-
Bootstrap or VC?
Bootstrapping only works if you have the runway to do it and you don't feel the need to grow fast.
With NeuML (https://neuml.com), I went the bootstrapping route. I've been able to build a fairly successful open-source project (txtai, 6K stars: https://github.com/neuml/txtai) and a revenue-positive company. It's a "live within your means" strategy.
VC funding can have a snowball effect where you need more and more. Then you're in the loop of needing funding rounds to survive. The hope is someday you're acquired or start turning a profit.
I would say both have their pros and cons. Not all ideas have the luxury of time.
- txtai: An embeddings database for semantic search, graph networks and RAG
-
Ask HN: What happened to startups, why is everything so polished?
I agree that in many cases people are puffing their feathers to try to be something they're not (at least not yet). Some believe in the "fake it until you make it" mentality.
With NeuML (https://neuml.com), the website is a simple HTML page. On social media, I'm honest about what NeuML is: that I'm in my 40s with a family and not striving to be the next Steve Jobs. I've been able to build a fairly successful open-source project (txtai, 6K stars: https://github.com/neuml/txtai) and a revenue-positive company. For me, authenticity and being genuine is most important. I would say that being genuine has been far more of an asset than a liability.
-
Are we at peak vector database?
I'll add txtai (https://github.com/neuml/txtai) to the list.
There is still plenty of room for innovation in this space. Just need to focus on the right projects that are innovating and not the ones (re)working on problems solved in 2020/2021.
- Txtai: An all-in-one embeddings database for semantic search and LLM workflows
-
Generate knowledge with Semantic Graphs and RAG
txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
-
Show HN: Open-source Rule-based PDF parser for RAG
Nice project! I've long used Tika for document parsing given its maturity and the wide range of formats it supports. The XHTML output helps with chunking documents for RAG.
Here are a couple of examples:
- https://neuml.hashnode.dev/build-rag-pipelines-with-txtai
- https://neuml.hashnode.dev/extract-text-from-documents
Disclaimer: I'm the primary author of txtai (https://github.com/neuml/txtai).
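A sketch of why XHTML output helps: Tika preserves paragraph boundaries, so chunking falls out of splitting on `<p>` tags. `chunk_xhtml` is a hypothetical helper; a real pipeline would use a proper XML parser rather than regexes:

```python
import re

def chunk_xhtml(xhtml: str, max_chars: int = 500) -> list[str]:
    # Split on Tika's paragraph boundaries, strip remaining tags,
    # then pack whole paragraphs into chunks of roughly max_chars.
    paragraphs = [re.sub(r"<[^>]+>", " ", part) for part in xhtml.split("</p>")]
    paragraphs = [" ".join(p.split()) for p in paragraphs]
    paragraphs = [p for p in paragraphs if p]
    chunks: list[str] = []
    current = ""
    for p in paragraphs:
        if current and len(current) + len(p) + 1 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current} {p}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Packing whole paragraphs (instead of cutting at a fixed character offset) keeps each chunk semantically coherent, which improves retrieval quality downstream.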
What are some alternatives?
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
modelfusion - The TypeScript library for building AI applications.
tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
vectordb - A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search.
faiss - A library for efficient similarity search and clustering of dense vectors.
Adala - Adala: Autonomous DAta (Labeling) Agent framework
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
chidori - A reactive runtime for building durable AI agents
paperai - 📄 🤖 Semantic search and workflows for medical/scientific papers