langchaingo vs langroid

| | langchaingo | langroid |
|---|---|---|
| Mentions | 9 | 15 |
| Stars | 3,195 | 1,594 |
| Growth | - | 16.2% |
| Activity | 9.8 | 9.8 |
| Last commit | 2 days ago | 7 days ago |
| Language | Go | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langchaingo
-
How to use Retrieval Augmented Generation (RAG) for Go applications
Generative AI development has been democratised, thanks to powerful Machine Learning models (specifically Large Language Models such as Claude, Meta’s Llama 2, etc.) being exposed by managed platforms/services as API calls. This frees developers from infrastructure concerns and lets them focus on the core business problems. It also means that developers are free to use the programming language best suited for their solution. Python has typically been the go-to language for AI/ML solutions, but there is more flexibility in this area. In this post you will see how to leverage the Go programming language to use Vector Databases and techniques such as Retrieval Augmented Generation (RAG) with langchaingo. If you are a Go developer who wants to learn how to build generative AI applications, you are in the right place!
-
Build a Serverless GenAI solution with Lambda, DynamoDB, LangChain and Amazon Bedrock
The use case here is a similar one - a chat application. I will switch back to implementing things in Go using langchaingo (I used Python for the previous one) and continue to use Amazon Bedrock. But there are a few unique things you can explore in this blog post:
- LangChain for Go, the easiest way to write LLM-based programs in Go
- Langchaingo – LangChain in Idiomatic Go
- Agency: Pure Go LangChain Alternative
-
Building LangChain applications with Amazon Bedrock and Go - An introduction
langchaingo is the LangChain implementation for the Go programming language. This blog post covers how to extend langchaingo to use foundation models from Amazon Bedrock.
-
Zep: A long-term memory store for LLM apps, written in Go
Langchain Go is being actively developed https://github.com/tmc/langchaingo
langroid
-
OpenAI: Streaming is now available in the Assistants API
This was indeed true in the beginning, and I don’t know if this has changed. Inserting messages with the Assistant role is crucial for many reasons, such as if you want to implement caching, or otherwise edit/compress a previous assistant response for cost or other reasons.
At the time I implemented a work-around in Langroid[1]: since you can only insert a “user” role message, prepend the content with ASSISTANT: whenever you want it to be treated as an assistant role. This actually works as expected and I was able to do caching. I explained it in this forum:
https://community.openai.com/t/add-custom-roles-to-messages-...
[1] the Langroid code that adds a message with a given role, using the “assistant spoofing” trick described above:
https://github.com/langroid/langroid/blob/main/langroid/agen...
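To make the trick concrete, here is a minimal sketch assuming the OpenAI Python SDK's threads API; the function name is illustrative, and this is not Langroid's actual code:

```python
# Minimal sketch of the "assistant spoofing" work-around (illustrative,
# not Langroid's actual code). The Assistants API only accepts "user" as
# the role when inserting messages, so an assistant turn is stored as a
# user message whose content is prefixed with "ASSISTANT:".
from openai import OpenAI

client = OpenAI()

def insert_message(thread_id: str, role: str, content: str):
    """Insert a message with an arbitrary logical role into a thread."""
    if role == "assistant":
        content = f"ASSISTANT: {content}"  # spoof the assistant role
    return client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",  # the only role the API accepted at the time
        content=content,
    )
```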
- FLaNK Stack 29 Jan 2024
-
Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching etc.), it would be nice to be able to simply switch the API client to Ollama, without having to maintain a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it’s not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI-compatible API my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint (sketched below, after this comment). Regarding chat formatting, even ooba’s Mistral formatting has issues[1], so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
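A minimal sketch of the approach referenced in [2]: let HuggingFace's apply_chat_template format the chat with the model's own template instead of hand-rolling the prompt (the model name here is only an example):

```python
# Format a chat with the model's own template via the HF tokenizer,
# rather than hand-rolling the prompt format. Model name is an example.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [{"role": "user", "content": "What is RAG?"}]
prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # e.g. "<s>[INST] What is RAG? [/INST]"
```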
Related question - I assume ollama auto detects and applies the right chat formatting template for a model?
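For the easy-switch idea above, a minimal sketch of pointing the standard OpenAI client at a local OpenAI-compatible endpoint; the URL and model name are assumptions (text-generation-webui's default port is shown):

```python
# Reuse existing OpenAI-API code against a local server by overriding the
# base URL (older SDKs used openai.api_base). Endpoint/model are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # local OpenAI-compatible endpoint
    api_key="not-needed",                 # local servers typically ignore this
)

resp = client.chat.completions.create(
    model="local-model",  # many local servers ignore or map this field
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```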
-
Pushing ChatGPT's Structured Data Support to Its Limits
We (like simpleaichat from the OP) leverage Pydantic to specify the desired structured output, and under the hood Langroid translates it either to the OpenAI function-calling params or (for LLMs that don’t natively support fn-calling) auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:
https://github.com/langroid/langroid/blob/main/langroid/agen...
We take this idea much further — you can define a method in a ChatAgent to “handle” the tool and attach the tool to the agent. For stateless tools you can define a “handle” method in the tool itself and it gets patched into the ChatAgent as the handler for the tool.
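As an illustration of the mechanism (not Langroid's actual ToolMessage API), a Pydantic model's JSON schema can either be passed as a function-calling tool spec or rendered into the system prompt:

```python
# A Pydantic model describes the desired structured output. Its JSON schema
# can feed OpenAI function-calling, or be inlined into the system prompt for
# models without native function-calling. Names here are illustrative.
from pydantic import BaseModel

class CityInfo(BaseModel):
    city: str
    population: int

# Route 1: native function-calling -- hand the schema to the API as a tool.
tool_spec = {
    "type": "function",
    "function": {
        "name": "city_info",
        "description": "Report a city and its population",
        "parameters": CityInfo.model_json_schema(),
    },
}

# Route 2: no native function-calling -- put the schema in the system prompt.
system_prompt = (
    "Respond ONLY with JSON matching this schema:\n"
    f"{CityInfo.model_json_schema()}"
)
```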
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Many services/platforms are careless or disingenuous when they claim they “train” on your documents, when what they actually mean is that they do RAG.
An under-appreciated benefit of RAG is the ability to have the LLM cite sources for its answers (which are in principle automatically/manually verifiable). You lose this citation ability when you finetune on your documents.
In Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers) https://github.com/langroid/langroid
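To illustrate the citation point, a toy (non-Langroid) prompt pattern: number the retrieved chunks and ask the model to cite them, so answers can be checked against sources:

```python
# Toy RAG-with-citations prompt: retrieved chunks are numbered so the
# model's answer can cite, and a reader can verify, its sources.
chunks = {
    1: "Langroid is a multi-agent LLM framework.",
    2: "RAG retrieves passages and grounds answers in them.",
}
context = "\n".join(f"[{i}] {text}" for i, text in chunks.items())
prompt = (
    "Answer using ONLY the passages below, citing them like [1].\n\n"
    f"{context}\n\nQuestion: What does RAG do?"
)
print(prompt)  # the model's "[2]"-style citations map back to chunks
```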
-
Build a search engine, not a vector DB
This resonates with the approach we’ve taken in Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers): our DocChatAgent uses a combination of lexical and semantic retrieval, reranking and relevance extraction to improve precision and recall:
https://github.com/langroid/langroid/blob/main/langroid/agen...
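A toy sketch in the spirit of that hybrid approach (not the DocChatAgent code): fuse BM25 lexical scores with embedding similarity and keep the top results; the rank_bm25 and sentence-transformers libraries are assumed:

```python
# Hybrid retrieval toy: combine lexical (BM25) and semantic (embedding)
# scores. Real systems use reciprocal-rank fusion and a cross-encoder
# reranker; this linear mix is only illustrative.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = ["Go supports goroutines.", "Python has asyncio.", "RAG cites sources."]
query = "concurrency in Go"

bm25 = BM25Okapi([d.lower().split() for d in docs])
lex = bm25.get_scores(query.lower().split())          # lexical scores

model = SentenceTransformer("all-MiniLM-L6-v2")
sem = util.cos_sim(model.encode(query), model.encode(docs))[0]  # semantic

fused = [(0.5 * l + 0.5 * float(s), d) for l, s, d in zip(lex, sem, docs)]
for score, doc in sorted(fused, reverse=True)[:2]:
    print(f"{score:.3f}  {doc}")
```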
-
HuggingChat – ChatGPT alternative with open source models
In the Langroid library (a multi-agent framework from ex-CMU/UW-Madison researchers) we have these and more. For example here’s a script that combines web search and RAG:
https://github.com/langroid/langroid/blob/main/examples/docq...
-
SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
Thanks, also found Langroid: https://github.com/langroid/langroid/blob/main/README.md
- memory in ConversationalRetrievalChain removed
- [D] github repositories for ai web search agents
What are some alternatives?
yao - :rocket: A performance app engine to create web services and applications in minutes. Suitable for AI, IoT, Industrial Internet, Connected Vehicles, DevOps, Energy, Finance and many other use-cases.
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
langchain - 🦜🔗 Build context-aware reasoning applications
modelfusion - The TypeScript library for building AI applications.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
zep - Zep: Long-Term Memory for AI Assistants.
vectordb - A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search.
TaskEaseGPT - (WIP) A user-friendly, AI-powered task manager emphasizing efficient work over planning. Streamlines workflow with intelligent task generation & execution. Boost your productivity today!
Adala - Adala: Autonomous DAta (Labeling) Agent framework
langchaingo-amazon-bedrock-llm - Amazon Bedrock extension for langchaingo
chidori - A reactive runtime for building durable AI agents