- haystack: LLM orchestration framework for building customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) into pipelines or agents that can interact with your data. With advanced retrieval methods, it is best suited for building RAG, question answering, semantic search, or conversational chatbots.
- hallucination-leaderboard: A leaderboard comparing how often LLMs hallucinate when summarizing short documents.
- dify: An open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, letting you quickly go from prototype to production.
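The "connect components into pipelines" idea in the Haystack blurb above can be sketched in a few lines of plain Python. This is a toy illustration of the pattern only, not Haystack's actual API; the `Pipeline`, `file_converter`, and `retriever` names are invented for the sketch.

```python
# Toy pipeline: small, single-purpose components wired together so the
# output of one feeds the next. NOT Haystack's real API.

class Pipeline:
    def __init__(self):
        self.components = []

    def add(self, component):
        """Append a callable component; order defines the data flow."""
        self.components.append(component)
        return self

    def run(self, data):
        """Pass the payload through each component in sequence."""
        for component in self.components:
            data = component(data)
        return data

def file_converter(data):
    # Stand-in for a converter: turn raw text into "documents".
    data["documents"] = [line.strip() for line in data["raw"].splitlines()]
    return data

def retriever(data):
    # Stand-in for retrieval: keep documents containing the query term.
    data["hits"] = [d for d in data["documents"] if data["query"] in d]
    return data

pipe = Pipeline().add(file_converter).add(retriever)
result = pipe.run({"raw": "RAG basics\nvector DBs\nRAG eval", "query": "RAG"})
print(result["hits"])  # ['RAG basics', 'RAG eval']
```

Real frameworks add typed connections, branching, and validation on top of this, but the core contract (components exchanging data along declared edges) is the same.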
Haystack for production. We cannot afford breaking changes in our production apps. It's stable, the documentation is excellent, and did I mention it's STABLE!?
You should also check us out (https://vectara.com) - we provide RAG as a service, so you don't have to do all the heavy lifting and put the pieces together yourself.
You can look into Langroid, the multi-agent LLM framework from ex-CMU and UW Madison researchers: https://github.com/langroid/langroid. We take a measured approach, avoiding unnecessary code bloat and abstractions, and keep the code clean and stable (apps written 4 months ago still work).
Has anyone tried https://github.com/TengHu/ActionWeaver? It's a framework built around function calling.
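For context, "function calling" here means the model emits a structured call (a tool name plus JSON arguments) that your code dispatches to a real function. Here is a minimal sketch of that dispatch pattern in plain Python; it illustrates the general mechanism, not ActionWeaver's actual API, and the `tool`/`dispatch` names are invented for the sketch.

```python
import json

# Generic function-calling dispatch: the LLM returns a tool name plus
# JSON-encoded arguments, and we route that to a registered Python function.

TOOLS = {}

def tool(fn):
    """Register a plain function so the model is allowed to call it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stand-in for a real weather lookup.
    return f"It is sunny in {city}."

def dispatch(model_call: dict) -> str:
    """Execute the structured call the model produced."""
    fn = TOOLS[model_call["name"]]
    args = json.loads(model_call["arguments"])
    return fn(**args)

# A response shaped like what function-calling models emit:
call = {"name": "get_weather", "arguments": '{"city": "Berlin"}'}
print(dispatch(call))  # It is sunny in Berlin.
```

Frameworks in this space mostly automate the two boring parts: generating the JSON schema the model sees from the function signature, and validating the arguments before dispatch.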
If you are looking to develop QnA or chat-based apps, check out https://dify.ai. Do a quick check and see if it fits your requirements. You can integrate it with your app using the APIs it provides.
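As a rough idea of what that API integration looks like, here is a sketch that only builds the HTTP request for a chat message. The endpoint and field names are my recollection of Dify's chat-messages API and may not match the current docs, so verify them before use; nothing is actually sent here.

```python
import json

# Sketch: construct (but do not send) a request to a Dify-style chat API.
# Endpoint and payload fields are assumptions - check Dify's API reference.

API_URL = "https://api.dify.ai/v1/chat-messages"  # assumed endpoint

def build_chat_request(api_key: str, query: str, user_id: str) -> dict:
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "inputs": {},                 # app-level input variables, if any
        "query": query,               # the end user's message
        "response_mode": "blocking",  # or "streaming"
        "user": user_id,              # stable id for conversation tracking
    }
    return {"url": API_URL, "headers": headers, "body": json.dumps(payload)}

req = build_chat_request("app-xxxx", "What plans do you offer?", "user-42")
print(req["url"])
```

Sending it would then be a single POST, e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`, with the assistant's reply in the JSON response.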
Related posts
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: How does deploying a fine-tuned model work
- Dify, an end-to-end, visualized workflow to build/test LLM applications
- Show HN: Add AI code interpreter to any LLM via SDK
- A suite of tools designed to streamline the development cycle of LLM-based apps