- haystack: :mag: LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots.
- haltt4llm: This project is an attempt to create a common metric for testing LLMs for progress in eliminating hallucinations, which is currently the most serious obstacle to widespread adoption of LLMs for many real purposes.
Haystack Agents are designed so that you can easily use them with different LLM providers. You just need to implement one standardized wrapper class for your model provider of choice (https://github.com/deepset-ai/haystack/blob/7c5f9313ff5eedf2...)
So, back to your question: we will enable both ways in Haystack: 1) loading a local model directly via Haystack, AND 2) querying self-hosted models via REST (e.g. Hugging Face models running on AWS SageMaker). Our philosophy here: the model provider should be independent of your application logic and easy to switch.
In the current version, we support only option 1 for local models. This works for many of the models provided by Hugging Face, e.g. flan-t5. We are already working on adding support for more open-source models (e.g. Alpaca), as models like Flan-T5 don't perform well when used in Agents. Support for SageMaker endpoints is also on our list. Any options you'd like to see here?
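The idea of keeping the model provider independent of application logic can be sketched as a small wrapper abstraction. This is not Haystack's actual wrapper class, just a minimal illustration in plain Python: the class names, the `generate` method, and the REST payload shape (`{"inputs": ...}` / `{"generated_text": ...}`) are all assumptions for the sake of the example.

```python
from abc import ABC, abstractmethod
import json
import urllib.request


class ModelProvider(ABC):
    """Hypothetical provider interface: agent logic codes against this,
    so swapping backends never touches application code."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class LocalProvider(ModelProvider):
    """Option 1: a model loaded directly in-process (stubbed here;
    in practice this might wrap e.g. a transformers pipeline)."""

    def __init__(self, model):
        self.model = model  # any callable: prompt -> completion

    def generate(self, prompt: str) -> str:
        return self.model(prompt)


class RestProvider(ModelProvider):
    """Option 2: a self-hosted model behind a REST endpoint
    (e.g. a SageMaker endpoint). Payload shape is an assumption."""

    def __init__(self, endpoint_url: str):
        self.endpoint_url = endpoint_url

    def generate(self, prompt: str) -> str:
        payload = json.dumps({"inputs": prompt}).encode()
        req = urllib.request.Request(
            self.endpoint_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["generated_text"]
```

Because both backends satisfy the same interface, switching from a local flan-t5 to a hosted endpoint is a one-line change where the provider is constructed.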
FTR, people are trying to build systems to compare LLMs based on how good they are at saying "I don't know" (of course, actually knowing the answer is still rewarded more highly): https://github.com/manyoso/haltt4llm
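A scoring scheme in that spirit is easy to sketch: a correct answer scores highest, an honest "I don't know" scores lower but still positive, and a confident wrong answer (a hallucination) is penalized. The point weights below are illustrative assumptions, not haltt4llm's actual scoring.

```python
def score_answers(answers, truths, correct=2, idk=1, wrong=-1):
    """Toy hallucination-aware scoring: knowing is rewarded highest,
    admitting ignorance is rewarded less, hallucinating is penalized."""
    total = 0
    for ans, truth in zip(answers, truths):
        if ans.strip().lower() == "i don't know":
            total += idk        # honest abstention
        elif ans == truth:
            total += correct    # correct answer
        else:
            total += wrong      # confident wrong answer: hallucination
    return total
```

Under this scheme, a model that abstains on questions it can't answer will outscore one that guesses wrong, which is exactly the behavior such benchmarks try to select for.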
The project includes a TypeScript interface that GPT-4 is asked to follow. It works very well and reliably uses tools like Calculate, RequestDOM, etc.
https://github.com/cantino/browser-friend
That's great news! Especially since we implemented the integration between Haystack and Qdrant: https://github.com/qdrant/qdrant-haystack/
Related posts
- Is there a UI that can limit LLM tokens to a preset list?
- Chatgpt.js: An open-source powerful client-side JavaScript library for ChatGPT
- chatgpt.js VS bravegpt - a user suggested alternative | 2 projects | 5 Jul 2023
- Community survey: Do you use LangChain, LMQL or something else?