3) Hallucination is probably the biggest problem we solve for. To do evals for hallucination, we typically see our users use a combination of groundedness (does the context support the LLM response) and context relevance (is the retrieved context relevant to the query). There are also a bunch more covering the evaluations you mentioned (moderation models, sentiment, usefulness, etc.), and it's pretty easy to add custom evals.
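To make the two evals concrete, here is a minimal sketch of groundedness and context relevance as separate scoring functions. This is my own toy illustration, not TruLens code: a real setup would delegate the scoring to an LLM judge, but `llm_judge` below is a self-contained stand-in based on word overlap so the shape of the two checks is visible.

```python
# Toy sketch of the two hallucination evals described above.
# `llm_judge` is a stand-in for a real LLM grader (hypothetical helper).

def llm_judge(premise: str, claim: str) -> float:
    """Stand-in grader: fraction of claim words found in the premise."""
    premise_words = set(premise.lower().split())
    claim_words = claim.lower().split()
    if not claim_words:
        return 0.0
    return sum(w in premise_words for w in claim_words) / len(claim_words)

def groundedness(context: str, response: str) -> float:
    # Does the retrieved context support the LLM's response?
    return llm_judge(premise=context, claim=response)

def context_relevance(query: str, context: str) -> float:
    # Is the retrieved context relevant to the user's query?
    return llm_judge(premise=context, claim=query)

context = "The Eiffel Tower is 330 metres tall and located in Paris."
print(groundedness(context, "The Eiffel Tower is located in Paris."))  # high
print(context_relevance("How tall is the Eiffel Tower?", context))     # high
```

The point of splitting them is that a RAG answer can fail either way independently: the context can be relevant but the answer ungrounded, or the answer faithful to an irrelevant context.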
Also - my hot take is that gpt-3.5 is good enough for evals (sometimes better than gpt-4) if you give the LLM detailed enough instructions on how to do the eval.
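"Enough instructions" in practice means giving the grader an explicit rubric rather than a one-liner. Here is an illustrative prompt template in that spirit; the rubric wording and field names are my own, not from any particular library.

```python
# Illustrative grading rubric: detailed instructions so a cheaper
# (gpt-3.5-class) model can grade groundedness consistently.
GROUNDEDNESS_RUBRIC = """You are grading whether a RESPONSE is supported by a CONTEXT.
Score 0-2 using these rules:
- 2: every factual claim in the RESPONSE appears in the CONTEXT.
- 1: some claims are supported, others are absent from the CONTEXT.
- 0: the RESPONSE contradicts the CONTEXT or is unsupported by it.
Reply with the score only.

CONTEXT:
{context}

RESPONSE:
{response}
"""

def build_eval_prompt(context: str, response: str) -> str:
    # Fill the rubric template with the pair to be graded.
    return GROUNDEDNESS_RUBRIC.format(context=context, response=response)

prompt = build_eval_prompt("Paris is the capital of France.",
                           "France's capital is Paris.")
print(prompt)
```

A constrained output format ("reply with the score only") also makes the grader's answer trivially parseable, which matters when you run it over thousands of traces.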
website: https://www.trulens.org/
We struggled with this ourselves while building LLM-based products and then open-sourced our observability/monitoring tool [1]. Many use it to track RAG and agents in production, run custom evals on the production traces (focused on hallucination), and compare how metrics differ across releases or customers. Feel free to DM if there is something specific you are looking to solve, happy to help.
[1] https://github.com/langfuse/langfuse
Here's the note I have on that: “For chatbot interfaces, an emerging approach is to have another agent simulate the user (as opposed to a more classic approach based on token prediction probs on chat transcripts, which I think you're referencing). Then still use a model for grading. Only place I've seen this so far: https://github.com/Forethought-Technologies/AutoChain/blob/m... ” - AI Startup Founder
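The simulated-user pattern in that note can be sketched as a three-role loop: one model plays the user, the chatbot under test responds, and a grader model scores the transcript afterwards. All three roles are stubbed with plain functions below (my own illustration, not AutoChain's API) so the loop structure is the point; in a real harness each stub would be an LLM call.

```python
# Sketch of the "agent simulating the user" eval loop.
# All three roles are hypothetical stubs standing in for LLM calls.

def simulated_user(turn: int) -> str:
    # Stand-in for an LLM playing a customer with a fixed goal.
    script = ["I want to cancel my subscription.",
              "Yes, the monthly plan.",
              "Thanks, that's all."]
    return script[turn]

def chatbot(message: str) -> str:
    # Stand-in for the chatbot under test.
    return f"Understood: {message}"

def grader(transcript: list[tuple[str, str]]) -> float:
    # Stand-in for the grading model: did the bot acknowledge every turn?
    acknowledged = sum(1 for user, bot in transcript if user in bot)
    return acknowledged / len(transcript)

transcript = []
for turn in range(3):
    user_msg = simulated_user(turn)
    bot_msg = chatbot(user_msg)
    transcript.append((user_msg, bot_msg))

print(grader(transcript))  # 1.0: every user turn was acknowledged
```

The attraction over transcript-replay evals is that the simulated user reacts to what the bot actually says, so multi-turn failure modes surface that a fixed transcript would never exercise.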