shapash vs trulens
| | shapash | trulens |
|---|---|---|
| Mentions | 8 | 14 |
| Stars | 2,642 | 1,612 |
| Stars growth (monthly) | 1.3% | 19.9% |
| Activity | 8.6 | 9.8 |
| Latest commit | about 1 month ago | 2 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
shapash
- GitHub - MAIF/shapash: Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?
- This A.I.-generated artwork, Théâtre D'opéra Spatial, won first place at an art competition, and the art community isn't happy about it
There's work being done in that regard (like this python module), but as far as I know it's very clearly statistical guesstimates, and though it "works", the mathematical foundations are still somewhat shaky. There are heuristics in there we can't get rid of for now. But it's still better than nothing. Waaaaaay better than nothing.
- Hacker News top posts: Jun 14, 2022
Shapash – Python library to make machine learning interpretable (4 comments)
- Shapash – Python library to make machine learning interpretable
- State of the Art data drift libraries on Python?
Try out eurybia, from the author of shapash, which is a brilliant library as well.
- [P] It Is Now Possible To Generate a Model Audit Report with Shapash
With the new version of Shapash now available, you can document each model you release into production. Within a few lines of code, you can generate an HTML report containing all the information about your model (and its associated performance), the data it uses, its learning strategy, … This report is designed to be easily shared with a Data Protection Officer, an internal audit department, a risk control department, a compliance department, or anyone who wants to understand the work.
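The audit report described above boils down to collecting model, data, and performance metadata and rendering it as shareable HTML. A minimal stdlib sketch of that idea (this is illustrative only, not the Shapash API; all section and field names are made up):

```python
# Illustrative sketch of an audit-report builder (NOT the Shapash API):
# gather metadata sections and render them as a single HTML document.
import html

def build_audit_report(sections):
    # sections: mapping of section title -> mapping of field name -> value.
    parts = ["<html><body><h1>Model audit report</h1>"]
    for title, fields in sections.items():
        parts.append(f"<h2>{html.escape(title)}</h2><ul>")
        for key, value in fields.items():
            parts.append(f"<li><b>{html.escape(key)}</b>: {html.escape(str(value))}</li>")
        parts.append("</ul>")
    parts.append("</body></html>")
    return "".join(parts)

# Hypothetical report content for demonstration.
report = build_audit_report({
    "Model": {"algorithm": "GradientBoosting", "version": "1.2"},
    "Training data": {"rows": 120_000, "features": 42},
    "Performance": {"AUC (test)": 0.87},
})

# Write the report so it can be shared with an auditor or DPO.
with open("audit_report.html", "w") as f:
    f.write(report)
```

Shapash's real report additionally embeds explainability plots and project information; the point here is only the shape of the workflow: metadata in, standalone HTML out.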
- [D] Has anyone ever used the SHAP and LIME models in machine learning?
trulens
- Why Vector Compression Matters
Retrieval using a single vector is called dense passage retrieval (DPR), because an entire passage (dozens to hundreds of tokens) is encoded as a single vector. ColBERT instead encodes a vector per token, where each vector is influenced by surrounding context. This leads to meaningfully better results; for example, here's ColBERT running on Astra DB compared to DPR using openai-v3-small vectors, evaluated with TruLens on the Braintrust Coda Help Desk data set. ColBERT easily beats DPR on correctness, context relevance, and groundedness.
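The scoring difference between the two approaches is easy to sketch. A toy illustration (not the ColBERT implementation; the 2-d "embeddings" are made up): DPR scores with one dot product per passage, while ColBERT's late interaction ("MaxSim") matches each query token vector against its best passage token vector and sums the maxima.

```python
# Toy comparison of DPR-style single-vector scoring vs ColBERT-style
# late interaction (MaxSim). Vectors here are tiny made-up examples.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dpr_score(query_vec, passage_vec):
    # DPR: one vector for the whole query, one for the whole passage.
    return dot(query_vec, passage_vec)

def maxsim_score(query_token_vecs, passage_token_vecs):
    # ColBERT: for each query token vector, take its best-matching
    # passage token vector, then sum those maxima over query tokens.
    return sum(
        max(dot(q, d) for d in passage_token_vecs)
        for q in query_token_vecs
    )

query_tokens = [[1.0, 0.0], [0.0, 1.0]]
passage_tokens = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]

print(round(maxsim_score(query_tokens, passage_tokens), 3))  # 1.7
```

Because MaxSim keeps a vector per token, fine-grained matches survive that would be averaged away in a single passage vector, which is also why vector compression matters so much more for ColBERT-style indexes.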
- FLaNK AI Weekly 18 March 2024
- First 15 Open Source Advent projects
12. TruLens by TruEra | Github | tutorial
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- How are generative AI companies monitoring their systems in production?
3) Hallucination is probably the biggest problem we solve for. To do evals for hallucination, we typically see our users combine groundedness (does the context support the LLM response?) and context relevance (is the retrieved context relevant to the query?). There's also a bunch more for the evaluations you mentioned (moderation models, sentiment, usefulness, etc.), and it's pretty easy to add custom evals.
Also - my hot take is that gpt-3.5 is good enough for evals (and sometimes better than gpt-4) if you give the LLM enough instructions on how to do the eval.
website: https://www.trulens.org/
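To make the two metrics concrete: TruLens implements groundedness and context relevance as LLM-judged feedback functions, but the quantities they score can be sketched with a crude token-overlap stand-in (this is a simplified illustration only, not the TruLens API or its actual scoring):

```python
# Crude word-overlap stand-ins for the two RAG feedback metrics described
# above. Real implementations (e.g. in TruLens) use an LLM as the judge.

def _tokens(text):
    return set(text.lower().split())

def groundedness(context, response):
    # Does the retrieved context support the LLM response?
    # Here: fraction of response tokens that also appear in the context.
    resp = _tokens(response)
    return len(resp & _tokens(context)) / len(resp) if resp else 0.0

def context_relevance(query, context):
    # Is the retrieved context relevant to the query?
    # Here: fraction of query tokens that also appear in the context.
    q = _tokens(query)
    return len(q & _tokens(context)) / len(q) if q else 0.0

query = "when was the library founded"
context = "the library was founded in 1901 by the city council"
response = "the library was founded in 1901"

print(context_relevance(query, context))  # 0.8
print(groundedness(context, response))    # 1.0
```

A response that introduces claims absent from the context drags the groundedness score down, which is exactly the hallucination signal the comment describes.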
- FLaNK Stack Weekly 28 August 2023
- [P] TruLens-Eval is an open source project for eval & tracking LLM experiments.
The team at TruEra recently released an open source project for evaluation & tracking of LLM applications called TruLens-Eval. We’ve specifically targeted retrieval-augmented QA as a core use case and so far we’ve seen it used for comparing different models and parameters, prompts, vector-db configurations and query planning strategies. I’d love to get your feedback on it.
- [D] Hardest thing about building with LLMs?
- Stop Evaluating LLMs on Vibes
- OSS library for attribution and interpretation methods for deep nets
What are some alternatives?
shap - A game theoretic approach to explain the output of any machine learning model.
langfuse - 🪢 Open source LLM engineering platform. Observability, metrics, evals, prompt management, testing, prompt playground, datasets, LLM evaluations -- 🍊YC W23 🤖 integrate via Typescript, Python / Decorators, OpenAI, Langchain, LlamaIndex, Litellm, Instructor, Mistral, Perplexity, Claude, Gemini, Vertex
interpret - Fit interpretable models. Explain blackbox machine learning.
probability - Probabilistic reasoning and statistical analysis in TensorFlow
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
GlassCode - This plugin allows you to make JetBrains IDEs to be fully transparent while keeping the code sharp and bright.
embedchain - Personalizing LLM Responses
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
machine_learning_basics - Plain python implementations of basic machine learning algorithms
eurybia - ⚓ Eurybia monitors model drift over time and secures model deployment with data validation
ML-Workspace - 🛠 All-in-one web-based IDE specialized for machine learning and data science.