| | trulens | n8n |
|---|---|---|
| Mentions | 14 | 298 |
| Stars | 1,629 | 40,874 |
| Growth | 7.9% | 2.4% |
| Activity | 9.8 | 10.0 |
| Last commit | 4 days ago | 4 days ago |
| Language | Jupyter Notebook | TypeScript |
| License | MIT License | Apache 2.0 with Commons Clause |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trulens
- Why Vector Compression Matters
Retrieval using a single vector is called dense passage retrieval (DPR), because an entire passage (dozens to hundreds of tokens) is encoded as a single vector. ColBERT instead encodes one vector per token, where each vector is influenced by the surrounding context. This leads to meaningfully better results; for example, here’s ColBERT running on Astra DB compared against DPR using openai-v3-small vectors, with both evaluated using TruLens on the Braintrust Coda Help Desk data set. ColBERT easily beats DPR on correctness, context relevance, and groundedness.
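The difference between the two scoring schemes can be sketched in a few lines of NumPy. This is a simplified illustration, not the ColBERT implementation: the token embeddings are assumed to be pre-computed and normalized, and the toy vectors are made up.

```python
import numpy as np

def dpr_score(query_vec, passage_vec):
    """DPR: one vector per passage, scored by a single dot product."""
    return float(np.dot(query_vec, passage_vec))

def colbert_maxsim(query_toks, passage_toks):
    """ColBERT late interaction: for each query token embedding, take its
    maximum similarity over all passage token embeddings, then sum over
    the query tokens (the "MaxSim" operator)."""
    sim = query_toks @ passage_toks.T    # (num_q, num_p) similarity matrix
    return float(sim.max(axis=1).sum())  # best passage match per query token

# Toy example: 3 query-token vectors, 4 passage-token vectors, 2-D embeddings.
q = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
p = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.6], [0.6, 0.8]])
print(colbert_maxsim(q, p))  # 3.0: each query token finds a perfect match
```

Because every query token is matched independently, a passage only needs some token that covers each part of the query, rather than one averaged vector that covers all of it at once.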
- FLaNK AI Weekly 18 March 2024
- First 15 Open Source Advent projects
12. TruLens by TruEra | Github | tutorial
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- How are generative AI companies monitoring their systems in production?
Hallucination is probably the biggest problem we solve for. To do evals for hallucination, we typically see our users use a combination of groundedness (does the context support the LLM response) and context relevance (is the retrieved context relevant to the query). There's also a bunch more for the evaluations you mentioned (moderation models, sentiment, usefulness, etc.), and it's pretty easy to add custom evals.
Also - my hot take is that gpt-3.5 is good enough for evals (and sometimes better than gpt-4) if you give the LLM enough instructions on how to do the eval.
website: https://www.trulens.org/
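A minimal sketch of what such feedback functions compute. This is not the TruLens API; the LLM judge is replaced here by a crude token-overlap stand-in so the example runs offline, and all names are illustrative.

```python
def token_overlap(a: str, b: str) -> float:
    """Crude stand-in for an LLM judge: fraction of b's words found in a.
    A real feedback function would prompt a model (e.g. gpt-3.5 with
    detailed eval instructions) and parse its score instead."""
    a_words, b_words = set(a.lower().split()), set(b.lower().split())
    return len(a_words & b_words) / max(len(b_words), 1)

def groundedness(context: str, response: str) -> float:
    """Does the retrieved context support the LLM response?"""
    return token_overlap(context, response)

def context_relevance(query: str, context: str) -> float:
    """Is the retrieved context relevant to the user's query?"""
    return token_overlap(context, query)

record = {
    "query": "what is the capital of france",
    "context": "paris is the capital of france",
    "response": "the capital of france is paris",
}
print(groundedness(record["context"], record["response"]))    # 1.0: fully supported
print(context_relevance(record["query"], record["context"]))  # high overlap
```

Scoring each retrieval/response pair on both axes separates "the model made something up" (low groundedness) from "the retriever fetched the wrong thing" (low context relevance).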
- FLaNK Stack Weekly 28 August 2023
- [P] TruLens-Eval is an open source project for eval & tracking LLM experiments.
The team at TruEra recently released an open source project for evaluation & tracking of LLM applications called TruLens-Eval. We’ve specifically targeted retrieval-augmented QA as a core use case and so far we’ve seen it used for comparing different models and parameters, prompts, vector-db configurations and query planning strategies. I’d love to get your feedback on it.
- [D] Hardest thing about building with LLMs?
- Stop Evaluating LLMs on Vibes
- OSS library for attribution and interpretation methods for deep nets
n8n
- Ask HN: Is there a visual data mapper for JSON transformation?
I believe you can achieve that with n8n. I've used it in the past (and it's still running) for some data transformation and a little more. Possibly a similar case to what you're describing.
https://n8n.io/
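The kind of declarative field mapping a visual JSON mapper (or an n8n transformation node) performs can be sketched in plain Python. The mapping-spec format here is made up for illustration:

```python
import json

# Declarative mapping: output field -> dotted path into the input document.
MAPPING = {
    "full_name": "user.name",
    "city": "user.address.city",
}

def get_path(doc: dict, dotted: str):
    """Walk a dotted path like 'user.address.city' through nested dicts."""
    for key in dotted.split("."):
        doc = doc[key]
    return doc

def transform(doc: dict, mapping: dict) -> dict:
    """Build the output document by resolving each mapped path."""
    return {out: get_path(doc, path) for out, path in mapping.items()}

src = {"user": {"name": "Ada", "address": {"city": "London"}}}
print(json.dumps(transform(src, MAPPING)))
# {"full_name": "Ada", "city": "London"}
```

A visual mapper is essentially a UI for building the `MAPPING` table by dragging source fields onto target fields.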
- Dify, a visual workflow to build/test LLM applications
- Helm 101: Creating Helm Charts
A startup, "DevOps Solutions," adopts Helm to streamline its Kubernetes deployments. You're a consultant tasked with creating a basic Helm chart for n8n. It should be customizable for different environments using values files.
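A minimal skeleton for that exercise might look like the following. This is a sketch, not an official n8n chart; the chart name, the `n8nio/n8n` image, and the value names are assumptions.

```yaml
# Chart.yaml
apiVersion: v2
name: n8n
description: A basic Helm chart for deploying n8n
version: 0.1.0

---
# values.yaml -- defaults, overridable per environment, e.g.
#   helm install n8n ./n8n -f values.prod.yaml
image:
  repository: n8nio/n8n
  tag: latest
replicaCount: 1
service:
  type: ClusterIP
  port: 5678
```

A deployment template would then reference these as `{{ .Values.image.repository }}:{{ .Values.image.tag }}`, so each environment only supplies its own values file.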
- IFTTT is killing its pay-what-you-want Legacy Pro plan
- A Year of Self-Hosting: 6 Open-Source Projects That Surprised Me in 2023
n8n.io - a powerful workflow automation tool
- Open Source alternatives to tools you Pay for
N8N - Open Source Alternative to Zapier
- Ask YC: tracking events platform and no-code workflow
- Your privacy is optional
N8N - anything that I would have used Zapier or IFTTT for I now use N8N. It is a bit harder to use but more powerful.
- To whoever uses Supabase as their backend: what's your full no-code / low-code stack?
I'm using Weweb as my front end and Supabase as my back end. I'm also looking into n8n.io to run some of the backend logic that I'm either unsure how to code myself within Supabase, or unsure whether Supabase can perform at all. Curious what stack or tools other Supabase users are using?
- Show HN: Keep – GitHub Actions for your monitoring tools
This is similar to something I saw before: https://n8n.io
What are some alternatives?
langfuse - 🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
Node RED - Low-code programming for event-driven applications
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Huginn - Create agents that monitor and act on your behalf. Your agents are standing by!
probability - Probabilistic reasoning and statistical analysis in TensorFlow
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
StackStorm - StackStorm (aka "IFTTT for Ops") is event-driven automation for auto-remediation, incident responses, troubleshooting, deployments, and more for DevOps and SREs. Includes rules engine, workflow, 160 integration packs with 6000+ actions (see https://exchange.stackstorm.org) and ChatOps. Installer at https://docs.stackstorm.com/install/index.html
embedchain - Personalizing LLM Responses
budibase - Budibase is an open-source low code platform that helps you build internal tools in minutes 🚀
machine_learning_basics - Plain python implementations of basic machine learning algorithms
Home Assistant - :house_with_garden: Open source home automation that puts local control and privacy first.