| | arroyo | trulens |
|---|---|---|
| Mentions | 13 | 14 |
| Stars | 3,326 | 1,646 |
| Growth | 3.2% | 8.9% |
| Activity | 9.6 | 9.8 |
| Last commit | 6 days ago | 7 days ago |
| Language | Rust | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arroyo
- FLaNK AI Weekly 18 March 2024
- Arroyo 0.8 released - streaming SQL engine
- Query Engines: Push vs. Pull
Interesting - I looked into your code a bit. I found your window aggregation library [1]. You may be interested in looking into the Rust implementation of some of the research work I've been a part of [2].
In Flink, I believe the reason they need to implement their own backpressure system is that they multiplex TCP connections: multiple logical streams flow through a single TCP connection. In that case, you need to do some work to 1) detect which logical stream is the one that's blocking, and 2) avoid blocking the connection, because other logical streams may still be able to use it.
Thinking it through, what Flink's approach buys is not necessarily better performance, but a manageable number of connections. Imagine a process P1 with operators A, B, and C, and a process P2 with D, E, and F. Now suppose this is a shuffle, where A, B, and C are fully connected to D, E, and F. In my old system, that would be 3 × 3 = 9 TCP connections; in Flink, it's 1.
[1] https://github.com/ArroyoSystems/arroyo/blob/master/arroyo-w...
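For intuition, here is a minimal, hypothetical sketch of credit-based flow control for logical streams multiplexed over one shared connection, which is roughly how Flink avoids head-of-line blocking in the scenario described above. All names (`MultiplexedSender`, the stream ids) are invented for illustration; this is not Arroyo or Flink code.

```python
# Hypothetical sketch: per-stream credits over one shared connection.
# A stream may only write to the wire while it holds credit, so one slow
# receiver stalls its own logical stream without blocking the others.
from collections import deque

class MultiplexedSender:
    def __init__(self, stream_ids, initial_credits=1):
        self.credits = {s: initial_credits for s in stream_ids}
        self.queues = {s: deque() for s in stream_ids}
        self.wire = []  # stands in for the single shared TCP connection

    def send(self, stream_id, record):
        self.queues[stream_id].append(record)
        self._pump()

    def grant_credit(self, stream_id, n=1):
        # The receiver announces buffer space for one logical stream.
        self.credits[stream_id] += n
        self._pump()

    def _pump(self):
        # Only streams holding credit use the shared connection.
        for s, q in self.queues.items():
            while q and self.credits[s] > 0:
                self.wire.append((s, q.popleft()))
                self.credits[s] -= 1

sender = MultiplexedSender(["A->D", "A->E"], initial_credits=1)
sender.send("A->D", "r1")
sender.send("A->D", "r2")  # out of credit: queued, not sent
sender.send("A->E", "r3")  # the other stream still flows
```

After these three sends, only `r1` and `r3` are on the wire; `r2` waits until `grant_credit("A->D")` arrives, which is exactly the "don't block the whole connection" property the comment describes.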
- Arroyo
- Show HN: Arroyo – Write SQL on streaming data
- Release v0.3.0 · ArroyoSystems/arroyo - Stream Processing Engine
- Arroyo 0.2 released - Rust stream processing engine, now on Kubernetes
- Distributed stream processing engine written in Rust
- ArroyoSystems/arroyo: Arroyo is a distributed stream processing engine written in Rust
- Arroyo, a new open-source SQL stream processing engine written in Rust
trulens
- Why Vector Compression Matters
Retrieval using a single vector is called dense passage retrieval (DPR), because an entire passage (dozens to hundreds of tokens) is encoded as a single vector. ColBERT instead encodes a vector per token, where each vector is influenced by surrounding context. This leads to meaningfully better results; for example, here's ColBERT running on Astra DB compared against DPR using openai-v3-small vectors, evaluated with TruLens on the Braintrust Coda Help Desk data set. ColBERT easily beats DPR on correctness, context relevance, and groundedness.
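The scoring difference between the two approaches can be shown in a few lines. This is an illustrative sketch with toy random vectors, not real embeddings: DPR-style retrieval scores with one dot product per passage, while ColBERT-style late interaction (MaxSim) matches each query-token vector against its best passage-token vector and sums the maxima.

```python
# Toy contrast: single-vector (DPR-style) vs late-interaction (MaxSim) scoring.
import numpy as np

def dpr_score(query_vec, passage_vec):
    # One vector per passage: a single dot product.
    return float(np.dot(query_vec, passage_vec))

def maxsim_score(query_toks, passage_toks):
    # One vector per token: each query token takes its best-matching
    # passage token, and the per-token maxima are summed.
    sims = query_toks @ passage_toks.T  # shape (q_tokens, p_tokens)
    return float(sims.max(axis=1).sum())

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query-token vectors, dim 8
p = rng.normal(size=(20, 8))  # 20 passage-token vectors

print("maxsim:", maxsim_score(q, p))
print("dpr   :", dpr_score(q.mean(axis=0), p.mean(axis=0)))
```

Because MaxSim keeps a vector per token, each query term can match a different part of the passage, which is where the gains in context relevance come from; the cost is storing many more vectors per passage, which is why vector compression matters.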
- FLaNK AI Weekly 18 March 2024
- First 15 Open Source Advent projects
12. TruLens by TruEra | Github | tutorial
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- How are generative AI companies monitoring their systems in production?
3) Hallucination is probably the biggest problem we solve for. To do evals for hallucination, we typically see our users use a combination of groundedness (does the context support the LLM response) and context relevance (is the retrieved context relevant to the query). There's also a bunch more for the evaluations you mentioned (moderation models, sentiment, usefulness, etc.) and it's pretty easy to add custom evals.
Also - my hot take is that gpt-3.5 is good enough for evals (and sometimes better than gpt-4) if you give the LLM enough instructions on how to do the eval.
website: https://www.trulens.org/
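The two evals named in the comment have a simple shape: groundedness scores the response against the retrieved context, and context relevance scores the context against the query. In a real setup such as TruLens these are judged by an LLM with detailed instructions; the sketch below substitutes a toy word-overlap scorer (all function names invented) so the shape is runnable without an API key.

```python
# Hedged sketch of groundedness and context-relevance evals for RAG.
# A word-overlap score stands in for the LLM judge used in practice.

def _overlap(a: str, b: str) -> float:
    """Fraction of a's words that also appear in b (toy judge stand-in)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa) if wa else 0.0

def groundedness(response: str, context: str) -> float:
    # Does the retrieved context support the LLM response?
    return _overlap(response, context)

def context_relevance(context: str, query: str) -> float:
    # Is the retrieved context relevant to the query?
    return _overlap(query, context)

ctx = "arroyo is a distributed stream processing engine written in rust"
print(groundedness("arroyo is written in rust", ctx))  # 1.0: fully supported
print(context_relevance(ctx, "what language is arroyo written in"))
```

Swapping the overlap stand-in for an LLM call (with explicit scoring instructions, per the gpt-3.5 remark above) turns this into the groundedness/context-relevance combination the comment describes; hallucinations show up as responses with low groundedness despite high context relevance.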
- FLaNK Stack Weekly 28 August 2023
- [P] TruLens-Eval is an open source project for eval & tracking LLM experiments.
The team at TruEra recently released an open source project for evaluation & tracking of LLM applications called TruLens-Eval. We’ve specifically targeted retrieval-augmented QA as a core use case and so far we’ve seen it used for comparing different models and parameters, prompts, vector-db configurations and query planning strategies. I’d love to get your feedback on it.
- [D] Hardest thing about building with LLMs?
- Stop Evaluating LLMs on Vibes
- OSS library for attribution and interpretation methods for deep nets
What are some alternatives?
bytewax - Python Stream Processing
langfuse - 🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
risingwave - SQL stream processing, analytics, and management. We decouple storage and compute to offer speedy bootstrapping, dynamic scaling, time-travel queries, and efficient joins.
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Benthos - Fancy stream processing made operationally mundane
probability - Probabilistic reasoning and statistical analysis in TensorFlow
cli - Railway CLI
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
feldera - Feldera Continuous Analytics Platform
embedchain - Personalizing LLM Responses
timely-dataflow - A modular implementation of timely dataflow in Rust
machine_learning_basics - Plain python implementations of basic machine learning algorithms