| | deepeval | chdb |
|---|---|---|
| Mentions | 22 | 18 |
| Stars | 1,923 | 1,736 |
| Growth | 20.2% | 5.3% |
| Activity | 9.9 | 9.5 |
| Latest commit | 2 days ago | 13 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deepeval
- Unit Testing LLMs with DeepEval
For the last year I have been working with different LLMs (OpenAI, Claude, PaLM, Gemini, etc.), and I have been impressed with their performance. With the rapid advancements in AI and the increasing complexity of LLMs, it has become crucial to have a reliable testing framework that helps us maintain the quality of our prompts and ensure the best possible outcomes for our users. Recently, I discovered DeepEval (https://github.com/confident-ai/deepeval), an LLM testing framework that has revolutionized the way we approach prompt quality assurance.
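The pattern behind "unit testing LLMs" can be shown without any framework at all: call the model, score the output with a metric, and assert the score clears a threshold. A minimal self-contained sketch (the model call and the keyword-coverage metric below are stand-ins for illustration, not DeepEval's API):

```python
# Sketch of unit-testing an LLM output: a toy metric plus a threshold assertion.
# fake_llm and keyword_coverage are stand-ins, not DeepEval's API.

def keyword_coverage(answer: str, expected_keywords: list) -> float:
    """Fraction of expected keywords that appear in the answer (case-insensitive)."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_lower)
    return hits / len(expected_keywords)

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, Claude, Gemini, ...).
    return "Paris is the capital of France."

def test_capital_prompt():
    answer = fake_llm("What is the capital of France?")
    score = keyword_coverage(answer, ["Paris", "France"])
    assert score >= 0.7, f"keyword coverage too low: {score:.2f}"

test_capital_prompt()
```

In DeepEval, this same shape appears as test cases scored by built-in metrics (relevancy, faithfulness, etc.) and gated by a threshold, runnable under pytest.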
- Show HN: Ragas – the de facto open-source standard for evaluating RAG pipelines
Check out this instead: https://github.com/confident-ai/deepeval
It also has a native Ragas implementation, but supports all models.
- Show HN: Times faster LLM evaluation with Bayesian optimization
Fair question.
Evaluation refers to the phase after training where you check whether the training actually worked.
Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed for evaluation. Evaluation can be slow (might even be slower than training if you're finetuning on a small domain specific subset)!
So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them take advantage of the fact that many evaluation queries might be similar; they all try to evaluate every given query. And that's where this project might come in handy.
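The redundancy the comment points at can be sketched in a few lines: bucket near-duplicate queries and run the slow model only once per bucket, reusing the score for the rest. This is not the linked project's algorithm (that uses Bayesian optimization); the similarity key here is a deliberately crude word-set stand-in:

```python
import re

def similarity_key(query: str) -> str:
    # Crude key: the set of lowercase words. A real system would cluster
    # embeddings instead of comparing word sets.
    return " ".join(sorted(set(re.findall(r"[a-z0-9]+", query.lower()))))

def evaluate_with_dedup(queries, slow_eval):
    """Run slow_eval once per group of similar queries; reuse the score."""
    cache = {}    # similarity key -> score
    scores = []
    calls = 0
    for q in queries:
        key = similarity_key(q)
        if key not in cache:
            cache[key] = slow_eval(q)
            calls += 1
        scores.append(cache[key])
    return scores, calls

queries = [
    "What is the capital of France?",
    "what is the Capital of france",   # near-duplicate of the first
    "Explain gradient descent",
]
scores, calls = evaluate_with_dedup(queries, lambda q: len(q))
# Only two slow evaluations were needed for three queries.
```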
- Implemented 12+ LLM evaluation metrics so you don't have to
A link to a reddit post (with no discussion) which links to this repo
https://github.com/confident-ai/deepeval
- Show HN: I implemented a range of evaluation metrics for LLMs that runs locally
- These 5 Open Source AI Startups are changing the AI Landscape
Star DeepEval on GitHub and contribute to the advancement of LLM evaluation frameworks! 🌟
- FLaNK Stack Weekly 06 Nov 2023
- Why we replaced Pinecone with PGVector 😇
Pinecone, the leading closed-source vector database provider, is known for being fast, scalable, and easy to use. Its blazing-fast vector search makes it a popular choice for large-scale RAG applications. Our initial infrastructure for Confident AI, the world’s first open-source evaluation infrastructure for LLMs, used Pinecone to cluster LLM observability log data in production. However, after weeks of experimentation, we decided to replace it entirely with pgvector. Pinecone’s apparent simplicity is deceptive: it hides several complexities, particularly when integrating with existing data storage solutions. For example, it forces a complicated architecture, and its restrictive metadata storage capacity makes it troublesome for data-intensive workloads.
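The pgvector side of that trade-off is ordinary SQL living next to the rest of your data. A hypothetical sketch of the kind of setup described (table and column names are illustrative, not Confident AI's actual schema):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Logs, their metadata, and their embeddings live in one table.
CREATE TABLE llm_logs (
    id        bigserial PRIMARY KEY,
    metadata  jsonb,
    embedding vector(1536)
);

-- Approximate nearest-neighbour index using cosine distance.
CREATE INDEX ON llm_logs USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- Similarity search combined with a plain metadata filter, in one query.
SELECT id, metadata
FROM llm_logs
WHERE metadata->>'model' = 'gpt-4'
ORDER BY embedding <=> $1   -- $1: the query embedding
LIMIT 10;
```

The appeal of the swap shows up here: metadata filtering, joins, and vector search are a single query against a single database, rather than a separate vector store with its own metadata limits.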
- Show HN: Unit Testing for LLMs
- Show HN: DeepEval – Unit Testing for LLMs (Open Science)
chdb
- FLaNK Stack Weekly 06 Nov 2023
- DB Pilot: Query Postgres, files, S3 and more – all at once, from your laptop
Hey HN, creator of DB Pilot here.
I first announced DB Pilot on HN back in April: https://news.ycombinator.com/item?id=35761979.
Since then a lot has improved: More databases are supported, most of the product can now be used for free, and most importantly:
The app now comes with an analytics workspace powered by an embedded ClickHouse instance, running locally on your machine. This allows you to query local files, files on S3, PostgreSQL, SQLite & more - and all of those at once.
Embedding ClickHouse was possible thanks to chDB (https://github.com/chdb-io/chdb). A recent discussion on HN about it: https://news.ycombinator.com/item?id=37985005
- ChDB: Embedded OLAP SQL Engine Powered by ClickHouse
- DuckDB 0.9.0
I recommend using ClickHouse instead of DuckDB.
It has been around since 2016, and it covers and extends DuckDB's feature set by a huge margin. Worth noting that its MergeTree table format has never had a breaking change.
I'm tracking the progress of DuckDB and see that it is modeled after ClickHouse, but does not approach it in terms of feature completeness, stability, or performance.
The closest option to DuckDB is its self-contained version, clickhouse-local: https://clickhouse.com/blog/extracting-converting-querying-l... or an embedded version, chdb: https://github.com/chdb-io/chdb
- Is ClickHouse Moving Away from Open Source?
Different beasts, but if by any chance you love ClickHouse already and just want to run OLAP queries in-process, there's chdb: https://github.com/chdb-io/chdb
- ChDB: An Embedded OLAP SQL Engine Powered by ClickHouse
- PRQL, Pipelined Relational Query Language
> Can you embed it in Python as a library?
https://github.com/chdb-io/chdb
pip install chdb
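Once installed, usage follows chdb's README: a single in-process query call. A minimal sketch (the file path in the comment is illustrative):

```python
import chdb

# Run a ClickHouse SQL query in-process; the second argument is the
# output format.
print(chdb.query("SELECT 1 + 1", "CSV"))

# ClickHouse table functions work too, e.g. querying a local CSV file:
# print(chdb.query("SELECT count() FROM file('data.csv', 'CSV')", "CSV"))
```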
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
The new kid on the block that I prefer over DuckDB is chDB (https://github.com/chdb-io/chdb). It's embedded ClickHouse, so once you outgrow your laptop you can simply move to a full open-source OLAP server.
- ClickHouse-local and chdb performance issue on ClickBench Q23, Q28
What are some alternatives?
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
risingwave - SQL stream processing, analytics, and management. We decouple storage and compute to offer instant failover, dynamic scaling, speedy bootstrapping, and efficient joins.
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
openvino_notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
blog-examples
duckdb-wasm - WebAssembly version of DuckDB
chdb-cli - Simple CLI / REPL for chdb made in Python
pezzo - 🕹️ Open-source, developer-first LLMOps platform designed to streamline prompt design, version management, instant delivery, collaboration, troubleshooting, observability and more.
sqlite_blaster_python - A library for creating huge Sqlite indexes at breakneck speeds
tailspin - 🌀 A log file highlighter
glaredb - GlareDB: An analytics DBMS for distributed data