deepchecks vs giskard
| | deepchecks | giskard |
|---|---|---|
| Mentions | 15 | 7 |
| Stars | 3,350 | 3,111 |
| Stars growth | 3.2% | 15.7% |
| Activity | 8.2 | 10.0 |
| Latest commit | 10 days ago | 4 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deepchecks
- Detect, Defend, Prevail: Payments Fraud Detection using ML & Deepchecks
Also, if you have any questions about it, you can go directly to the discussions section of their GitHub repo.
- Deepchecks: Open-source ML testing and validation library
- Deepchecks' New Open Source is on Product Hunt, and Needs Your Help
GitHub for Deepchecks: https://github.com/deepchecks/deepchecks
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?
- Data Validation tools
I use DeepChecks for my continuous training pipelines. You can check out the Data Integrity Checks.
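For anyone curious what that looks like in practice, here is a minimal sketch of a data-integrity run with the deepchecks tabular API; the iris demo data stands in for a real training set:

```python
# A minimal sketch of a data-integrity run with the deepchecks tabular API.
# The sklearn iris demo data is a placeholder for a real training set.
from sklearn.datasets import load_iris
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import data_integrity

df = load_iris(as_frame=True).frame           # features plus a "target" column
ds = Dataset(df, label="target")              # tell deepchecks which column is the label

suite = data_integrity()                      # bundled single-dataset integrity checks
result = suite.run(ds)
result.save_as_html("integrity_report.html")  # or result.show() in a notebook
```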
- Deepchecks
- deepchecks: Test Suites for Validating ML Models & Data. Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort.
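As a rough illustration of that "minimal effort" claim, this hedged sketch runs the built-in full suite (integrity, drift, and model-performance checks together) on a throwaway sklearn model; the dataset and model here are arbitrary placeholders:

```python
# A hedged sketch of end-to-end validation with deepchecks' full suite,
# using a placeholder sklearn model and the iris demo data.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_ds = Dataset(pd.concat([X_train, y_train], axis=1), label="target")
test_ds = Dataset(pd.concat([X_test, y_test], axis=1), label="target")

result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("full_suite_report.html")
```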
- QA help comes in many forms: Sometimes, from your heavily funded competitor
- Deepchecks: An open-source tool for testing machine learning models and data
- Test suites for machine learning models in Python (New OSS package)
And if you liked the project, we'll be delighted to count you as one of our stargazers at https://github.com/deepchecks/deepchecks/stargazers!
giskard
- Show HN: Evaluate LLM-based RAG Applications with automated test set generation
- Why is it so important to evaluate Large Language Models (LLMs)? 🤯🔥
Unchecked biases in LLMs can inadvertently perpetuate harmful stereotypes or produce misleading information, which in turn can lead to severe consequences. In this article, we'll demonstrate how to evaluate your LLMs using Giskard, an open-source model-testing framework. 🤓
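As a taste of what such an evaluation looks like, below is a hedged sketch of Giskard's LLM scan flow; `answer_question` is a hypothetical stand-in for your own LLM app, and the LLM-assisted detectors typically also need an LLM API key configured in the environment:

```python
# A hedged sketch of scanning an LLM app with giskard (v2.x-style API).
# answer_question() is a hypothetical placeholder for your own chain;
# the LLM-assisted detectors expect an LLM API key in the environment.
import pandas as pd
import giskard

def answer_question(prompt: str) -> str:
    # hypothetical placeholder: call your RAG chain / LLM here
    return "stub answer"

def batch_predict(df: pd.DataFrame) -> list[str]:
    # giskard calls the wrapped model with a DataFrame of inputs
    return [answer_question(q) for q in df["question"]]

model = giskard.Model(
    model=batch_predict,
    model_type="text_generation",
    name="QA assistant",
    description="Answers user questions about our docs",
    feature_names=["question"],
)

report = giskard.scan(model)   # probes for injection, harmfulness, etc.
report.to_html("llm_scan.html")
```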
- The testing framework dedicated to ML models, from tabular to LLMs
- Show HN: Python library to scan ML models for vulnerabilities
Hi! I’ve been working on this automatic scanner for ML models to detect issues like underperforming data slices, overconfidence in predictions, robustness problems, and others. It supports all main Python ML frameworks (sklearn, torch, xgboost, …) and integrates with the quality assurance solution we are building at Giskard AI (https://giskard.ai) to systematically test models before putting them in production.
It is still in beta, and I would love to hear your feedback if you have time to try it out.
We have quite a few tutorials in the docs, with ready-made Colab notebooks to make it easy to get started.
If you are interested in the code:
https://github.com/Giskard-AI/giskard/tree/main/python-clien...
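To give a sense of the workflow, here is a hedged sketch of scanning an sklearn classifier with the giskard Python API; the demo data comes from sklearn, and the exact wrapper arguments may vary between releases:

```python
# A hedged sketch of the scan flow described above, on an sklearn classifier.
# The breast-cancer demo data is a placeholder for a real dataset.
import giskard
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer(as_frame=True)
df = data.frame                                    # features plus a "target" column
clf = LogisticRegression(max_iter=5000).fit(data.data, data.target)

wrapped_data = giskard.Dataset(df=df, target="target")
wrapped_model = giskard.Model(
    model=clf.predict_proba,                       # prediction function to probe
    model_type="classification",
    classification_labels=list(clf.classes_),
    feature_names=list(data.feature_names),
)

results = giskard.scan(wrapped_model, wrapped_data)
results.to_html("scan_report.html")                # report of detected issues
```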
- [P] Open-source solution to scan AI models for vulnerabilities
Sure! Benjamini-Hochberg is a very good recommendation: it's much simpler than the alpha-investing procedures I mentioned, which makes it easy to implement in our case. I will give it a try; if there's an easy way to set this up, it could be included in one of the next releases. I'll let you know. FYI, I added this to our issue tracker.
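For readers following along, the Benjamini-Hochberg step-up procedure under discussion is simple enough to sketch directly (a generic illustration, not Giskard code):

```python
# A self-contained sketch of the Benjamini-Hochberg step-up procedure:
# sort the m p-values, find the largest i with p_(i) <= (i/m) * alpha,
# and reject the i smallest; this controls the false discovery rate at alpha.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                       # indices of p-values, ascending
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    if not below.any():
        return np.zeros(m, dtype=bool)          # nothing rejected
    k = np.max(np.nonzero(below)[0])            # largest index meeting the bound
    rejected = np.zeros(m, dtype=bool)
    rejected[order[: k + 1]] = True             # reject the k+1 smallest p-values
    return rejected

# Example: only the clearly small p-values survive the correction.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]))
```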
- [R] LMFlow Benchmark: An Automatic Evaluation Framework for Open-Source LLMs
This is super interesting! Thanks for sharing. We're also working in this research area from an open-source angle (https://github.com/Giskard-AI/giskard)
- How are you testing your ML Systems?
Code repository: https://github.com/Giskard-AI/giskard
What are some alternatives?
great_expectations - Always know what to expect from your data.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
evidently - Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
PyBeam-QA - A simple GUI program for performing radiotherapy QA
model-validation-toolkit - Model Validation Toolkit is a collection of tools to assist with validating machine learning models prior to deploying them to production and monitoring them after deployment to production.
awesome-ai-safety - 📚 A curated list of papers & technical articles on AI Quality & Safety
feast - Feature Store for Machine Learning
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
postgresml - The GPU-powered AI application database. Get your app to market faster using the simplicity of SQL and the latest NLP, ML + LLM models.
MindsDB - The platform for customizing AI from enterprise data
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
lm-evaluation-harness - A framework for few-shot evaluation of language models.