| | ethics | giskard |
|---|---|---|
| Mentions | 1 | 8 |
| Stars | 265 | 4,359 |
| Growth | 0.0% | 2.3% |
| Activity | 0.0 | 9.8 |
| Latest commit | almost 2 years ago | 6 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ethics
- [P] Request: Any datasets of morality stories?
Code for https://arxiv.org/abs/2008.02275 found: https://github.com/hendrycks/ethics
giskard
- Creation of the ApostropheCMS Documentation Chatbot
When originally designing the chatbot, we opted to build it in Python, even though we are a heavily JavaScript-oriented shop. This decision was driven by the availability of more mature analytic tools in Python for objectively testing chatbot hallucination and accuracy. So far, we've been evaluating answers qualitatively, but we plan to incorporate a tool like Giskard to bring a more quantitative approach to our evaluations. This step is crucial and, anecdotally, one that is often overlooked in production chatbots.
- Show HN: Evaluate LLM-based RAG Applications with automated test set generation
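For readers curious what that automated test-set workflow looks like in practice, a rough sketch against giskard's RAG evaluation toolkit is below. The `KnowledgeBase`, `generate_testset`, and `evaluate` entry points match recent giskard releases, but the exact signatures, the `answer_fn` callback contract, and the `my_chatbot` object are assumptions to check against the docs (the toolkit also needs an LLM provider configured for question generation):

```python
import pandas as pd
from giskard.rag import KnowledgeBase, generate_testset, evaluate

# Build a knowledge base from documentation chunks (the column name is ours)
docs = pd.DataFrame({"text": [
    "Apostrophe modules are configured in app.js ...",
    "Widgets render editable areas of content ...",
]})
knowledge_base = KnowledgeBase(docs)

# Synthesize question/reference-answer pairs from the knowledge base
testset = generate_testset(
    knowledge_base,
    num_questions=30,
    agent_description="A chatbot answering questions about the ApostropheCMS docs",
)

# Wrap the chatbot under test; the (question, history) contract is assumed
def answer_fn(question, history=None):
    return my_chatbot.ask(question)  # my_chatbot is a placeholder RAG pipeline

report = evaluate(answer_fn, testset=testset, knowledge_base=knowledge_base)
report.to_html("rag_eval_report.html")
```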
- Why is it so important to evaluate Large Language Models (LLMs)? 🤯🔥
Unchecked biases in LLMs can inadvertently perpetuate harmful stereotypes or produce misleading information, which in turn can have severe consequences. In this article, we demonstrate how to evaluate your LLMs using Giskard, an open-source model-testing framework. 🤓
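The evaluation loop the article describes boils down to wrapping the model behind a prediction function and calling the scanner. A minimal sketch, assuming giskard's `text_generation` model type and wrapper arguments as documented (the `answer_question` helper is a stand-in for your own LLM call):

```python
import giskard

def answer_question(prompt: str) -> str:
    # Stand-in for the actual LLM or RAG pipeline call
    return "..."

# giskard passes a DataFrame of inputs; return one generated answer per row
def predict(df):
    return [answer_question(q) for q in df["question"]]

model = giskard.Model(
    model=predict,
    model_type="text_generation",
    name="Docs assistant",
    description="Answers user questions about the product documentation.",
    feature_names=["question"],
)

# Runs LLM-specific detectors: harmfulness, hallucination, prompt injection, ...
report = giskard.scan(model)
report.to_html("llm_scan_report.html")
```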
- The testing framework dedicated to ML models, from tabular to LLMs
- Show HN: Python library to scan ML models for vulnerabilities
Hi! I've been working on this automatic scanner for ML models. It detects issues like underperforming data slices, overconfident predictions, and robustness problems, among others. It supports all the main Python ML frameworks (sklearn, torch, xgboost, …) and integrates with the quality-assurance solution we are building at Giskard AI (https://giskard.ai) to systematically test models before putting them into production.
It is still in beta, and I would love to hear your feedback if you have the time to try it out.
We have quite a few tutorials in the docs with ready-made colab notebooks to make it easy to get started.
If you are interested in the code:
https://github.com/Giskard-AI/giskard/tree/main/python-clien...
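To give a feel for the workflow, here is a minimal sketch of a scan on a stock sklearn classifier. It follows the `Model`/`Dataset`/`scan` pattern from the giskard docs, but treat the exact wrapper arguments as assumptions and defer to the tutorials mentioned above:

```python
import giskard
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Any sklearn/torch/xgboost model works; a stock sklearn classifier for brevity
data = load_breast_cancer(as_frame=True)
df = data.frame  # features plus a "target" column
feature_cols = [c for c in df.columns if c != "target"]
clf = RandomForestClassifier(random_state=0).fit(df[feature_cols], df["target"])

# The scanner calls this with a DataFrame of features and expects probabilities
def predict(batch: pd.DataFrame):
    return clf.predict_proba(batch[feature_cols])

model = giskard.Model(
    model=predict,
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=feature_cols,
)
dataset = giskard.Dataset(df, target="target")

# Scans for underperforming slices, overconfidence, robustness issues, ...
results = giskard.scan(model, dataset)
results.to_html("scan_report.html")
```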
- [P] Open-source solution to scan AI models for vulnerabilities
Sure! Benjamini-Hochberg is a very good recommendation; it's much simpler than the alpha-investing procedures I mentioned, which makes it easy to implement in our case. I'll give it a try, and if there's a straightforward way to set it up, it could be included in one of the next releases. I'll let you know. FYI, I added this to our issue tracker.
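For context, the Benjamini-Hochberg step-up procedure is only a few lines on its own; a minimal, self-contained sketch (how it would hook into the scanner's statistical tests is not shown here):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean mask of rejected hypotheses, controlling the FDR at `alpha`."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                         # ascending p-values
    thresholds = alpha * np.arange(1, m + 1) / m  # BH critical values k/m * alpha
    below = p[order] <= thresholds
    k = below.nonzero()[0].max() + 1 if below.any() else 0  # largest k passing
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                    # reject the k smallest p-values
    return rejected

# e.g. rejects the first two hypotheses at FDR 0.05
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))  # [ True  True False False]
```

In practice, statsmodels ships the same procedure as `multipletests(p, method="fdr_bh")`, which avoids reimplementing it.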
- [R] LMFlow Benchmark: An Automatic Evaluation Framework for Open-Source LLMs
This is super interesting! Thanks for sharing. We're also working in this research field from an open-source angle (https://github.com/Giskard-AI/giskard).
- How are you testing your ML Systems?
Code repository: https://github.com/Giskard-AI/giskard
What are some alternatives?
ToolEmu - [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use
deepchecks - Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.
moonwatcher - Evaluation & testing framework for computer vision models
burr - Build applications that make decisions (chatbots, agents, simulations, etc...). Monitor, trace, persist, and execute on your own infrastructure.
natural-adv-examples - A Harder ImageNet Test Set (CVPR 2021)
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.