pytest-benchmark vs CARLA

| | pytest-benchmark | CARLA |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 1,200 | 265 |
| Growth (stars, month over month) | - | 1.1% |
| Activity | 6.0 | 0.0 |
| Last commit | about 2 months ago | 7 months ago |
| Language | Python | Python |
| License | BSD 2-clause "Simplified" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytest-benchmark

Posts mentioning pytest-benchmark:

- Pinpoint performance regressions with CI-Integrated differential profiling
- Investigating Pydantic v2's Bold Performance Claims
  To test this, we will set up some benchmarks using pytest-benchmark, some sample data with a simple schema, and compare results between Python's dataclass, Pydantic v1, and Pydantic v2 (a sketch of such a benchmark follows below).
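The article's exact benchmark code is not reproduced in the excerpt above, but a minimal sketch of what such a comparison looks like with pytest-benchmark's `benchmark` fixture might be the following; the `User` schema and the sample payload are invented here for illustration:

```python
# Minimal sketch (not the article's code): time constructing the same record
# as a plain dataclass vs. validating it with a Pydantic model.
from dataclasses import dataclass

import pydantic


@dataclass
class UserDC:
    id: int
    name: str
    email: str


class UserModel(pydantic.BaseModel):
    id: int
    name: str
    email: str


PAYLOAD = {"id": 1, "name": "Ada", "email": "ada@example.com"}


def test_dataclass(benchmark):
    # pytest-benchmark's `benchmark` fixture calls the target repeatedly
    # and records timing statistics for the report table.
    benchmark(lambda: UserDC(**PAYLOAD))


def test_pydantic(benchmark):
    # Pydantic v2 validates via model_validate(); on v1 the equivalent call
    # is parse_obj(). Run once per installed major version to compare them.
    if pydantic.VERSION.startswith("2"):
        benchmark(lambda: UserModel.model_validate(PAYLOAD))
    else:
        benchmark(lambda: UserModel.parse_obj(PAYLOAD))
```

Running `pytest` on such a file prints per-test timing statistics (min, mean, stddev, rounds), which is the kind of table a dataclass vs. Pydantic v1 vs. v2 comparison is read from.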
CARLA

Posts mentioning CARLA:

- [R] CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
  Abstract: Counterfactual explanations provide means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favourable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is a crucial aspect for meaningful counterfactual explanations. As documented in recent reviews, the literature on available methods is growing quickly. Yet, in the absence of widely available open-source implementations, the decision in favour of certain models is primarily based on what is readily available. Going forward, to guarantee meaningful comparisons across explanation methods, we present CARLA (Counterfactual And Recourse Library), a Python library for benchmarking counterfactual explanation methods across both different data sets and different machine learning models. In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods. We have open-sourced CARLA and our experimental results on GitHub, making them available as competitive baselines. We welcome contributions from other research groups and practitioners.
- University of Tübingen Researchers Open-Source ‘CARLA’, A Python Library for Benchmarking Counterfactual Explanation Methods Across Data Sets and Machine Learning Models (Paper | GitHub)
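CARLA's own API is not quoted in these posts, so the following is a deliberately library-free toy sketch of the idea the abstract describes: a counterfactual explanation as the smallest actionable feature change (here, increasing income) that flips a fixed model's decision. The linear "approval model" and all numbers are invented for illustration:

```python
# Toy example (not CARLA's API): find the smallest "increase income" change
# that flips a fixed linear classifier from reject to approve.
import numpy as np

# Hypothetical approval model: score = w . x + b, approve if score >= 0.
# Features: [income_k_eur, debt_k_eur]
w = np.array([0.08, -0.15])
b = -3.0


def approved(x: np.ndarray) -> bool:
    return float(w @ x + b) >= 0.0


applicant = np.array([30.0, 5.0])   # rejected: 0.08*30 - 0.15*5 - 3 = -1.35
assert not approved(applicant)

# Counterfactual search restricted to one actionable feature (income):
# raise income in small steps until the decision flips.
counterfactual = applicant.copy()
while not approved(counterfactual):
    counterfactual[0] += 0.5        # +500 EUR per step

print(f"approved once income >= {counterfactual[0]:.1f}k EUR "
      f"(change of +{counterfactual[0] - applicant[0]:.1f}k)")
```

A benchmarking library such as CARLA, as the abstract describes, evaluates many such recourse methods against shared data sets, models, and standardized evaluation measures rather than a single hand-rolled search like this one.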
What are some alternatives?
pytest-codspeed - Pytest plugin to create CodSpeed benchmarks
carla - Open-source simulator for autonomous driving research.
pydantic-core - Core validation logic for pydantic written in rust
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
action - GitHub Actions for running CodSpeed in your CI
rliable - [NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds.
cloud_benchmarker - Cloud Benchmarker automates performance testing of cloud instances, offering insightful charts and tracking over time.
alibi - Algorithms for explaining machine learning models
pyperf - Toolkit to run Python benchmarks
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
DiCE - Generate Diverse Counterfactual Explanations for any machine learning model.