pytest-codspeed
Pytest plugin to create CodSpeed benchmarks (by CodSpeedHQ)
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (by carla-recourse)
| | pytest-codspeed | CARLA |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 67 | 274 |
| Growth | - | 0.0% |
| Activity | 8.3 | 0.0 |
| Latest commit | 5 days ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
pytest-codspeed
Posts with mentions or reviews of pytest-codspeed. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-10-31.
- CodSpeed – integrated CI tool for performance testing
- Pinpoint performance regressions with CI-integrated differential profiling

  > pytest-codspeed, plugin for pytest
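To ground the comparison, here is a minimal sketch of what using the plugin looks like, based on its documented marker workflow; the `fibonacci` function is just an illustrative workload, not part of the plugin.

```python
import pytest


def fibonacci(n: int) -> int:
    # Illustrative workload: iterative Fibonacci.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


@pytest.mark.benchmark
def test_fibonacci() -> None:
    # With pytest-codspeed installed, running `pytest --codspeed`
    # measures marked tests and reports the results to CodSpeed in CI.
    assert fibonacci(20) == 6765
```

The plugin also exposes a `benchmark` fixture compatible with the pytest-benchmark API, so calling `benchmark(fibonacci, 20)` inside a test measures just that call rather than the whole test body.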
CARLA
Posts with mentions or reviews of CARLA. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-09-29.
- [R] CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms

  Abstract: Counterfactual explanations provide a means for prescriptive model explanations by suggesting actionable feature changes (e.g., increase income) that allow individuals to achieve favourable outcomes in the future (e.g., insurance approval). Choosing an appropriate method is crucial for meaningful counterfactual explanations. As documented in recent reviews, the literature on available methods is growing quickly. Yet, in the absence of widely available open-source implementations, the decision in favour of certain models is primarily based on what is readily available. Going forward, to guarantee meaningful comparisons across explanation methods, we present CARLA (Counterfactual And Recourse Library), a Python library for benchmarking counterfactual explanation methods across both different data sets and different machine learning models. In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods. We have open-sourced CARLA and our experimental results on GitHub, making them available as competitive baselines. We welcome contributions from other research groups and practitioners.

  (A usage sketch of this workflow appears at the end of this section.)
- University of Tübingen Researchers Open-Source ‘CARLA’, A Python Library for Benchmarking Counterfactual Explanation Methods Across Data Sets and Machine Learning Models (4 min read | Paper | GitHub)
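As a rough illustration of the benchmarking workflow described in the abstract, here is a sketch modelled on the project's README. The names (`DataCatalog`, `MLModelCatalog`, `GrowingSpheres`, `get_counterfactuals`) come from one release of CARLA and have changed between versions, so treat this as an assumption-laden sketch and check the repository for the current API.

```python
# Sketch of CARLA's catalog-based workflow; class and method names
# follow one release of the project's README and may differ in yours.
from carla import DataCatalog, MLModelCatalog
from carla.recourse_methods import GrowingSpheres

# Load one of the integrated benchmark data sets.
dataset = DataCatalog("adult")

# Load a pretrained classifier (an artificial neural network) from the catalog.
model = MLModelCatalog(dataset, "ann")

# Pick factual instances whose outcomes we want to change.
factuals = dataset.raw.iloc[:10]

# Wrap the black-box model in a recourse method and generate counterfactuals.
recourse_method = GrowingSpheres(model)
counterfactuals = recourse_method.get_counterfactuals(factuals)
print(counterfactuals.head())
```

CARLA also ships a benchmarking component that scores generated counterfactuals against the integrated evaluation measures mentioned in the abstract, which is how the paper compares the 11 methods on a common footing.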