rexmex vs evaluate

| | rexmex | evaluate |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 276 | 1,830 |
| Growth | 0.0% | 3.6% |
| Activity | 5.5 | 6.1 |
| Latest commit | 9 months ago | 9 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
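The exact activity formula is not published here, but the idea of weighting recent commits more heavily can be sketched with an exponential decay. Everything in this snippet (the function name, the 90-day half-life) is a hypothetical illustration, not LibHunt's actual calculation:

```python
from datetime import date


def activity_score(commit_dates, today, half_life_days=90):
    # Hypothetical sketch: each commit contributes a weight that halves
    # every `half_life_days`, so recent commits count more than old ones.
    # The half-life value is an assumption for illustration only.
    return sum(
        0.5 ** ((today - d).days / half_life_days) for d in commit_dates
    )


# A commit made today contributes 1.0; one from ~90 days ago about 0.5.
score = activity_score([date(2024, 1, 1), date(2023, 10, 3)], date(2024, 1, 1))
```

Under this sketch, a project that stopped committing nine months ago (like rexmex above) would see its score decay toward zero, while a project with commits nine days ago keeps a high score.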
Posts mentioning rexmex and evaluate:
- [D] The MMSegmentation library from OpenMMLab appears to return the wrong results when computing basic image segmentation metrics such as the Jaccard index (IoU - intersection-over-union). It appears to compute recall (sensitivity) instead of IoU, which artificially inflates the performance metrics.
- [P] Releasing 🤗 Evaluate - an evaluation library for ML
- HuggingFace/evaluate: A library for easily evaluating ML models and datasets
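The metric mix-up described in the first post above is easy to demonstrate numerically: the Jaccard index (IoU) penalizes false positives, while recall does not, so reporting recall as IoU inflates the score whenever false positives exist. The counts below are toy values chosen for illustration:

```python
def iou_and_recall(tp, fp, fn):
    # Jaccard index (IoU): true positives over the union of prediction
    # and ground truth, so false positives lower the score.
    iou = tp / (tp + fp + fn)
    # Recall (sensitivity): ignores false positives entirely.
    recall = tp / (tp + fn)
    return iou, recall


# Toy segmentation counts with many false positives:
iou, rec = iou_and_recall(tp=80, fp=40, fn=20)
# IoU ≈ 0.571, recall = 0.80 — recall overstates segmentation quality.
```

A library that silently returns recall where IoU is expected would therefore report 0.80 instead of roughly 0.57 for this example, which matches the inflation the post describes.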
What are some alternatives?
torch-fidelity - High-fidelity performance metrics for generative models in PyTorch
CSrankings - A web app for ranking computer science departments according to their research output in selective venues, and for finding active faculty across a wide range of areas.
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.
EvalAI - Evaluating state of the art in AI
ranking - Learning to Rank in TensorFlow
avalanche - Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
semantic-kitti-api - SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
pycm - Multi-class confusion matrix library in Python