datasets vs evaluate
| | datasets | evaluate |
|---|---|---|
| Mentions | 15 | 3 |
| Stars | 18,345 | 1,803 |
| Growth | 1.5% | 3.8% |
| Activity | 9.5 | 5.2 |
| Latest commit | 7 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
datasets
- Mastering ROUGE Matrix: Your Guide to Large Language Model Evaluation for Summarization with Examples
- How to Train Large Models on Many GPUs?
- [D] Can we use Ray for distributed training on Vertex AI? Can someone provide examples? Also, which dataframe libraries do you use for training machine-learning models on huge datasets (100 GB+), since pandas can't handle data that large?
https://huggingface.co/docs/datasets backed with an Arrow file or buffer
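The linked docs describe datasets as backed by Arrow files that are memory-mapped from disk, which is why a 100 GB+ dataset does not have to fit in RAM. A minimal stdlib sketch of the memory-mapping idea itself (the file and its contents are invented for illustration; the real library handles this through Arrow):

```python
import mmap
import tempfile

# Write a small file standing in for a large on-disk dataset.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"row-0\nrow-1\nrow-2\n")
    path = f.name

# Memory-map the file: the OS pages bytes in on demand instead of
# loading the whole file into RAM -- the same mechanism Arrow-backed
# dataset files rely on.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    first_row = mm[: mm.find(b"\n")]

print(first_row.decode())
```

Because only the pages actually touched are read, iterating over a memory-mapped file scales to data far larger than available memory.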
- Need help with a data science project
- Is there a text evaluation metric that does not need reference text?
I'm looking for an automatic evaluation metric that can score the first text higher (since it's more grammatically correct/better for other reasons). All the metrics for NLG I found require some reference text to match the generated text with, which I don't have.
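One commonly used reference-free score is perplexity under a language model: more fluent, grammatical text tends to receive lower perplexity, and no reference text is needed. Given per-token log-probabilities from any LM (the numbers below are invented for illustration), it is just the exponential of the negative mean log-probability:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean(log p(token))); lower means the model
    finds the text more predictable/fluent."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs for a fluent vs. a garbled sentence.
fluent = [-1.2, -0.8, -1.0]
garbled = [-4.5, -3.9, -5.1]

print(perplexity(fluent), perplexity(garbled))
```

As far as I know, the evaluate library ships a perplexity metric built on this idea, using a pretrained model to supply the log-probabilities.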
- FauxPilot – an open-source GitHub Copilot server
And then pass that my_code.json as the dataset name.
- Hugging Face Introduces ‘Datasets’: A Lightweight Community Library for Natural Language Processing (NLP)
Code for https://arxiv.org/abs/2109.02846 found: https://github.com/huggingface/datasets
- Datasets: A Community Library for Natural Language Processing
evaluate
- [D] The MMSegmentation library from OpenMMLab appears to return the wrong results when computing basic image segmentation metrics such as the Jaccard index (IoU - intersection-over-union). It appears to compute recall (sensitivity) instead of IoU, which artificially inflates the performance metrics.
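The confusion reported above is easy to reproduce on a toy example: recall ignores false positives, so it is always at least as large as IoU and can look much better. A minimal sketch with hypothetical binary segmentation masks:

```python
def confusion(pred, target):
    """Count TP/FP/FN for binary masks given as flat 0/1 lists."""
    tp = sum(p and t for p, t in zip(pred, target))
    fp = sum(p and not t for p, t in zip(pred, target))
    fn = sum(t and not p for p, t in zip(pred, target))
    return tp, fp, fn

# Hypothetical prediction containing false positives.
pred   = [1, 1, 1, 1, 0, 0]
target = [1, 1, 0, 0, 1, 0]

tp, fp, fn = confusion(pred, target)
iou    = tp / (tp + fp + fn)   # intersection-over-union (Jaccard index)
recall = tp / (tp + fn)        # a.k.a. sensitivity; ignores the FPs

print(iou, recall)
```

Here IoU is 2/5 = 0.4 while recall is 2/3, so reporting recall in place of IoU inflates the score exactly as the post describes.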
- [P] Releasing 🤗 Evaluate - an evaluation library for ML
- HuggingFace/evaluate: A library for easily evaluating ML models and datasets
What are some alternatives?
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
torch-fidelity - High-fidelity performance metrics for generative models in PyTorch
datumaro - Dataset Management Framework, a Python library and a CLI tool to build, analyze and manage Computer Vision datasets.
EvalAI - Evaluating state of the art in AI
cypress-realworld-app - A payment application to demonstrate real-world usage of Cypress testing methods, patterns, and workflows.
avalanche - Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
edex-ui - A cross-platform, customizable science fiction terminal emulator with advanced monitoring & touchscreen support.
semantic-kitti-api - SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
first-contributions - 🚀✨ Help beginners to contribute to open source projects
pycm - Multi-class confusion matrix library in Python
frankmocap - A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator
rexmex - A general purpose recommender metrics library for fair evaluation.