roboflow-100-benchmark

Code for replicating Roboflow 100 benchmark results and programmatically downloading benchmark datasets (by roboflow)
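
For programmatic downloads, the sketch below uses the standard roboflow pip package to pull a single dataset; the workspace and project slugs are placeholders rather than real RF100 identifiers, and the repo's own scripts remain the reference for downloading the full benchmark.

    # Minimal sketch: download one dataset with the roboflow pip package.
    # The workspace/project slugs below are placeholders, not actual RF100 IDs.
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")  # API key from your Roboflow account settings
    project = rf.workspace("rf100-workspace").project("example-dataset")  # placeholder slugs
    dataset = project.version(1).download("yolov5")  # export in YOLOv5 format to a local folder
    print(dataset.location)  # path to the downloaded images and labels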


roboflow-100-benchmark reviews and mentions

Posts with mentions or reviews of roboflow-100-benchmark. The most recent was on 2023-07-20.
  • AI That Teaches Other AI
    4 projects | news.ycombinator.com | 20 Jul 2023
    > Their SKILL tool involves a set of algorithms that make the process go much faster, they said, because the agents learn at the same time in parallel. Their research showed if 102 agents each learn one task and then share, the amount of time needed is reduced by a factor of 101.5 after accounting for the necessary communications and knowledge consolidation among agents.

    This is a really interesting idea. It's like the reverse of knowledge distillation (which I've been thinking about a lot[1]) where you have one giant model that knows a lot about a lot & you use that model to train smaller, faster models that know a lot about a little.

    Instead, if you could train a lot of models that know a lot about a little (which is a lot less computationally intensive because the problem space is so confined) and combine them into a generalized model, that'd be hugely beneficial.

    Unfortunately, after a bit of digging into the paper & Github repo[2], this doesn't seem to be what's happening at all.

    > The code will learn 102 small and separate heads (either a linear head or a linear head with a task bias), one for each task, in order. This step can be parallelized on multiple GPUs with one task per GPU. The heads will be saved in the weights folder. After that, the code will learn a task mapper (either GMMC or Mahalanobis) to assign images to tasks. Then, all images can be evaluated at the same time without a task label.

    So the knowledge isn't being combined into a generalized model (and the agents aren't learning from each other). They're just training a bunch of independent models for specific tasks & adding a model-selection step that maps an image to the most relevant "expert". My guess is you could do the same thing using CLIP vectors as the routing method to supervised models trained on specific datasets (we found that datasets largely live in distinct regions of CLIP-space[3]). A rough sketch of this routing idea follows the list of posts below.

    [1] https://github.com/autodistill/autodistill

    [2] https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learnin...

    [3] https://www.rf100.org

  • Roboflow 100: A New Object Detection Benchmark
    5 projects | news.ycombinator.com | 28 Dec 2022
  • [R] Roboflow 100: An open source object detection benchmark of 224,714 labeled images in novel domains to compare model performance
    2 projects | /r/MachineLearning | 1 Dec 2022
    I'm Jacob, one of the authors of Roboflow 100: A Rich Multi-Domain Object Detection Benchmark, and I am excited to share our work with the community. In object detection, researchers are primarily benchmarking their models on COCO, and in many ways it seems like a lot of these models are getting close to a saturation point. In practice, everyone is taking these models and fine-tuning them on their own custom dataset domains, which may vary from tagging swimming pools in Google Maps imagery to identifying defects in cell phones on an industrial line. We set out to collect a representative benchmark of these custom-domain problems by selecting from over 100,000 public projects on Roboflow Universe to form 100 semantically diverse object detection datasets. Our benchmark comprises 224,714 images, 11,170 labeling hours, and 829 classes from the community for benchmarking on novel tasks. We also tried the benchmark out on a few popular models, comparing YOLOv5, YOLOv7, and the zero-shot capabilities of GLIP.
    Use the benchmark here: https://github.com/roboflow-ai/roboflow-100-benchmark
    Paper: https://arxiv.org/pdf/2211.13523.pdf
    Or simply learn more here: https://www.rf100.org/
    An immense thanks to the community, like this one, for making this benchmark possible - we hope it moves the field forward! I'm around for any questions!
  • Introducing RF100: An open source object detection benchmark of 224,714 labeled images across 100 novel domains to compare model performance
    2 projects | /r/computervision | 29 Nov 2022
    Or simply learn more: https://www.rf100.org/
  • We took YOLOv5 and YOLOv7, trained them on 100 datasets, and compared their accuracy! 🔥 The results may surprise you.
    1 project | /r/computervision | 29 Nov 2022
    GitHub repository: https://github.com/roboflow-ai/roboflow-100-benchmark
    Blog post: https://blog.roboflow.com/roboflow-100/
    arXiv paper: https://arxiv.org/abs/2211.13523
  • Show HN: Real-World Datasets for Benchmarking Object Detection Models
    1 project | news.ycombinator.com | 29 Nov 2022
    Github: https://github.com/roboflow-ai/roboflow-100-benchmark

    At Roboflow, we've seen users fine-tune hundreds of thousands of computer vision models on custom datasets.

    We observed that there's a huge disconnect between the types of tasks people are actually trying to perform in the wild and the types of datasets researchers are benchmarking their models on.

    Datasets like MS COCO (with hundreds of thousands of images of common objects) are often used in research to compare models' performance, but then those models are used to find galaxies, look at microscope images, or detect manufacturing defects in the wild (often trained on small datasets containing only a few hundred examples). This leads to big discrepancies in models' stated and real-world performance.

    We set out to tackle this problem by creating a new set of datasets that mirror many of the same types of challenges that models will face in the real world. We compiled 100 datasets from our community spanning a wide range of domains, subjects, and sizes.

    We've benchmarked a few models (YOLOv5, YOLOv7, and GLIP) to start, but could use your help measuring the performance of others on this benchmark (check the GitHub for starter scripts showing how to pull the dataset, fine-tune models, and evaluate; a rough sketch of that workflow follows below). We're very interested to learn which models do best in which real-world scenarios & to give researchers a new tool to make their models more useful for solving real-world problems.
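
As a rough illustration of the fine-tune-and-evaluate loop described in the posts above (not the repo's own scripts), the sketch below assumes the ultralytics/yolov5 repo has been cloned locally and that each benchmark dataset has already been exported in YOLOv5 format under rf100/<name>/; all paths and hyperparameters are placeholders.

    # Hypothetical loop: fine-tune and evaluate YOLOv5 on each downloaded dataset.
    # Assumes ./yolov5 is a clone of https://github.com/ultralytics/yolov5 and each
    # dataset lives at ./rf100/<name>/data.yaml in YOLOv5 format.
    import subprocess
    from pathlib import Path

    for data_yaml in sorted(Path("rf100").glob("*/data.yaml")):
        name = data_yaml.parent.name
        # Fine-tune a small YOLOv5 checkpoint on this dataset.
        subprocess.run(
            ["python", "yolov5/train.py",
             "--img", "640", "--batch", "16", "--epochs", "100",
             "--data", str(data_yaml), "--weights", "yolov5s.pt",
             "--name", name],
            check=True,
        )
        # Evaluate the best checkpoint; val.py reports mAP@0.5 and mAP@0.5:0.95.
        subprocess.run(
            ["python", "yolov5/val.py",
             "--data", str(data_yaml),
             "--weights", f"yolov5/runs/train/{name}/weights/best.pt"],
            check=True,
        )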

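The CLIP-based routing idea from the "AI That Teaches Other AI" comment above could look roughly like the sketch below. It assumes OpenAI's clip package plus two hypothetical dictionaries built offline: centroids (one mean CLIP embedding per dataset) and experts (one fine-tuned detector per dataset); neither comes from any of the projects mentioned.

    # Hypothetical routing: embed an image with CLIP, pick the nearest per-dataset
    # centroid, and dispatch the image to that dataset's expert detector.
    import torch
    import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    def route(image_path, centroids, experts):
        """centroids: {name: 1xD unit-norm float32 tensor}; experts: {name: callable detector}."""
        image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
        with torch.no_grad():
            emb = model.encode_image(image).float()
            emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity
        best = max(centroids, key=lambda name: (emb @ centroids[name].to(device).T).item())
        return experts[best](image_path)  # run the matched expert on the original image
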

Stats

Basic roboflow-100-benchmark repo stats
Mentions: 8
Stars: 227
Activity: 0.6
Last commit: 7 months ago
