roboflow-100-benchmark VS sahi

Compare roboflow-100-benchmark vs sahi and see what their differences are.

                 roboflow-100-benchmark   sahi
Mentions         8                        11
Stars            227                      3,580
Stars growth     4.0%                     2.3%
Activity         0.6                      7.4
Last commit      6 months ago             4 days ago
Language         Jupyter Notebook         Python
License          MIT License              MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

roboflow-100-benchmark

Posts with mentions or reviews of roboflow-100-benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-20.
  • AI That Teaches Other AI
    4 projects | news.ycombinator.com | 20 Jul 2023
    > Their SKILL tool involves a set of algorithms that make the process go much faster, they said, because the agents learn at the same time in parallel. Their research showed if 102 agents each learn one task and then share, the amount of time needed is reduced by a factor of 101.5 after accounting for the necessary communications and knowledge consolidation among agents.

    This is a really interesting idea. It's like the reverse of knowledge distillation (which I've been thinking about a lot[1]) where you have one giant model that knows a lot about a lot & you use that model to train smaller, faster models that know a lot about a little.

    Instead, if you could train a lot of models that each know a lot about a little (which is far less computationally intensive because the problem space is so confined) and combine them into a generalized model, that'd be hugely beneficial.

    Unfortunately, after a bit of digging into the paper & Github repo[2], this doesn't seem to be what's happening at all.

    > The code will learn 102 small and separate heads (either a linear head or a linear head with a task bias) for each task respectively, in order. This step can be parallelized on multiple GPUs with one task per GPU. The heads will be saved in the weight folder. After that, the code will learn a task mapper (either GMMC or Mahalanobis) to distinguish images task-wise. Then, all images will be evaluated at the same time without a task label.

    So the knowledge isn't being combined (and the agents aren't learning from each other) into a generalized model. They're just training a bunch of independent models for specific tasks & adding a model-selection step that maps an image to the most relevant "expert". My guess is you could do the same thing using CLIP vectors as the routing method to supervised models trained on specific datasets (we found that datasets largely live in distinct regions of CLIP-space[3]).

    [1] https://github.com/autodistill/autodistill

    [2] https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learnin...

    [3] https://www.rf100.org
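
    To make the routing idea concrete, here's a rough sketch (my own, not from the paper), assuming CLIP image embeddings have already been computed and L2-normalized:

    ```python
    import numpy as np

    # Sketch of the routing idea above: one independent "expert" model per
    # dataset, with CLIP embeddings used only to decide which expert to run.

    def build_centroids(embeddings_by_dataset: dict) -> dict:
        """Average each dataset's CLIP embeddings into a single routing centroid."""
        centroids = {}
        for name, embs in embeddings_by_dataset.items():
            c = embs.mean(axis=0)
            centroids[name] = c / np.linalg.norm(c)
        return centroids

    def route(query_emb: np.ndarray, centroids: dict) -> str:
        """Return the dataset whose centroid is most cosine-similar to the query."""
        return max(centroids, key=lambda name: float(query_emb @ centroids[name]))

    # Usage (experts is a hypothetical {dataset_name: model} mapping,
    # clip_embed a hypothetical embedding function):
    #   expert = experts[route(clip_embed(image), centroids)]
    #   prediction = expert.predict(image)
    ```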

  • Roboflow 100: A New Object Detection Benchmark
    5 projects | news.ycombinator.com | 28 Dec 2022
  • [R] Roboflow 100: An open source object detection benchmark of 224,714 labeled images in novel domains to compare model performance
    2 projects | /r/MachineLearning | 1 Dec 2022
    I'm Jacob, one of the authors of Roboflow 100: A Rich Multi-Domain Object Detection Benchmark, and I am excited to share our work with the community.

    In object detection, researchers benchmark their models primarily on COCO, and in many ways it seems like a lot of these models are getting close to a saturation point. In practice, everyone takes these models and fine-tunes them on their own custom dataset domains, which may vary from tagging swimming pools in Google Maps imagery to identifying defects in cell phones on an industrial line.

    We collected a representative benchmark of these custom-domain problems by selecting 100 semantically diverse object detection datasets from over 100,000 public projects on Roboflow Universe. Our benchmark comprises 224,714 images, 11,170 labeling hours, and 829 classes from the community for benchmarking on novel tasks. We also tried out the benchmark on a few popular models, comparing YOLOv5, YOLOv7, and the zero-shot capabilities of GLIP.

    Use the benchmark here: https://github.com/roboflow-ai/roboflow-100-benchmark
    Paper: https://arxiv.org/pdf/2211.13523.pdf
    Or simply learn more: https://www.rf100.org/

    An immense thanks to the community, like this one, for making this benchmark possible - we hope it moves the field forward! I'm around for any questions!
  • Introducing RF100: An open source object detection benchmark of 224,714 labeled images across 100 novel domains to compare model performance
    2 projects | /r/computervision | 29 Nov 2022
    Or simply learn more: https://www.rf100.org/
  • We took YOLOv5 and YOLOv7, trained them on 100 datasets, and compared their accuracy! 🔥 The results may surprise you.
    1 project | /r/computervision | 29 Nov 2022
    GitHub repository: https://github.com/roboflow-ai/roboflow-100-benchmark
    Blog post: https://blog.roboflow.com/roboflow-100/
    arXiv paper: https://arxiv.org/abs/2211.13523
  • Show HN: Real-World Datasets for Benchmarking Object Detection Models
    1 project | news.ycombinator.com | 29 Nov 2022
    Github: https://github.com/roboflow-ai/roboflow-100-benchmark

    At Roboflow, we've seen users fine-tune hundreds of thousands of computer vision models on custom datasets.

    We observed that there's a huge disconnect between the types of tasks people are actually trying to perform in the wild and the types of datasets researchers are benchmarking their models on.

    Datasets like MS COCO (with hundreds of thousands of images of common objects) are often used in research to compare models' performance, but then those models are used to find galaxies, look at microscope images, or detect manufacturing defects in the wild (often trained on small datasets containing only a few hundred examples). This leads to big discrepancies in models' stated and real-world performance.

    We set out to tackle this problem by creating a new set of datasets that mirror many of the same types of challenges that models will face in the real world. We compiled 100 datasets from our community spanning a wide range of domains, subjects, and sizes.

    We've benchmarked a couple of models (YOLOv5, YOLOv7, and GLIP) to start, but could use your help measuring the performance of others on this benchmark (check the GitHub for starter scripts showing how to pull the dataset, fine-tune models, and evaluate). We're very interested to learn which models do best in which real-world scenarios & to give researchers a new tool to make their models more useful for solving real-world problems.
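
    For reference, a hedged sketch of pulling one RF100 dataset with the roboflow pip package (the workspace and project slugs below are placeholders; the repo's starter scripts list the real ones):

    ```python
    from roboflow import Roboflow  # pip install roboflow

    # Placeholder API key and project slug; see the benchmark repo's
    # scripts for the actual RF100 workspace/project names.
    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("roboflow-100").project("some-rf100-dataset")
    dataset = project.version(1).download("yolov5")  # YOLOv5-format export
    print(dataset.location)  # local path to the downloaded dataset
    ```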

sahi

Posts with mentions or reviews of sahi. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-22.
  • How to Detect Small Objects
    3 projects | dev.to | 22 Apr 2024
    An alternative to this is to leverage existing object detection, apply the model to patches or slices of fixed size in our image, and then stitch the results together. This is the idea behind Slicing-Aided Hyper Inference!
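
    A minimal sketch of that pattern using sahi's documented API (the checkpoint path, thresholds, and slice sizes are placeholders):

    ```python
    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction

    # Wrap an existing detector (here an Ultralytics YOLOv8 checkpoint;
    # the path and thresholds are placeholders).
    detection_model = AutoDetectionModel.from_pretrained(
        model_type="yolov8",
        model_path="yolov8n.pt",
        confidence_threshold=0.3,
        device="cpu",
    )

    # Run the detector on overlapping 512x512 slices and merge the results.
    result = get_sliced_prediction(
        "large_image.jpg",
        detection_model,
        slice_height=512,
        slice_width=512,
        overlap_height_ratio=0.2,
        overlap_width_ratio=0.2,
    )
    result.export_visuals(export_dir="out/")
    ```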
  • Small-Object Detection using YOLOv8
    1 project | /r/computervision | 15 Aug 2023
    Hi All, I am trying to detect defects in images using YOLOv8, where some of the classes (defectType1, defectType2) have very small bounding boxes and some have large bounding boxes (defectType3, defectType4). Real-time operation is also desired (at least 5 Hz on a Jetson Xavier). What I have done so far: I am primarily trying to use the SAHI technique (Slicing Aided Hyper Inference).
  • Changing labels of default YOLOv5 model
    2 projects | /r/learnmachinelearning | 12 Jul 2023
    I am using the default YOLOv5m6 model here with the sahi/yolov5 library for my object detection project. I want to change just some of the labels - for example, when YOLO detects a human, I want it to label the human as "threat", not "person". Is there any way I can do this by just changing some code, or do I have to retrain the model from scratch with changed labels?
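
    One low-effort option, sketched here under the assumption of a hub-loaded YOLOv5 (not the sahi/yolov5 wrapper specifically): remap the class-name table so detections are merely relabeled, with no retraining.

    ```python
    import torch

    # Load the stock YOLOv5m6 checkpoint from the Ultralytics hub.
    model = torch.hub.load("ultralytics/yolov5", "yolov5m6", pretrained=True)

    # model.names maps class indices to labels; COCO index 0 is "person".
    # Remapping it only changes what a detection is *called*, not what
    # the network actually detects.
    model.names[0] = "threat"

    results = model("street_scene.jpg")  # placeholder image path
    results.print()
    ```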
  • Which Azure service to host this ML model
    1 project | /r/AZURE | 29 May 2023
    I need to execute this model https://github.com/obss/sahi upon an HTTP request. I will need between 32GB and 128GB of RAM (depending on the request). Also, I will only receive this request once or twice a week (they are not predefined dates). Each process may take a few hours.
  • Library for chopping image in pieces for training
    1 project | /r/deeplearning | 9 May 2023
    https://github.com/obss/sahi should do the job
  • Semantic Segmentation with 2048x1024 images
    1 project | /r/computervision | 5 Mar 2023
    I think you have multiple options: why run inference at this large a resolution? Why not run at 1024x512 or smaller? Use a smaller model that needs less memory, e.g. ENet, ERFNet, BiSeNet, etc. Otherwise, patch-based inference is the way to go; there is a nice library, but it's also easy to implement yourself: https://github.com/obss/sahi
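
    If you do roll it yourself, the core loop is small. A minimal sketch for non-overlapping tiles (`predict` is an assumed segmentation callable; a real implementation would pad edge tiles to the model's input size and blend overlapping predictions at the seams):

    ```python
    import numpy as np

    def patch_inference(image: np.ndarray, predict, patch: int = 512) -> np.ndarray:
        """Apply `predict` (HxWxC tile -> HxW label map) tile by tile and stitch."""
        h, w = image.shape[:2]
        out = np.zeros((h, w), dtype=np.int64)
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                tile = image[y:y + patch, x:x + patch]
                # Edge tiles may be smaller than `patch`; write back only
                # the region the tile actually covers.
                out[y:y + tile.shape[0], x:x + tile.shape[1]] = predict(tile)
        return out
    ```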
  • How to convert big TIF image to smaller jpgs
    1 project | /r/computervision | 12 Jan 2023
    I have the EXACT thing! The lib's GitHub: https://github.com/obss/sahi
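
    For reference, a sketch of doing this with sahi's documented slice_image helper (paths, tile sizes, and the output extension are placeholders):

    ```python
    from sahi.slicing import slice_image

    # Cut a large image into 1024x1024 tiles with 10% overlap and write
    # the tiles to disk as JPEGs.
    slice_image(
        image="huge_scan.tif",
        output_file_name="huge_scan",
        output_dir="tiles/",
        slice_height=1024,
        slice_width=1024,
        overlap_height_ratio=0.1,
        overlap_width_ratio=0.1,
        out_ext=".jpg",
    )
    ```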
  • Roboflow 100: A New Object Detection Benchmark
    5 projects | news.ycombinator.com | 28 Dec 2022
    Good idea. I haven’t looked too closely yet at the “hard” datasets.

    We originally considered “fixing” the labels on these datasets by hand, but ultimately decided that label error is one of the challenges “real world” datasets have that models should work to become more robust against. There is some selection bias in that we did make sure that the datasets we chose passed the eye test (in other words, it looked like the user spent a considerable amount of time annotating & a sample of the images looked like they labeled some object of interest).

    For aerial images in particular my guess would be that these models suffer from the “small object problem”[1] where the subjects are tiny compared to the size of the image. Trying a sliding window based approach like SAHI[2] on them would probably produce much better results (at the expense of much lower inference speed).

    [1] https://blog.roboflow.com/detect-small-objects/

    [2] https://github.com/obss/sahi

  • Diffusion model for synthetic data generation
    1 project | /r/deeplearning | 17 Oct 2022
    I am not very experienced, but do I understand correctly that the problem is the size of the image? If so, have you heard of sahi?
  • Which model is best for detecting small objects? Yolov3? MaskRCNN, Faster-RCNN?
    2 projects | /r/computervision | 26 May 2022
    Try slicing and yolov4. https://github.com/obss/sahi

What are some alternatives?

When comparing roboflow-100-benchmark and sahi you can also consider the following projects:

mmdetection - OpenMMLab Detection Toolbox and Benchmark

Shared-Knowledge-Lifelong-Learning - [TMLR] Lightweight Learner for Shared Knowledge Lifelong Learning

PixelLib - Visit PixelLib's official documentation https://pixellib.readthedocs.io/en/latest/

make-sense - Free to use online tool for labelling photos. https://makesense.ai

darknet - YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet )

roboflow-100-benchmark - Code for replicating Roboflow 100 benchmark results and programmatically downloading benchmark datasets [Moved to: https://github.com/roboflow/roboflow-100-benchmark]

mask-rcnn - Mask-RCNN training and prediction in MATLAB for Instance Segmentation

fasterrcnn-pytorch-training-pipeline - PyTorch Faster R-CNN Object Detection on Custom Dataset

awesome-tiny-object-detection - 🕶 A curated list of Tiny Object Detection papers and related resources.

autodistill - Images to inference with no labeling (use foundation models to train supervised models).

fastdup - fastdup is a powerful free tool designed to rapidly extract valuable insights from your image & video datasets, helping you increase the quality of your dataset images and labels and reduce your data-operations costs at an unparalleled scale.