roboflow-100-benchmark
Code for replicating Roboflow 100 benchmark results and programmatically downloading benchmark datasets
Thanks for sharing @jonbaer! I’m one of the co-founders of Roboflow. Some additional resources and context:
* Blog Post: https://blog.roboflow.com/roboflow-100/
* Paper: https://arxiv.org/abs/2211.13523
* GitHub: https://github.com/roboflow-ai/roboflow-100-benchmark
At Roboflow, we've seen users fine-tune hundreds of thousands of computer vision models on custom datasets.
We observed that there's a huge disconnect between the types of tasks people are actually trying to perform in the wild and the types of datasets researchers are benchmarking their models on.
Datasets like MS COCO (with hundreds of thousands of images of common objects) are often used in research to compare models' performance, but then those models are used to find galaxies, look at microscope images, or detect manufacturing defects in the wild (often trained on small datasets containing only a few hundred examples). This leads to big discrepancies in models' stated and real-world performance.
-
Good idea. I haven’t looked too closely yet at the “hard” datasets.
We originally considered “fixing” the labels on these datasets by hand, but ultimately decided that label error is one of the challenges “real world” datasets present, and one that models should become more robust against. There is some selection bias: we did make sure the datasets we chose passed the eye test (in other words, it looked like the user spent a considerable amount of time annotating, and a sample of the images showed they had labeled some object of interest).
For aerial images in particular, my guess is that these models suffer from the “small object problem”[1], where the subjects are tiny compared to the size of the image. Trying a sliding-window approach like SAHI[2] on them would probably produce much better results (at the expense of much lower inference speed).
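To make the sliding-window idea concrete, here is a minimal sketch of tiled inference in the spirit of SAHI: split a large image into overlapping tiles, run a detector on each tile, and shift the per-tile boxes back into full-image coordinates. The `detect` callable and all names here are illustrative placeholders, not SAHI's actual API.

```python
# Hypothetical sketch of sliding-window ("tiled") inference for small objects.
# `detect` stands in for any detector returning (x1, y1, x2, y2, score) boxes.

def make_tiles(width, height, tile=512, overlap=0.2):
    """Return (x, y) offsets of overlapping tiles covering the image."""
    step = int(tile * (1 - overlap))
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    # make sure the right/bottom edges are covered by a final tile
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

def sliced_predict(image, detect, tile=512, overlap=0.2):
    """Run `detect(crop)` on each tile and map boxes to image coordinates."""
    h, w = image.shape[:2]
    boxes = []
    for x, y in make_tiles(w, h, tile, overlap):
        crop = image[y:y + tile, x:x + tile]
        for (bx1, by1, bx2, by2, score) in detect(crop):
            boxes.append((bx1 + x, by1 + y, bx2 + x, by2 + y, score))
    return boxes  # in practice you'd NMS-merge detections across tiles
```

The overlap matters: without it, objects straddling a tile boundary are cut in half and missed by the detector on both sides.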
-
Haven't heard of those two, but would be really awesome to see an integration. We have an open API[1] for just this reason: we really want to make it easy to use (and source) your data across all the different tools out there. We've recently launched integrations with other labeling[2] and AutoML[3] tools (and have integrations with the big-cloud AutoML tools as well[4]). We're hoping to have a bunch more integrations with other MLOps tools & platforms in 2023.
Re synthetic data specifically, we've written a couple of how-to guides for creating data from context augmentation[5], Unity Perception[6], and Stable Diffusion[7] & are talking to some others as well; it seems like a natural integration point (and someplace where we don't need to reinvent the wheel).
[1] https://docs.roboflow.com/rest-api
[2] https://github.com/SkalskiP/make-sense/pull/298
[3] https://github.com/ultralytics/yolov5/discussions/10425
[4] https://docs.roboflow.com/train/pro-third-party-training-int...
[5] https://blog.roboflow.com/how-to-create-a-synthetic-dataset-...
[6] https://blog.roboflow.com/unity-perception-synthetic-dataset...
[7] https://blog.roboflow.com/synthetic-data-with-stable-diffusi...
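As a sketch of what "sourcing your data across tools" looks like in practice, here is the documented quickstart pattern from the official `roboflow` pip package for pulling a dataset version in a given export format. The workspace/project names and API key are placeholders you would fill in yourself.

```python
# Hedged sketch: programmatic dataset download via the `roboflow` pip package
# (pip install roboflow). All argument values are placeholders.

def download_dataset(api_key, workspace, project, version=1, fmt="coco"):
    """Pull one dataset version in the given export format (e.g. 'coco', 'yolov5')."""
    from roboflow import Roboflow  # imported lazily so the sketch stays self-contained

    rf = Roboflow(api_key=api_key)
    proj = rf.workspace(workspace).project(project)
    # .download() fetches and unzips the export locally and returns its location
    return proj.version(version).download(fmt)
```

Usage would look like `download_dataset("YOUR_KEY", "your-workspace", "your-project", 1, "yolov5")`; the export format string controls the annotation layout you get back.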