| | food-recognition-benchmark-starter-kit | TACO |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 66 | 557 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 8 months ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
food-recognition-benchmark-starter-kit
-
54,392 real-world Food Images with 100,256 annotations
What's unique? These are real images of real food, not a biased data set downloaded from food websites. The images were collected by participants in the foodandyou_ch study (and released with their consent).
Dataset preview: https://i.imgur.com/BjH4ypx.png
Download and learn more about the dataset: https://www.aicrowd.com/challenges/food-recognition-benchmark-2022#datasets
Pre-trained MMdetection & Detectron2 models on this dataset: https://github.com/AIcrowd/food-recognition-benchmark-starter-kit
Open benchmark (if you are interested in the ML part): https://www.aicrowd.com/challenges/food-recognition-benchmark-2022/leaderboards
- Dataset containing 54,392 real-world Food Images [and computer vision benchmark]
TACO
-
Does a high-tech trash can that sorts plastic from trash by scanning exist?
http://tacodataset.org/ <- Open source dataset if you want to train a classifier, I like this one
-
Advice on Masters project | Vision transformers
Hi, so my project is about object detection on trash in the wild using this fairly obscure dataset: http://tacodataset.org/, and I was thinking of applying vision transformers to it for feature extraction. My plan was to take the YOLOX implementation, swap out the backbone with Swin transformers, and perform a bunch of comparisons/experiments for the write-up, sort of like how they applied Swin transformers to Mask R-CNN here, but I am struggling to understand where to begin.
-
How to convert Polygons to Bounding Boxes?
I was wondering if anyone had a script, or could point me to one, that converts polygons from image segmentation into bounding boxes for object detection. I am looking to create a trash detector to run on my trash-picking robot. I found the TACO dataset, but it uses segmentation, and I just want to start with bounding boxes. Any help would be appreciated.
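A minimal sketch of the conversion asked about above, assuming the annotations follow the COCO convention TACO uses: each polygon is a flat `[x1, y1, x2, y2, ...]` coordinate list, and the target box format is `[x, y, width, height]`. The function name is ours, not from either repo.

```python
def polygon_to_bbox(polygon):
    """Convert a flat COCO-style polygon [x1, y1, x2, y2, ...]
    into a bounding box [x, y, width, height]."""
    xs = polygon[0::2]  # every even index is an x coordinate
    ys = polygon[1::2]  # every odd index is a y coordinate
    x_min, y_min = min(xs), min(ys)
    return [x_min, y_min, max(xs) - x_min, max(ys) - y_min]

# Example: a triangle with vertices (10, 20), (30, 5), (25, 40)
print(polygon_to_bbox([10, 20, 30, 5, 25, 40]))  # → [10, 5, 20, 35]
```

Applied over a COCO annotation file, this would replace each `segmentation` entry's tightest enclosing box into the `bbox` field; COCO annotations often already carry a `bbox` alongside the polygon, so it is worth checking for that field first.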
What are some alternatives?
car-damage-detection - Detectron2 for car damage detection using custom dataset
YOLOX - YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
datasets - 5,400,000+ Unsplash images made available for research and machine learning
yolov3-tf2 - YoloV3 Implemented in Tensorflow 2.0
covid-chestxray-dataset - We are building an open database of COVID-19 cases with chest X-ray or CT images.
Mask-RCNN-Implementation - Mask RCNN Implementation on Custom Data(Labelme)
TrainYourOwnYOLO - Train a state-of-the-art yolov3 object detector from scratch!
revery - :zap: Native, high-performance, cross-platform desktop apps - built with Reason!
theme-ui - Build consistent, themeable React apps based on constraint-based design principles
Swin-Transformer-Object-Detection - This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Object Detection and Instance Segmentation.
glasgow-litter - A project that explores the relationship between deprivation and litter in Glasgow City.