a-PyTorch-Tutorial-to-Object-Detection
Put-In-Context
| | a-PyTorch-Tutorial-to-Object-Detection | Put-In-Context |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 2,960 | 16 |
| Growth | - | - |
| Activity | 4.9 | 0.0 |
| Latest Commit | 6 months ago | over 2 years ago |
| Language | Python | MATLAB |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
a-PyTorch-Tutorial-to-Object-Detection
-
Beginner: Object (shape) detection in binary images
I have also experimented with SSD300 models from this example: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection, but again, I think the lack of RGB/greyscale data makes this largely useless?
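If the inputs really are single-channel, one common workaround is to replicate the channel so a backbone pretrained on RGB data still accepts the tensor. Here is a minimal sketch (the batch below is synthetic, and whether this actually helps depends on the data):

```python
import torch

# SSD300 backbones pretrained on RGB images expect 3-channel input,
# while binary/greyscale images carry only one channel.
binary_batch = (torch.rand(4, 1, 300, 300) > 0.5).float()  # synthetic N x 1 x H x W batch
rgb_like = binary_batch.repeat(1, 3, 1, 1)                 # N x 3 x H x W, shaped for the detector
print(rgb_like.shape)  # torch.Size([4, 3, 300, 300])
```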
-
Learning resources: multi-object localization
This SSD walkthrough is also pretty good and goes into much more depth on the concepts.
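For a concrete taste of one idea that walkthrough spends time on, here is a minimal sketch of SSD-style prior ("default") boxes for a single feature map; the feature-map size, scale, and aspect ratios below are illustrative, not the tutorial's exact configuration:

```python
import torch

# Tile every cell of a feature map with boxes at several aspect ratios,
# expressed as fractions of the image in (cx, cy, w, h) form.
fmap_size, scale, ratios = 38, 0.1, [1.0, 2.0, 0.5]
priors = []
for i in range(fmap_size):
    for j in range(fmap_size):
        cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
        for r in ratios:
            priors.append([cx, cy, scale * r ** 0.5, scale / r ** 0.5])
priors = torch.tensor(priors).clamp_(0, 1)
print(priors.shape)  # torch.Size([4332, 4]) = 38 * 38 * 3 boxes
```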
-
What is an easy way to find an image within another image?
Something like this perhaps? https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection
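If the goal is literally to locate a fixed crop inside a larger image (rather than to detect an object category), classical template matching is a simpler alternative to training a detector. A minimal OpenCV sketch, with hypothetical file paths:

```python
import cv2

# Slide the template over the scene and score similarity at each position.
scene = cv2.imread("scene.png")        # hypothetical larger image
template = cv2.imread("template.png")  # hypothetical image to find
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)

# The best match is the location with the highest normalized score.
_, max_val, _, max_loc = cv2.minMaxLoc(result)
h, w = template.shape[:2]
print(f"best match at {max_loc} (score {max_val:.2f}), box {w}x{h}")
```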
Put-In-Context
-
Putting visual recognition in context - Link to free Zoom lecture by the authors in comments
Hi all, we do free Zoom lectures for the reddit community. This talk will cover visual recognition networks and the role of contextual information.

Link to event (June 24): https://www.reddit.com/r/2D3DAI/comments/mr9nlj/putting_visual_recognition_in_context/

The talk is based on the speakers' papers:
- Putting visual object recognition in context (CVPR 2020) - Paper: https://arxiv.org/abs/1911.07349 - Git: https://github.com/kreimanlab/Put-In-Context
- When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes - Paper: http://arxiv.org/abs/2104.02215 - Git: https://github.com/kreimanlab/WhenPigsFlyContext

Talk abstract: Recent studies have shown that visual recognition networks can be fooled by placing objects in inconsistent contexts (e.g., a pig floating in the sky). This lecture covers two representative works modeling the role of contextual information in visual recognition. We systematically investigated critical properties of where, when, and how context modulates recognition. In the first work, we focused on the amount of context, context and object resolution, the geometrical structure of context, context congruence, and the temporal dynamics of contextual modulation in real-world images. In the second work, we explored more challenging properties of contextual modulation, including gravity, object co-occurrences, and relative sizes, in synthetic environments.

In both works, we conducted a series of experiments to gain insights into the impact of contextual cues on both human and machine vision:
- Psychophysics experiments to establish a human benchmark for out-of-context recognition, which we then compared with state-of-the-art computer vision models to quantify the gap between the two.
- We proposed new context-aware recognition models. The models captured useful information for contextual reasoning, enabling human-level performance and significantly better robustness in out-of-context conditions compared to baseline models, across both synthetic and other existing out-of-context natural-image datasets.

Presenters' bios:
- Philipp Bomatter is a master's student in Computational Science and Engineering at ETH Zurich. He is interested in artificial intelligence and neuroscience and currently works on a project concerning contextual reasoning in vision at the Kreiman Lab at Harvard University.
- Mengmi Zhang completed her PhD in the Graduate School for Integrative Sciences and Engineering, NUS, in 2019. She is now a postdoc in the Kreiman Lab at Children's Hospital, Harvard Medical School. Her research interests include computer vision, machine learning, and cognitive neuroscience. In particular, she studies high-level cognitive functions in humans, including attention, memory, learning, and reasoning, through psychophysics experiments, machine learning approaches, and neuroscience.

(The talk will be recorded and uploaded to YouTube; you can see all past lectures and recordings in /r/2D3DAI.)
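As a concrete illustration of the out-of-context effect the talk discusses, the sketch below pastes an object crop into an incongruent background and probes a pretrained classifier. This is a minimal sketch, not the authors' code; the file names, paste position, and choice of ResNet-50 are assumptions.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the probe classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical files: a background scene and an object crop.
background = Image.open("sky.jpg").convert("RGB")
obj = Image.open("pig_crop.png").convert("RGB").resize((80, 80))
background.paste(obj, (100, 40))  # the classic "pig floating in the sky"

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
with torch.no_grad():
    logits = model(preprocess(background).unsqueeze(0))

# Compare the top predictions with and without the incongruent context
# to see whether the object class drops out of the top-5.
print(logits.softmax(-1).topk(5))
```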
-
[R] Putting visual recognition in context - Link to free Zoom lecture by the authors in comments
Git: https://github.com/kreimanlab/Put-In-Context
What are some alternatives?
mmrotate - OpenMMLab Rotated Object Detection Toolbox and Benchmark
generative-inpainting-pytorch - A PyTorch reimplementation of the paper Generative Image Inpainting with Contextual Attention (https://arxiv.org/abs/1801.07892)
SSD-pytorch - SSD: Single Shot MultiBox Detector pytorch implementation focusing on simplicity
PandaCrazy-Max - PandaCrazy Chrome Extension for Amazon Mturk
mmdetection - OpenMMLab Detection Toolbox and Benchmark
ailia-models - The collection of pre-trained, state-of-the-art AI models for ailia SDK
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
generative_inpainting - DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
CRAFT-pytorch - Official implementation of Character Region Awareness for Text Detection (CRAFT)
SINet - Camouflaged Object Detection, CVPR 2020 (Oral)
ssd_keras - A Keras port of Single Shot MultiBox Detector
WhenPigsFlyContext - When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes