Put-In-Context
Putting Visual Object Recognition in Context (by kreimanlab)
generative_inpainting
DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral (by JiahuiYu)
| | Put-In-Context | generative_inpainting |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 17 | 3,155 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 2 years ago | over 2 years ago |
| Language | MATLAB | Python |
| License | - | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Put-In-Context
Posts with mentions or reviews of Put-In-Context.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-06-07.
- Putting visual recognition in context - Link to free zoom lecture by the authors in comments
Hi all, we do free Zoom lectures for the reddit community. This talk will cover visual recognition networks and the role of contextual information.

Link to event (June 24): https://www.reddit.com/r/2D3DAI/comments/mr9nlj/putting_visual_recognition_in_context/

Talk is based on the speakers' papers:

- Putting Visual Object Recognition in Context (CVPR 2020)
  - Paper: https://arxiv.org/abs/1911.07349
  - Git: https://github.com/kreimanlab/Put-In-Context
- When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes
  - Paper: http://arxiv.org/abs/2104.02215
  - Git: https://github.com/kreimanlab/WhenPigsFlyContext

Talk abstract: Recent studies have shown that visual recognition networks can be fooled by placing objects in inconsistent contexts (e.g., a pig floating in the sky). This lecture covers two representative works modeling the role of contextual information in visual recognition. We systematically investigated critical properties of where, when, and how context modulates recognition. In the first work, we focused on the amount of context, context and object resolution, the geometrical structure of context, context congruence, and the temporal dynamics of contextual modulation on real-world images. In the second work, we explored more challenging properties of contextual modulation, including gravity, object co-occurrences, and relative sizes in synthetic environments.

In both works, we conducted a series of experiments to gain insight into the impact of contextual cues on both human and machine vision:

- Psychophysics experiments to establish a human benchmark for out-of-context recognition, which we then compared against state-of-the-art computer vision models to quantify the gap between the two.
- New context-aware recognition models that we proposed. The models captured useful information for contextual reasoning, enabling human-level performance and significantly better robustness in out-of-context conditions compared to baseline models, across both synthetic and other existing out-of-context natural image datasets.

Presenters' bios:

- Philipp Bomatter is a master's student in Computational Science and Engineering at ETH Zurich. He is interested in artificial intelligence and neuroscience and currently works on a project concerning contextual reasoning in vision at the Kreiman Lab at Harvard University.
- Mengmi Zhang completed her PhD in the Graduate School for Integrative Sciences and Engineering, NUS, in 2019. She is now a postdoc in the Kreiman Lab at Children's Hospital, Harvard Medical School. Her research interests include computer vision, machine learning, and cognitive neuroscience. In particular, she studies high-level cognitive functions in humans, including attention, memory, learning, and reasoning, through psychophysics experiments, machine learning approaches, and neuroscience.

(The talk will be recorded and uploaded to YouTube; you can see all past lectures and recordings in /r/2D3DAI.)
- [R] Putting visual recognition in context - Link to free zoom lecture by the authors in comments
Git: https://github.com/kreimanlab/Put-In-Context
generative_inpainting
Posts with mentions or reviews of generative_inpainting.
We have used some of these posts to build our list of alternatives
and similar projects.
- after instantiating a graph, data or node needs to pass before it loads the parameters?
- how to make older tensorflow work properly with eager execution?
Tried to run https://github.com/JiahuiYu/generative_inpainting, but the code was written in TensorFlow 1.x.
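TF1-era code like this can often be run under TensorFlow 2 by switching off eager execution through the `tf.compat.v1` compatibility layer. A minimal sketch of the pattern (a toy graph, not code from the DeepFill repo itself):

```python
import tensorflow as tf

# TensorFlow 2 executes eagerly by default; TF1-style code expects
# graph mode, so disable eager execution via the compat layer first.
tf.compat.v1.disable_eager_execution()

# Build a tiny TF1-style graph: a placeholder feeding a reduction op.
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 2], name="x")
y = tf.reduce_sum(x * 2.0, axis=1)

# Evaluate the graph with an explicit Session and feed_dict,
# exactly as TF1 code such as generative_inpainting does.
with tf.compat.v1.Session() as sess:
    out = sess.run(y, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]})
    print(out)  # a NumPy array with values 6.0 and 14.0
```

Note that this only restores graph-mode semantics; repos that depend on `tf.contrib` (removed in TF 2) still need TF 1.15 or a rewrite of those parts.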
What are some alternatives?
When comparing Put-In-Context and generative_inpainting you can also consider the following projects:
a-PyTorch-Tutorial-to-Object-Detection - SSD: Single Shot MultiBox Detector | a PyTorch Tutorial to Object Detection
generative-inpainting-pytorch - A PyTorch reimplementation for paper Generative Image Inpainting with Contextual Attention (https://arxiv.org/abs/1801.07892)
data-efficient-gans - [NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
PandaCrazy-Max - PandaCrazy Chrome Extension for Amazon Mturk
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
ailia-models - The collection of pre-trained, state-of-the-art AI models for ailia SDK
SINet - Camouflaged Object Detection, CVPR 2020 (Oral)
WhenPigsFlyContext - When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes (by kreimanlab)