WhenPigsFlyContext vs Put-In-Context

Compare WhenPigsFlyContext vs Put-In-Context and see what their differences are.

                WhenPigsFlyContext      Put-In-Context
Mentions        2                       2
Stars           16                      16
Growth          -                       -
Activity        0.0                     0.0
Last commit     almost 2 years ago      over 2 years ago
Language        Jupyter Notebook        MATLAB
License         MIT License             -
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
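
The exact weighting behind the activity score isn't published. A minimal sketch of one plausible scheme, recency-weighted commit counting with exponential decay (the half-life parameter is a made-up assumption, not the site's formula), might look like:

    from datetime import datetime, timezone

    def activity_score(commit_dates, half_life_days=30.0):
        # Each commit contributes 0.5 ** (age_days / half_life_days),
        # so a commit from today counts ~1.0 and one from two
        # half-lives ago counts 0.25. Hypothetical weighting only.
        now = datetime.now(timezone.utc)
        total = 0.0
        for d in commit_dates:
            age_days = (now - d).total_seconds() / 86400.0
            total += 0.5 ** (age_days / half_life_days)
        return total

    commits = [
        datetime(2024, 6, 1, tzinfo=timezone.utc),
        datetime(2024, 5, 1, tzinfo=timezone.utc),
        datetime(2023, 6, 1, tzinfo=timezone.utc),  # old commit, near-zero weight
    ]
    print(round(activity_score(commits), 2))

Under any scheme of this shape, a project with no commits for many half-lives scores near 0.0, which is consistent with the 0.0 shown for both projects above.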

WhenPigsFlyContext

Posts with mentions or reviews of WhenPigsFlyContext. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-06-07.
  • Putting visual recognition in context - Link to free zoom lecture by the authors in comments
    2 projects | /r/deeplearning | 7 Jun 2021
    Hi all, we do free Zoom lectures for the Reddit community. This talk will cover visual recognition networks and the role of contextual information.
    Link to event (June 24): https://www.reddit.com/r/2D3DAI/comments/mr9nlj/putting_visual_recognition_in_context/
    The talk is based on the speakers' papers:
    - Putting visual object recognition in context (CVPR 2020). Paper: https://arxiv.org/abs/1911.07349 | Git: https://github.com/kreimanlab/Put-In-Context
    - When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes. Paper: http://arxiv.org/abs/2104.02215 | Git: https://github.com/kreimanlab/WhenPigsFlyContext
    Talk abstract: Recent studies have shown that visual recognition networks can be fooled by placing objects in inconsistent contexts (e.g., a pig floating in the sky). This lecture covers two representative works modeling the role of contextual information in visual recognition. We systematically investigated critical properties of where, when, and how context modulates recognition. In the first work, we studied the amount of context, context and object resolution, the geometrical structure of context, context congruence, and the temporal dynamics of contextual modulation in real-world images. In the second work, we explored more challenging properties of contextual modulation, including gravity, object co-occurrences, and relative sizes in synthetic environments. In both works, we conducted a series of experiments to gain insight into the impact of contextual cues on both human and machine vision:
    - Psychophysics experiments to establish a human benchmark for out-of-context recognition, then compare it with state-of-the-art computer vision models to quantify the gap between the two.
    - New context-aware recognition models. These models captured useful information for contextual reasoning, enabling human-level performance and significantly better robustness in out-of-context conditions compared to baseline models, across both synthetic and existing out-of-context natural image datasets.
    Presenter bios:
    - Philipp Bomatter is a master's student in Computational Science and Engineering at ETH Zurich. He is interested in artificial intelligence and neuroscience and currently works on a project concerning contextual reasoning in vision at the Kreiman Lab at Harvard University.
    - Mengmi Zhang completed her PhD in the Graduate School for Integrative Sciences and Engineering, NUS, in 2019. She is now a postdoc in the Kreiman Lab at Children's Hospital, Harvard Medical School. Her research interests include computer vision, machine learning, and cognitive neuroscience. In particular, she studies high-level cognitive functions in humans, including attention, memory, learning, and reasoning, through psychophysics experiments, machine learning approaches, and neuroscience.
    (The talk will be recorded and uploaded to YouTube; all past lectures and recordings are in /r/2D3DAI.)
    A toy version of the in-context vs. context-removed comparison described here is sketched after this list.
  • [R] Putting visual recognition in context - Link to free zoom lecture by the authors in comments
    2 projects | /r/MachineLearning | 18 Apr 2021
    Git: https://github.com/kreimanlab/WhenPigsFlyContext
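
The manipulation at the heart of both repositories is easy to prototype: classify the same object twice, once embedded in its scene and once with the surrounding context stripped away, and compare the predictions. Below is a rough, hypothetical sketch using an off-the-shelf torchvision classifier; it is not the authors' published pipeline, and the image path and object box are placeholders:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Off-the-shelf ImageNet classifier as a stand-in recognizer.
    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    def top1(img):
        # Return the top-1 ImageNet label and its probability.
        with torch.no_grad():
            probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
        idx = int(probs.argmax())
        return weights.meta["categories"][idx], float(probs[0, idx])

    scene = Image.open("scene.jpg").convert("RGB")  # placeholder image: object in context
    box = (100, 100, 300, 300)                      # placeholder object bounding box
    obj = scene.crop(box)

    # Same object, context removed: paste the crop onto a neutral background.
    no_context = Image.new("RGB", scene.size, (128, 128, 128))
    no_context.paste(obj, box[:2])

    print("in context:      ", top1(scene))
    print("context removed: ", top1(no_context))

A large drop in confidence (or a changed label) once the context is removed or made incongruent is the effect both papers quantify against human observers.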

Put-In-Context

Posts with mentions or reviews of Put-In-Context. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-06-07.
  • Putting visual recognition in context - Link to free zoom lecture by the authors in comments
    2 projects | /r/deeplearning | 7 Jun 2021
    Same announcement as quoted in full under WhenPigsFlyContext above.
  • [R] Putting visual recognition in context - Link to free zoom lecture by the authors in comments
    2 projects | /r/MachineLearning | 18 Apr 2021
    Git: https://github.com/kreimanlab/Put-In-Context

What are some alternatives?

When comparing WhenPigsFlyContext and Put-In-Context, you can also consider the following projects:

a-PyTorch-Tutorial-to-Object-Detection - SSD: Single Shot MultiBox Detector | a PyTorch Tutorial to Object Detection

generative-inpainting-pytorch - A PyTorch reimplementation of the paper Generative Image Inpainting with Contextual Attention (https://arxiv.org/abs/1801.07892)

PandaCrazy-Max - PandaCrazy Chrome Extension for Amazon Mturk

ailia-models - The collection of pre-trained, state-of-the-art AI models for ailia SDK

generative_inpainting - DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral

SINet - Camouflaged Object Detection, CVPR 2020 (Oral)