Mask_RCNN
Swin-Transformer-Object-Detection
| | Mask_RCNN | Swin-Transformer-Object-Detection |
|---|---|---|
| Mentions | 28 | 4 |
| Stars | 24,119 | 1,710 |
| Growth | 0.8% | 0.7% |
| Activity | 0.0 | 0.0 |
| Latest commit | 19 days ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mask_RCNN
-
Intuitively Understanding Harris Corner Detector
The most widely used algorithms for classical feature detection today are "whatever opencv implements"
In terms of tech that's advancing at the moment? https://co-tracker.github.io/ if you want to track individual points, https://github.com/matterport/Mask_RCNN and its descendants if you want to detect, say, the cover of a book.
-
Analyze defects and errors in the created images
Mask R-CNN
-
List of AI-Models
-
Thought Dump About Recent AI Advancements And Palantir
- Mask RCNN https://github.com/matterport/Mask_RCNN (open source, so also not Palantir's)
-
Why are python dependencies so broken?
pip install git+https://github.com/matterport/Mask_RCNN
-
DeepCreamPy & Hent-AI Guide: Installation and anime censorship removal (Version 2)
It is important to realize that for its masking procedures, Hent-AI uses the Mask RCNN (MRCNN) package from Matterport. The problem with this version of MRCNN is that it is not compatible with Tensorflow 2.X, which effectively limits Hent-AI to Tensorflow 1.X. Since Tensorflow 1.15 is the last of the Tensorflow 1.X releases and uses CUDA 10.0, which supports a maximum compute capability of 7.5, the last NVIDIA GPU series compatible with the original Hent-AI implementation is the RTX 2000 series. This is, of course, not optimal, since it means RTX 3000 series and later GPUs cannot be used despite their significant computing power and high VRAM.
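The version constraint described above can be sketched as a quick compatibility check. This is only an illustration: the helper names are hypothetical, and the CUDA-to-compute-capability mapping below covers just the versions mentioned here, not the full table.

```python
# Hypothetical helper illustrating the TF 1.15 / CUDA 10.0 constraint above.
# Both lookup tables are partial and cover only the hardware discussed here.

# Maximum compute capability supported by a given CUDA toolkit (partial).
CUDA_MAX_COMPUTE = {
    "10.0": 7.5,  # Turing (RTX 2000 series) is the newest supported architecture
    "11.0": 8.0,  # Ampere (A100)
    "11.1": 8.6,  # Ampere (RTX 3000 series)
}

# Compute capability of a few consumer GPUs (partial).
GPU_COMPUTE = {
    "RTX 2080": 7.5,  # Turing
    "RTX 3080": 8.6,  # Ampere
}

def gpu_supported(gpu: str, cuda: str) -> bool:
    """True if the GPU's compute capability is within the CUDA toolkit's range."""
    return GPU_COMPUTE[gpu] <= CUDA_MAX_COMPUTE[cuda]

# Tensorflow 1.15 ships against CUDA 10.0, so Ampere cards fall out of range:
print(gpu_supported("RTX 2080", "10.0"))  # True
print(gpu_supported("RTX 3080", "10.0"))  # False
```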
-
[P] Mask R-CNN (matterport) does not generate masks or just generates them randomly
I read that it could be the problem with the scipy version (https://github.com/matterport/Mask_RCNN/issues/2122), so I downgraded it. I also tried to modify shift = np.array([0, 0, 1., 1.]) in utils.py, but nothing helped.
-
Mask RCNN importing error
I am assuming you did a pip install of this GitHub repository; or did you run pip install mrcnn? The mrcnn package on PyPI is just an example package and doesn't have any useful functionality. In addition, where did you get the code you are trying to run: from someone else, or did you write it yourself? I am asking because the import error is to be expected, since there is indeed no InferenceConfig class defined in mrcnn.visualize.
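For context: in Matterport's Mask_RCNN, `InferenceConfig` is normally something you define yourself by subclassing `mrcnn.config.Config`; it lives in user code or the sample scripts, not in `mrcnn.visualize`. Here is a minimal self-contained sketch of that pattern; the `Config` base class below is a simplified stand-in so the example runs without the library installed (with the real package you would write `from mrcnn.config import Config`).

```python
# Sketch of the Config-subclass pattern used with Matterport's Mask_RCNN.
# `Config` is a simplified stand-in for mrcnn.config.Config, which exposes
# many more settings; only the idea of overriding class attributes is shown.

class Config:
    GPU_COUNT = 1
    IMAGES_PER_GPU = 2  # typical training default

    @property
    def batch_size(self):
        # Derived from the two settings above, as the real Config does.
        return self.GPU_COUNT * self.IMAGES_PER_GPU

class InferenceConfig(Config):
    # For inference you usually run one image at a time on one GPU.
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
print(config.batch_size)  # 1
```

The point is that `InferenceConfig` is user-defined, so the import in the failing script needs to point at wherever that class was actually declared.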
- Maskrcnn - Mask R-CNN for object detection and segmentation
-
MRCNN TF==2.7.0
Hello AI learners, check out my own development of Mask-RCNN supporting Tensorflow 2.7.0 and Keras 2.8.0. This is an edit of MRCNN, which supports Tensorflow 1.x only.
Swin-Transformer-Object-Detection
-
Transfer Learning on Swin Transformer as a backbone for instance segmentation using MRCNN
I'm currently trying to transfer learn a set of custom classes of fish, for instance segmentation. I have found the official implementation of Swin Transformer as a backbone for instance segmentation using MRCNN: https://github.com/SwinTransformer/Swin-Transformer-Object-Detection.
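Since that repo is built on mmdetection, transfer learning to custom classes is usually done through a config override rather than code changes. The fragment below is a hedged sketch following mmdetection's config conventions: the `_base_` filename, checkpoint path, and class count are illustrative placeholders, not values taken from the repo.

```python
# Illustrative mmdetection-style config for fine-tuning Swin + Mask R-CNN on
# custom fish classes. The _base_ filename and checkpoint path are placeholders;
# adapt them to the configs actually shipped with Swin-Transformer-Object-Detection.
_base_ = 'mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py'

# Override the box and mask heads so num_classes matches the custom dataset.
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=3),    # e.g. three fish species
        mask_head=dict(num_classes=3)))

# Start from released COCO-pretrained weights; mmdetection re-initializes
# head layers whose shapes no longer match the checkpoint.
load_from = 'checkpoints/mask_rcnn_swin_tiny_coco_pretrained.pth'
```

You would also point the dataset settings at your own annotations (mmdetection supports COCO-format instance masks out of the box).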
-
Advice on Masters project | Vision transformers
Hi, so my project is object detection on trash in the wild, using this fairly obscure dataset: http://tacodataset.org/. I was thinking of applying vision transformers to it for feature extraction: taking the YOLOX implementation, swapping out the backbone with Swin transformers, and performing a bunch of comparisons/experiments for the write-up, sort of like how they applied Swin transformers to Mask R-CNN here. But I am struggling to understand where to begin.
-
[P] I implemented DeepMind's "Perceiver" in PyTorch
Yes, have a look at this paper.
-
[P] Code and pretrained models for Swin Transformer are released (SOTA models on COCO and ADE20K)
Object detection on COCO: https://github.com/SwinTransformer/Swin-Transformer-Object-Detection
What are some alternatives?
yolact - A simple, fully convolutional model for real-time instance segmentation.
YOLOX - YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
mmdetection - OpenMMLab Detection Toolbox and Benchmark
Video-Swin-Transformer - This is an official implementation for "Video Swin Transformers".
mmsegmentation - OpenMMLab Semantic Segmentation Toolbox and Benchmark.
Swin-Transformer-Tensorflow - Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
Mask-RCNN-training-with-docker-containers-on-Sagemaker
Swin-Transformer-Semantic-Segmentation - This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Semantic Segmentation.
Mask-RCNN-Implementation - Mask RCNN Implementation on Custom Data(Labelme)
Perceiver - Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow
yolact - Tensorflow 2.x implementation of YOLACT
Swin-Transformer-Serve - Deploy Swin Transformer using TorchServe