transfiner
Efficient-AI-Backbones
| | transfiner | Efficient-AI-Backbones |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 516 | 3,804 |
| Growth | 2.3% | 3.2% |
| Activity | 0.0 | 5.8 |
| Latest commit | over 1 year ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transfiner
-
I trained a neural net to watch Super Smash Bros
OK, cool. Yeah, it looked like you were playing with PointRend, and I was wondering whether it was Transfiner or not.
-
[D] Alleged academic fraud of "Mask Transfiner for High-Quality Instance Segmentation" (arxiv 2111.13673)
When I visited the project page, I found this issue: Unfair Comparison with HTC and RefineMask? And I think most suspicions are justified. To summarize,
-
[R][P] Mask Transfiner for High-Quality Instance Segmentation + Gradio Web Demo
github: https://github.com/SysCV/transfiner
Efficient-AI-Backbones
-
Researchers From China Introduce Vision GNN (ViG): A Graph Neural Network For Computer Vision Systems
Check out the paper and the GitHub repo.
- GNN for computer vision, beating CNN & Transformer
-
GNN can also work well on computer vision
Vision GNN: An Image is Worth Graph of Nodes

Network architecture plays a key role in deep-learning-based computer vision systems. The widely used convolutional neural networks and transformers treat the image as a grid or sequence structure, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches, which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on this graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: a Grapher module with graph convolution for aggregating and updating graph information, and an FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built at different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code will be available at https://github.com/huawei-noah/CV-Backbones.
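The abstract's core idea (patches as graph nodes, a k-NN graph, a Grapher module that aggregates neighbor information, and a two-layer FFN) can be sketched in a few lines. This is a minimal NumPy illustration, not the official implementation: the function names, the max-relative aggregation variant, and all dimensions are assumptions chosen for clarity.

```python
import numpy as np

def knn_graph(x, k):
    # x: (N, D) node features; return the indices of each node's k nearest neighbors
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d, np.inf)                         # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]

def grapher(x, k=4):
    # Simplified graph convolution (max-relative style, an assumption):
    # aggregate the elementwise max of neighbor differences, then concat with x.
    nbrs = knn_graph(x, k)                    # (N, k) neighbor indices
    diff = x[nbrs] - x[:, None, :]            # (N, k, D) relative features
    agg = diff.max(axis=1)                    # (N, D) aggregated neighborhood info
    return np.concatenate([x, agg], axis=-1)  # (N, 2D) updated node features

def ffn(x, w1, w2):
    # FFN module: two linear layers with a ReLU in between
    return np.maximum(x @ w1, 0) @ w2

rng = np.random.default_rng(0)
nodes = rng.standard_normal((196, 8))   # e.g. a 14x14 grid of patches -> 196 nodes
h = grapher(nodes, k=4)                 # (196, 16)
w1 = rng.standard_normal((16, 32)) * 0.1
w2 = rng.standard_normal((32, 8)) * 0.1
out = ffn(h, w1, w2)                    # (196, 8): one Grapher+FFN block
print(out.shape)
```

A full ViG stacks many such Grapher+FFN blocks with normalization and residual connections; this sketch only shows the data flow of a single block.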
What are some alternatives?
DAD-3DHeads - Official repo for DAD-3DHeads: A Large-scale Dense, Accurate and Diverse Dataset for 3D Head Alignment from a Single Image (CVPR 2022).
MPViT - [CVPR 2022] MPViT: Multi-Path Vision Transformer for Dense Prediction
RefineMask - RefineMask: Towards High-Quality Instance Segmentation with Fine-Grained Features (CVPR 2021)
FQ-ViT - [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
RethinkVSRAlignment - (NeurIPS 2022) Rethinking Alignment in Video Super-Resolution Transformers
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
deepvision - PyTorch and TensorFlow/Keras image models with automatic weight conversions and equal API/implementations - Vision Transformer (ViT), ResNetV2, EfficientNetV2, NeRF, SegFormer, MixTransformer, (planned...) DeepLabV3+, ConvNeXtV2, YOLO, etc.
Restormer - [CVPR 2022--Oral] Restormer: Efficient Transformer for High-Resolution Image Restoration. SOTA for motion deblurring, image deraining, denoising (Gaussian/real data), and defocus deblurring.
PyTorch-Vision-Transformer-ViT-MNIST-CIFAR10 - Simplified Pytorch implementation of Vision Transformer (ViT) for small datasets like MNIST, FashionMNIST, SVHN and CIFAR10.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
Pretrained-Language-Model - Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.