Efficient-AI-Backbones Alternatives
Similar projects and alternatives to Efficient-AI-Backbones
-
deepvision
PyTorch and TensorFlow/Keras image models with automatic weight conversions and equal API/implementations - Vision Transformer (ViT), ResNetV2, EfficientNetV2, NeRF, SegFormer, MixTransformer, (planned...) DeepLabV3+, ConvNeXtV2, YOLO, etc.
-
PyTorch-Vision-Transformer-ViT-MNIST-CIFAR10
Simplified PyTorch implementation of the Vision Transformer (ViT) for small datasets such as MNIST, FashionMNIST, SVHN, and CIFAR10.
-
Pretrained-Language-Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
-
MLclf
Transforms the mini-ImageNet and tiny-ImageNet datasets into the format needed for traditional classification tasks as well as for few-shot learning / meta-learning tasks.
Efficient-AI-Backbones reviews and mentions
-
Researchers From China Introduce Vision GNN (ViG): A Graph Neural Network For Computer Vision Systems
- GNN for computer vision, beating CNN & Transformer
-
GNN can also work well on computer vision
Vision GNN: An Image is Worth Graph of Nodes

Network architecture plays a key role in deep learning-based computer vision systems. The widely used convolutional neural network and transformer treat the image as a grid or a sequence, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches, which are viewed as nodes, and construct a graph by connecting each node to its nearest neighbors. Based on this graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: the Grapher module, which uses graph convolution to aggregate and update graph information, and the FFN module, which applies two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built at different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code will be available at https://github.com/huawei-noah/CV-Backbones.
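The abstract's two-module design (a Grapher module that aggregates over a k-nearest-neighbor graph of patches, followed by an FFN of two linear layers) can be illustrated with a minimal NumPy sketch. This is a simplified illustration of the idea, not the paper's implementation: the function names (`knn_graph`, `grapher`, `ffn`), the max-relative aggregation, and all dimensions are assumptions chosen for clarity.

```python
import numpy as np

def knn_graph(x, k):
    # x: (N, D) patch/node features; return (N, k) neighbor indices
    # by pairwise L2 distance, excluding each node itself.
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def grapher(x, k=4):
    # Simplified graph convolution (an assumption, loosely inspired by
    # max-relative aggregation): concatenate each node's feature with the
    # element-wise max of its neighbor differences.
    idx = knn_graph(x, k)
    diff = x[idx] - x[:, None, :]                          # (N, k, D)
    return np.concatenate([x, diff.max(axis=1)], axis=-1)  # (N, 2D)

def ffn(h, w1, w2):
    # FFN module: two linear layers with a ReLU in between.
    return np.maximum(h @ w1, 0) @ w2

rng = np.random.default_rng(0)
nodes = rng.normal(size=(16, 8))    # 16 image patches, 8-dim features
h = grapher(nodes, k=4)             # Grapher: aggregate over the graph
w1 = rng.normal(size=(16, 32))
w2 = rng.normal(size=(32, 8))
out = ffn(h, w1, w2) + nodes        # FFN + residual back to node features
```

Stacking such Grapher+FFN blocks, with the graph rebuilt from the updated node features at each stage, is the basic idea behind both the isotropic and pyramid ViG variants described above.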
Stats
The primary programming language of Efficient-AI-Backbones is Python.
Popular Comparisons
- Efficient-AI-Backbones VS MPViT
- Efficient-AI-Backbones VS FQ-ViT
- Efficient-AI-Backbones VS transfiner
- Efficient-AI-Backbones VS RethinkVSRAlignment
- Efficient-AI-Backbones VS deepvision
- Efficient-AI-Backbones VS PyTorch-Vision-Transformer-ViT-MNIST-CIFAR10
- Efficient-AI-Backbones VS Pretrained-Language-Model
- Efficient-AI-Backbones VS EfficientFormer
- Efficient-AI-Backbones VS dytox
- Efficient-AI-Backbones VS MLclf