vision_transformer_tf vs gpt-mini

| | vision_transformer_tf | gpt-mini |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 24 | 13 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Last Commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
vision_transformer_tf
Implemented Vision Transformers from scratch using TensorFlow 2.x 🚀, finetuning and converting to TF-Lite ✅

Hi r/learnmachinelearning, I have finished implementing the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", popularly known as the Vision Transformer paper. With my implementation, any vision transformer model can be finetuned fairly easily on a custom dataset, and converting the weights to TensorFlow Lite is also supported. The codebase is straightforward to understand and debug, so you can learn how the vision transformer works internally by stepping through the whole pipeline. Link to the GitHub project: https://github.com/TheTensorDude/vision_transformer_tf
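For a sense of the paper's core mechanism, here is a minimal TF2 sketch of the 16x16 patch-embedding step. It is illustrative only, assumes standard ViT-Base hyperparameters (16x16 patches, 768-dim embeddings), and is not the actual code from vision_transformer_tf.

```python
# A minimal sketch of ViT patch embedding in TF2 (illustrative only;
# NOT the code from vision_transformer_tf).
import tensorflow as tf

class PatchEmbedding(tf.keras.layers.Layer):
    """Splits an image into fixed-size patches and linearly projects each one."""
    def __init__(self, patch_size=16, embed_dim=768, **kwargs):
        super().__init__(**kwargs)
        self.embed_dim = embed_dim
        # A strided Conv2D is equivalent to cutting non-overlapping 16x16
        # patches and applying one shared linear projection to each patch.
        self.proj = tf.keras.layers.Conv2D(
            filters=embed_dim, kernel_size=patch_size, strides=patch_size)

    def call(self, images):
        x = self.proj(images)  # (batch, H/16, W/16, embed_dim)
        # Flatten the spatial grid into a token sequence:
        # "an image is worth 16x16 words".
        return tf.reshape(x, (tf.shape(x)[0], -1, self.embed_dim))

# A 224x224 RGB image becomes a sequence of 14 * 14 = 196 patch tokens.
tokens = PatchEmbedding()(tf.zeros((1, 224, 224, 3)))
print(tokens.shape)  # (1, 196, 768)
```

In the full model, a learnable class token is prepended to this sequence, positional embeddings are added, and the result is fed through a standard Transformer encoder.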
[P] Finetune any Vision Transformer architecture on your custom data 🚀, Convert to TensorFlow Lite ✅

The GitHub link to the project can be found here.
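Keras-to-TFLite conversion generally goes through the standard tf.lite.TFLiteConverter API. Below is a generic, hypothetical sketch: the stand-in model and the vit.tflite filename are assumptions, and the repository's own export script may differ.

```python
import tensorflow as tf

# Stand-in for a finetuned ViT classifier; any Keras model works here.
inputs = tf.keras.Input(shape=(224, 224, 3))
pooled = tf.keras.layers.GlobalAveragePooling2D()(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(pooled)
finetuned_model = tf.keras.Model(inputs, outputs)

# Convert with the standard TFLiteConverter API.
converter = tf.lite.TFLiteConverter.from_keras_model(finetuned_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
# Allow ops without TFLite builtins (common in transformers) to fall back
# to regular TensorFlow kernels.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_bytes = converter.convert()

with open("vit.tflite", "wb") as f:
    f.write(tflite_bytes)
```

Enabling SELECT_TF_OPS lets transformer ops that lack TFLite builtins fall back to full TensorFlow kernels, at the cost of a larger runtime.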
[P] Implemented Vision Transformers 🚀 from scratch using TensorFlow 2.x

My implementation: GitHub Link
Implemented Vision Transformers 🚀 from scratch using TensorFlow 2.x

My implementation: https://github.com/TheTensorDude/vision_transformer_tf
gpt-mini
What are some alternatives?
maxvit - [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmentation, image quality, and generative modeling...
Transformer-in-Transformer - An implementation of Transformer in Transformer in TensorFlow for image classification, with attention inside local patches
coral-pi-rest-server - Perform inference with TensorFlow Lite models on an RPi, with acceleration from a Coral USB stick
SpectralEmbeddings - A Python library for generating node embeddings from knowledge graphs using GCN kernels and graph autoencoders. Variations include VanillaGCN, ChebyshevGCN, and SplineGCN, along with an SDNE-based graph autoencoder.
saliency - Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).
alpha-zero-general - A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
TFLiteClassification - TensorFlow Lite Image Classification Python Implementation
minGPT-TF - A minimal TF2 re-implementation of OpenAI's GPT training
Fast-Transformer - An implementation of "Fastformer: Additive Attention Can Be All You Need", a Transformer variant, in TensorFlow
D2L_Attention_Mechanisms_in_TF - This repository contains TensorFlow 2 code for the Attention Mechanisms chapter of the Dive into Deep Learning (D2L) book.