| | TokenCut | poolformer |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 285 | 1,226 |
| Growth | - | 0.0% |
| Activity | 1.2 | 0.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TokenCut

- [R][P] Self-supervised Transformers for Unsupervised Object Discovery using Normalized Cut + Hugging Face Spaces Gradio Demo
  GitHub: https://github.com/YangtaoWANG95/TokenCut
poolformer

- Researchers from Sea AI Lab and National University of Singapore Introduce ‘PoolFormer’: A Derived Model from MetaFormer for Computer Vision Tasks
  GitHub: https://github.com/sail-sg/poolformer
- [D] Are Image Transformers Overhyped? "MetaFormer is all you need" explained (5-minute summary by Casual GAN Papers)
  arxiv / code
- [P] Fine-tuning the new PoolFormer (MetaFormer) model on a Kaggle Competitions Dataset
  Code for https://arxiv.org/abs/2111.11418 found: https://github.com/sail-sg/poolformer
What are some alternatives?
pytorch-GAT - My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
pytorch-seq2seq - Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
SpecVQGAN - Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)
D2L_Attention_Mechanisms_in_TF - This repository contains TensorFlow 2 code for the Attention Mechanisms chapter of the Dive into Deep Learning (D2L) book.
HugsVision - HugsVision is an easy-to-use Hugging Face wrapper for state-of-the-art computer vision
nlp-tutorial - Natural Language Processing Tutorial for Deep Learning Researchers
TFLiteClassification - TensorFlow Lite Image Classification Python Implementation
FunMatch-Distillation - TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237.
Transformer-in-Transformer - An Implementation of Transformer in Transformer in TensorFlow for image classification, attention inside local patches
ru-dalle - Generate images from texts. In Russian