| | pytorch-grad-cam | tf-keras-vis |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 9,456 | 306 |
| Growth | - | - |
| Activity | 5.4 | 6.9 |
| Latest commit | about 1 month ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytorch-grad-cam
- Exploring GradCam and More with FiftyOne
  For the two examples we will be looking at, we will be using pytorch_grad_cam, an incredible open-source package that makes working with Grad-CAM very easy. There are other excellent tutorials to check out on the repo as well.
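Under the hood, the Grad-CAM step that pytorch_grad_cam wraps is a small computation: global-average-pool the gradients of the class score over each feature map to get per-channel weights, take the weighted sum of the maps, and pass the result through a ReLU. A minimal numpy sketch of just that step, with synthetic arrays standing in for a real conv layer's activations and gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM computation.

    activations: (K, H, W) feature maps from a chosen conv layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (alpha_k in the paper).
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted combination of the feature maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Tiny synthetic example: 4 feature maps of size 7x7.
rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))
grads = rng.standard_normal((4, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)
```

In the real library, the activations and gradients come from forward/backward hooks on the target layer; only the arithmetic above is shown here.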
- Which layers are doing image segmentation on AutoEncoders/U-NET?
  https://github.com/jacobgil/pytorch-grad-cam
- [D] Algorithm for view prediction?
  I know I would like to use Grad-CAM: https://github.com/jacobgil/pytorch-grad-cam
- [P] Adapting Class Activation Maps for Object Detection and Semantic Segmentation
  https://github.com/jacobgil/pytorch-grad-cam is a project with a comprehensive collection of pixel attribution methods for PyTorch (the package name, grad-cam, comes from the original algorithm it implemented).
- [Project] Recent Class Activation Map Methods for CNNs and Vision Transformers
tf-keras-vis
- Help implementing an attention module in a DCGAN (question in comments)
  Hello! I'm trying to implement the deep convolutional GAN of this paper: Weather GAN: Multi-Domain Weather Translation Using Generative Adversarial Networks by Xuelong Li, Kai Kou, Bin Zhao (arXiv) (architecture in image). The training data the authors used consists of images and corresponding segmentation masks with labels 0-5 for each pixel (0 being no weather-related pixel). I have crudely made the segmentation module G(seg), initial generator module G(init), and the discriminator D, but I don't understand how to do the attention module G(att). In the paper they mentioned they used pretrained weights of VGG19, but very little else is said about G(att). I found this library, https://github.com/keisen/tf-keras-vis, which might help me, as I guess I would want G(att) to extract something like the saliency or activation maps multiplied with the image. However, I don't know what kind of layers I should use, or what practicalities to use apart from the input and output. Should I transfer-learn the network with this data, and if so, with the segmentation labels (i.e. if any label 1-5 is present in the attention pixel returned)? Or can I use the pretrained ImageNet weights? Also, does anyone know if the layer colors in the architecture image mean anything, or if they are just selected randomly for visualization? Especially I'm concerned why G(att)'s last encoder layer (3rd layer) is colored differently from the first two. I was first thinking that maybe it means that G(att) is a module inside G(seg), but apparently not. The three middle blocks are apparently residual blocks.
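The saliency map mentioned in the question is essentially the gradient of a class score with respect to the input pixels. tf-keras-vis computes it with automatic differentiation, but the idea can be sketched with central finite differences on a toy scoring function (the linear "model" below is a made-up stand-in, not the paper's G(att) or anything from tf-keras-vis):

```python
import numpy as np

def score(img, w):
    # Toy "class score": a fixed linear readout of the image.
    return float((w * img).sum())

def saliency_map(img, w, eps=1e-4):
    """Vanilla gradient saliency via central finite differences:
    |d score / d pixel| for every pixel."""
    sal = np.zeros_like(img)
    it = np.nditer(img, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        plus, minus = img.copy(), img.copy()
        plus[idx] += eps
        minus[idx] -= eps
        sal[idx] = abs(score(plus, w) - score(minus, w)) / (2 * eps)
    return sal

rng = np.random.default_rng(1)
img = rng.random((4, 4))
w = rng.standard_normal((4, 4))
sal = saliency_map(img, w)
# For a linear score, the saliency is exactly |w|.
print(np.allclose(sal, np.abs(w), atol=1e-5))
```

A real implementation would use a deep network's autodiff gradients rather than finite differences; the point is only that "saliency" here means per-pixel score sensitivity, which could then be multiplied with the image as the poster suggests.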
What are some alternatives?
- Transformer-Explainability - [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
- autokeras - AutoML library for deep learning
- pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
- chitra - A multi-functional library for full-stack deep learning. Simplifies model building, API development, and model deployment.
- pytorch-CycleGAN-and-pix2pix - Image-to-image translation in PyTorch
- livelossplot - Live training loss plot in Jupyter Notebook for Keras, PyTorch, and others
- Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
- explainable-cnn - 📦 PyTorch-based visualization package for generating layer-wise explanations for CNNs.
- pytorch-tutorial - PyTorch tutorial for deep learning researchers
- horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
- Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real time
- easy_explain - A library that helps to explain AI models in a really quick and easy way