| | neural-tangents | timm-vis |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 2,225 | 39 |
| Stars growth | 0.6% | - |
| Activity | 7.6 | 0.0 |
| Latest commit | 2 months ago | almost 3 years ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
neural-tangents
-
Any Deep ReLU Network Is Shallow
The neural tangent kernel is used to capture the power of a fully-trained deep net of infinite width.
https://openreview.net/pdf?id=rkl4aESeUH, https://github.com/google/neural-tangents
> It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width.
https://arxiv.org/abs/1711.00165
And of course, one needs to look back at SVMs applying a kernel function and separating with a line, which looks a lot like an ANN with a single hidden layer followed by a linear mapping.
https://stats.stackexchange.com/questions/238635/kernel-meth...
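The single-hidden-layer claim above can be checked numerically: for i.i.d. standard-normal weights and a ReLU hidden layer, the limiting GP covariance has a known closed form (the order-1 arc-cosine kernel of Cho & Saul). A minimal NumPy sketch, with the inputs `x` and `y` chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -0.2])   # arbitrary illustrative inputs
y = np.array([0.3, -1.0, 0.8])

def relu_nngp(x, y):
    """Closed-form GP covariance for one ReLU hidden layer with
    i.i.d. N(0, 1) weights (arc-cosine kernel of order 1)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return nx * ny * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)

# Monte Carlo estimate: average relu(w.x) * relu(w.y) over many random hidden
# units, which is what a wide random network's output covariance converges to.
W = rng.standard_normal((200_000, 3))
emp = np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

print(f"empirical {emp:.4f}  vs  analytic {relu_nngp(x, y):.4f}")
```

The two numbers agree to Monte Carlo error, which shrinks as the width grows; this is exactly the infinite-width equivalence the quoted paper formalizes.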
-
[R] Training Machine Learning Models More Efficiently with Dataset Distillation
Code for https://arxiv.org/abs/2011.00050 found: https://github.com/google/neural-tangents
-
[D] Relationship Between Kernels, Neural Networks and Gaussian Process
I saw that you asked about neural tangent kernels (NTK) in another post yesterday -- be aware that what you're referencing in the present post are "neural network gaussian processes" (NNGP), which is distinct from NTK! The README of https://github.com/google/neural-tangents should help lift confusion. (I also took the term NNGP from there.)
-
[D] neural tangent kernel
It's true! There have been dozens of papers published on this topic, some of which are listed here: https://github.com/google/neural-tangents#references
timm-vis
-
[P] - timm-vis: Visualizer for PyTorch image models
Hello, thanks for bringing these points up. Currently the methods work only with 3-channel inputs, and I have not implemented Grad-CAM yet. The visualization method closest to Grad-CAM would be a saliency map, which shows the influence of each pixel on the model outputs: it computes gradients with respect to the input image, unlike Grad-CAM, which computes gradients of the last activation layer. I plan to add 1-channel input support and Grad-CAM support in the next few days. In the meantime, I encourage you to take a look at the existing methods in the Python notebook to see if anything interests you.
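The input-gradient idea described above can be sketched in a few lines of plain PyTorch. This is not timm-vis's actual code; the toy conv net and 32x32 shape are stand-ins so the example runs without downloading a timm model:

```python
import torch
import torch.nn as nn

# Toy stand-in for a timm image model (any nn.Module with logits output works).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

img = torch.randn(1, 3, 32, 32, requires_grad=True)  # 3-channel input, as noted
logits = model(img)
logits[0, logits.argmax()].backward()  # gradient of the top class w.r.t. the *input*

# Per-pixel influence: absolute gradient, max over the channel dimension.
saliency = img.grad.abs().amax(dim=1)
```

Grad-CAM would instead hook the last convolutional activations and weight them by their pooled gradients, which is why it needs per-architecture support.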
What are some alternatives?
pymc-resources - PyMC educational resources
advertorch - A Toolbox for Adversarial Robustness Research
eigenlearning - codebase for "A Theory of the Inductive Bias and Generalization of Kernel Regression and Wide Neural Networks"
fastai - The fastai deep learning library
mango - Parallel Hyperparameter Tuning in Python
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
indaba-pracs-2022 - Notebooks for the Practicals at the Deep Learning Indaba 2022.
Made-With-ML - Learn how to design, develop, deploy and iterate on production-grade ML applications.
Bayesian-Optimization-in-FSharp - Bayesian Optimization via Gaussian Processes in F#
photoguard - Raising the Cost of Malicious AI-Powered Image Editing
hyper-nn - Easy Hypernetworks in Pytorch and Jax
google-research - Google Research