efficientnet vs mup

| | efficientnet | mup |
| --- | --- | --- |
| Mentions | 9 | 12 |
| Stars | 2,063 | 1,194 |
| Growth | - | 4.0% |
| Activity | 0.0 | 2.7 |
| Last Commit | 4 months ago | 16 days ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
efficientnet
- Getting Started with Gemma Models
Examples of lightweight models include MobileNet, a computer vision model designed for mobile and embedded vision applications; EfficientDet, an object detection model; and EfficientNet, a CNN that uses compound scaling to achieve better performance. All three are lightweight models from Google.
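As a rough illustration of the compound scaling the excerpt refers to, here is a minimal Python sketch. The alpha/beta/gamma constants are the grid-searched values reported in the EfficientNet paper; the base layer/channel/resolution numbers are hypothetical placeholders, not taken from any real model.

```python
# Sketch of EfficientNet-style compound scaling. ALPHA/BETA/GAMMA are
# the constants from the EfficientNet paper; base_* values are made up
# purely for illustration.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution factors

def compound_scale(phi, base_layers=16, base_channels=32, base_resolution=224):
    """Scale depth, width, and input resolution together so total FLOPS
    grow roughly 2**phi (since alpha * beta**2 * gamma**2 ~= 2)."""
    layers = round(base_layers * ALPHA ** phi)
    channels = round(base_channels * BETA ** phi)
    resolution = round(base_resolution * GAMMA ** phi)
    return layers, channels, resolution

for phi in range(5):
    print(phi, compound_scale(phi))
```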
- How did you make that?!
There was a recent paper by Facebook (2022) in which they modernise a vanilla ConvNet using the latest empirical design choices and manage to achieve state-of-the-art performance with it. This was also done before, with EfficientNet in 2019.
- Why did the original ResNet paper not use dropout?
Not true at all; plenty of SOTA models combine batchnorm and dropout: 1. EfficientNet 2. ResNet-RS 3. timm ResNet50 (appendix)
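As a concrete (and deliberately generic) illustration of the pattern the commenter describes, here is a minimal PyTorch sketch of a batchnorm-heavy conv stack with dropout applied only to the pooled features before the classifier, roughly where EfficientNet places it. The layer sizes are arbitrary, not taken from any cited model.

```python
import torch
import torch.nn as nn

# Minimal sketch of combining batchnorm and dropout in one network;
# sizes are arbitrary placeholders.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(32),
    nn.SiLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.SiLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Dropout(p=0.2),   # dropout on pooled features, after all BN layers
    nn.Linear(64, 1000),
)

x = torch.randn(2, 3, 224, 224)
print(model(x).shape)  # -> torch.Size([2, 1000])
```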
- Increasing Model Dimensionality
- [D] How does one choose a learning rate schedule for models that take days or weeks to train?
- [D] What's the intuition behind certain CNN architectures?
- [D] What are some interesting hidden stuff about CNNs?
Right - I think these days they strike more of a balanced tradeoff between width and depth. One more recent CNN, EfficientNet, carefully chooses the width-to-depth ratio to get the best performance for a given compute budget.
- I made an image recognition model written in NodeJs
EfficientNet is a lightweight convolutional neural network architecture achieving state-of-the-art accuracy with an order of magnitude fewer parameters and FLOPS, on both ImageNet and five other commonly used transfer learning datasets.
- Training custom EfficientNet from scratch (greyscale)
Additionally, if you want to customize the number of filters in EfficientNet, I would suggest using the detailed Keras implementation of EfficientNet in this repository.
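For context on what changing the filter counts involves: Keras-style EfficientNet implementations typically derive per-block filter counts from a width coefficient via a rounding helper along these lines. This is a paraphrase assuming the usual divisor-of-8 convention; check the linked repository for the exact code.

```python
def round_filters(filters, width_coefficient, divisor=8):
    """Scale a base filter count by the width coefficient, rounding to
    the nearest multiple of `divisor` and never dropping below 90% of
    the scaled value."""
    filters *= width_coefficient
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:
        new_filters += divisor
    return int(new_filters)

# e.g. EfficientNet-B4 uses width_coefficient=1.4:
print(round_filters(32, 1.4))    # 48
print(round_filters(1280, 1.4))  # 1792
```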
mup
- Announcing xAI July 12th 2023
Our team is led by Elon Musk, CEO of Tesla and SpaceX. We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, and μTransfer. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.
- Bard is getting better at logic and reasoning
I believe tuning hyperparameters well, without a lot of waste, for the largest models was only figured out by Greg Yang/Microsoft Research around 2022 (cited in the GPT-4 paper):
https://arxiv.org/abs/2203.03466
That is also part of how they predicted the loss ahead of time so well.
- Cerebras Open Sources Seven GPT models and Introduces New Scaling Law
This is the first time I have seen muP applied by a third party. See the Cerebras Model Zoo, where muP models have a scale-invariant constant LR.
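For reference, applying muP with Microsoft's mup package looks roughly like this; a sketch following the microsoft/mup README, with a made-up MLP and made-up widths.

```python
import torch.nn as nn
import torch.nn.functional as F
from mup import MuReadout, set_base_shapes, MuAdam

class MLP(nn.Module):
    """Toy MLP, purely illustrative; only `width` scales."""
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(784, width)
        self.fc2 = nn.Linear(width, width)
        self.readout = MuReadout(width, 10)  # muP-aware output layer

    def forward(self, x):
        return self.readout(F.relu(self.fc2(F.relu(self.fc1(x)))))

model = MLP(width=4096)
# Base/delta models tell mup which dimensions scale with width.
set_base_shapes(model, MLP(width=64), delta=MLP(width=128))
# (the mup README also recommends re-initializing weights with mup.init)
optimizer = MuAdam(model.parameters(), lr=1e-3)
```

Because the readout layer and optimizer are muP-aware, the same constant learning rate stays near-optimal as the width grows, which is the scale invariance the comment refers to.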
- OpenAI’s policies hinder reproducible research on language models
I guess, but it's actually not simple to do that, in my experience. There’s another paper on that: https://arxiv.org/abs/2203.03466
Why isn’t Chinchilla running Google AI chat or whatever, then?
- [D] Anyone else witnessing a panic inside NLP orgs of big tech companies?
Well, but it isn't like this kind of research is new. Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer (2022) tuned hyperparameters on a 40M model, transferred them to a 6.7B model, and beat OpenAI's 6.7B run. It is likely that what OpenAI did was perfect this kind of research. I note that four authors of that paper (Igor Babuschkin, Szymon Sidor, David Farhi, Jakub Pachocki) are credited for pretraining optimization & architecture at https://openai.com/contributions/gpt-4.
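The workflow described above is essentially: sweep hyperparameters on a small proxy model, then reuse the winners at full scale. A hypothetical sketch follows; make_model and train_and_eval are placeholder stubs standing in for real muP model construction and training, not a real API.

```python
# Hypothetical muTransfer-style workflow; both helpers below are
# placeholder stubs, not a real library API.
def make_model(width):
    return {"width": width}  # stand-in for a muP-parameterized model

def train_and_eval(model, lr):
    return 0.0  # stand-in; a real run would return validation accuracy

candidate_lrs = [2.0 ** -k for k in range(4, 12)]

# 1) Sweep on a cheap proxy (the paper used a ~40M-parameter model).
scores = {lr: train_and_eval(make_model(width=256), lr) for lr in candidate_lrs}
best_lr = max(scores, key=scores.get)

# 2) Under muP the optimum transfers: train the large model once.
train_and_eval(make_model(width=16384), best_lr)
```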
- [R] Greg Yang's work on a rigorous mathematical theory for neural networks
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes: https://arxiv.org/abs/1910.12478
Tensor Programs II: Neural Tangent Kernel for Any Architecture: https://arxiv.org/abs/2006.14548
Tensor Programs III: Neural Matrix Laws: https://arxiv.org/abs/2009.10685
Tensor Programs IV: Feature Learning in Infinite-Width Neural Networks: https://proceedings.mlr.press/v139/yang21c.html
Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer: https://arxiv.org/abs/2203.03466
- [D] How does one choose a learning rate schedule for models that take days or weeks to train?
- How to do meaningful work as an independent researcher? [Discussion]
- DeepMind’s New Language Model, Chinchilla (70B Parameters), Which Outperforms GPT-3
I think there remains an immense amount of such suboptimality still hanging from the tree, so to speak.
For example, our recent paper "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer"[1] shows that even learning rate and initialization used by existing models are deeply wrong. By just picking them correctly (which involves some really beautiful mathematics), we can effectively double the model size of the GPT-3 6.7B model (to be comparable in quality to the 13B model across the suite of benchmark tasks).
Large neural networks behave in a way we are only beginning to understand well, because each empirical probe of such a model is so much more expensive and time-consuming than for typical models. But principled theory here can have a lot of leverage by pointing out the right direction to look, as it did in our work.
[1] http://arxiv.org/abs/2203.03466
- "Training Compute-Optimal Large Language Models", Hoffmann et al 2022 {DeepMind} (current LLMs are significantly undertrained)
On the hyperparameter front there seems to be some overlap with the recent hyperparameter transfer paper, which I get the impression Microsoft is going to try to scale, and which was referenced (and so is known) by the authors of this DeepMind paper. Which is to say, there's a good chance we'll be seeing models of this size trained with more optimal hyperparameters pretty soon.
What are some alternatives?
mmpretrain - OpenMMLab Pre-training Toolbox and Benchmark
com.openai.unity - A Non-Official OpenAI Rest Client for Unity (UPM)
segmentation_models - Segmentation models with pretrained backbones. Keras and TensorFlow Keras.
NTK4A - Code for the paper: "Tensor Programs II: Neural Tangent Kernel for Any Architecture"
label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format
gpt-3 - GPT-3: Language Models are Few-Shot Learners
models - Models and examples built with TensorFlow
GP4A - Code for NeurIPS 2019 paper: "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes"
PaddleClas - A treasure chest for visual classification and recognition powered by PaddlePaddle
cdx-index-client - A command-line tool for using CommonCrawl Index API at http://index.commoncrawl.org/
models - A collection of pre-trained, state-of-the-art models in the ONNX format
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠