automl
gpt-3
| | automl | gpt-3 |
|---|---|---|
| Mentions | 7 | 39 |
| Stars | 6,143 | 9,406 |
| Growth | 0.5% | - |
| Activity | 5.0 | 3.5 |
| Latest commit | 18 days ago | over 3 years ago |
| Language | Jupyter Notebook | - |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
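The exact formula behind the activity number is not published; a minimal sketch of the idea described above, with a hypothetical exponential half-life weighting so that recent commits count more than older ones:

```python
from datetime import date, timedelta

def activity_score(commit_dates, today=None, half_life_days=90):
    """Toy relative-activity metric: each commit contributes a weight
    that decays exponentially with its age in days, so recent commits
    have higher weight than older ones."""
    today = today or date.today()
    score = 0.0
    for d in commit_dates:
        age_days = (today - d).days
        score += 0.5 ** (age_days / half_life_days)
    return score

# A project with recent weekly commits outscores one whose identical
# commit history happened a year earlier.
today = date(2024, 1, 1)
recent = [today - timedelta(days=7 * i) for i in range(10)]
old = [today - timedelta(days=365 + 7 * i) for i in range(10)]
assert activity_score(recent, today) > activity_score(old, today)
```

The half-life of 90 days is an assumption chosen for illustration; the site's actual weighting may differ.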
automl
- Slowdown / normalization on the Front Lines
- Lion, a new Optimizer from Google, provides 3-5x speedup compared to AdamW
-
How do I increase the accuracy of small objects when training an object detector?
I'm using Google Brain's EfficientDet repo to train an object detector. What hyperparameters should I choose to increase accuracy for small objects?
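The knobs most often suggested for small objects are a larger input size, smaller base anchors, and a higher-resolution feature-pyramid level. A hedged sketch of such overrides; the key names mirror the style of EfficientDet's `key=value` hparams strings, but treat them as illustrative rather than the repo's exact schema:

```python
# Hypothetical hyperparameter overrides for small-object detection.
small_object_overrides = {
    "image_size": 1024,   # larger input preserves more pixels per small object
    "anchor_scale": 2.0,  # smaller base anchors fit small boxes more tightly
    "min_level": 2,       # include a higher-resolution pyramid level
    "num_scales": 3,      # anchor scales generated per pyramid level
}

def to_hparams_str(overrides):
    """Serialize overrides in a comma-separated key=value form."""
    return ",".join(f"{k}={v}" for k, v in sorted(overrides.items()))

print(to_hparams_str(small_object_overrides))
```

Whether each value helps depends on the dataset; larger inputs in particular trade training speed for resolution.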
-
Android QR Code Detection with TensorFlow Lite
EfficientDet-D0 has comparable accuracy to YOLOv3.
-
[R] Google AI Introduces Two New Families of Neural Networks Called ‘EfficientNetV2’ and ‘CoAtNet’ For Image Recognition
Code for https://arxiv.org/abs/2104.00298 found: https://github.com/google/automl/efficientnetv2
-
Google AI Introduces Two New Families of Neural Networks Called ‘EfficientNetV2’ and ‘CoAtNet’ For Image Recognition
7 Min Read | Paper (CoAtNet) | Paper (EfficientNetV2) | Google blog | Code
-
[R] EfficientNetV2: Smaller Models and Faster Training
Abstract: This paper introduces EfficientNetV2, a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop this family of models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv. Our experiments show that EfficientNetV2 models train much faster than state-of-the-art models while being up to 6.8x smaller.

> Our training can be further sped up by progressively increasing the image size during training, but it often causes a drop in accuracy. To compensate for this accuracy drop, we propose to adaptively adjust regularization (e.g., dropout and data augmentation) as well, such that we can achieve both fast training and good accuracy.

> With progressive learning, our EfficientNetV2 significantly outperforms previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources. Code will be available at this https URL.
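The progressive-learning idea in the abstract — ramp the image size up over training stages while simultaneously strengthening regularization — can be sketched as a simple linear schedule. The stage counts, sizes, and dropout range below are illustrative, not the paper's exact recipe (which also adjusts data augmentation):

```python
def progressive_schedule(stage, num_stages=4,
                         min_size=128, max_size=300,
                         min_dropout=0.1, max_dropout=0.3):
    """EfficientNetV2-style progressive learning sketch: early stages use
    small images with weak regularization; later stages use larger images
    with stronger regularization (dropout stands in for the full recipe)."""
    t = stage / (num_stages - 1)  # 0.0 at the first stage, 1.0 at the last
    image_size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return image_size, dropout

for s in range(4):
    print(s, progressive_schedule(s))
```

Training on small images early is what yields the speedup; raising regularization in step is the paper's fix for the accuracy drop that a fixed regularization setting would cause.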
gpt-3
-
Can ChatGPT improve my L2 grammar?
Are generative AI models useful for learning a language, and if so which languages? Over 90% of ChatGPT's training data was in English. The remaining 10% of data was split unevenly between 100+ languages. This suggests that the quality of the outputs will vary from language to language.
-
GPT4 Can’t Ace MIT
I have doubts it was extensively trained on German data. Who knows about GPT4, but GPT3 is ~92% of English and ~1.5% of German, which means it saw more "die, motherfucker, die" than on "die Mutter".
(https://github.com/openai/gpt-3/blob/master/dataset_statisti...)
- I need help.
-
[R] PaLM 2 Technical Report
Catalan was 0.018% of GPT-3's training corpus: https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
- I'm seriously concerned that if I lost ChatGPT-4 I would be handicapped
- The responses I got from bard after asking why 100 times… he was pissed 😂
-
BharatGPT: India's Own ChatGPT
>Certainly it is pleasing that they are not just doing Hindi, but some of these languages must be represented online by a very small corpus of text indeed. I wonder how effectively an LLM can be trained on such a small training set for any given language?
As long as it's not the main language, it doesn't really matter. Besides English (92.6%), the biggest language by representation (word count) is French, at 1.8%. Most of the languages GPT-3 knows sit at <0.2% representation.
https://github.com/openai/gpt-3/blob/master/dataset_statisti...
Competence in the main language will bleed into the rest.
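Several comments above quote figures from the GPT-3 repo's `languages_by_word_count.csv`. A minimal sketch of ranking languages by share, using a small excerpt of the numbers quoted in this thread as inline data; the column names here are illustrative, so check the actual CSV for its real header before parsing it:

```python
import csv
import io

# Excerpt of the word-count shares quoted in the comments above.
sample_csv = """language,percent_of_words
English,92.6
French,1.8
German,1.5
Catalan,0.018
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
shares = {r["language"]: float(r["percent_of_words"]) for r in rows}

# Rank the non-English languages by representation.
non_english = sorted((lang for lang in shares if lang != "English"),
                     key=shares.get, reverse=True)
print(non_english[0], shares[non_english[0]])  # French tops the non-English share
```

On this excerpt the largest non-English share is French at 1.8%, matching the comment above.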
- GPT-4 gets a B on Scott Aaronson's quantum computing final exam
-
[D] Dumb question: is GPT3 model open-sourced?
And from skimming their GH page, it seems it'd be costly to host as well
- ChatGPT and the Daily Question Thread, re-evaluated with GPT-4.
What are some alternatives?
simple-faster-rcnn-pytorch - A simplified implementation of Faster R-CNN that replicates performance from the original paper
dalle-mini - DALL·E Mini - Generate images from a text prompt
FLAML - A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
TFLiteClassification - TensorFlow Lite Image Classification Python Implementation
DALLE-mtf - OpenAI's DALL-E for large scale training in mesh-tensorflow.
SipMask - SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation (ECCV2020)
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
efficientdet-pytorch - A PyTorch impl of EfficientDet faithful to the original Google impl w/ ported weights
v-diffusion-pytorch - v objective diffusion inference code for PyTorch.
mlkit - A collection of sample apps to demonstrate how to use Google's ML Kit APIs on Android and iOS
dalle-2-preview