mup vs Killed by Google

| | mup | Killed by Google |
|---|---|---|
| Mentions | 12 | 2,305 |
| Stars | 1,186 | 2,366 |
| Growth | 3.4% | - |
| Activity | 2.7 | 7.0 |
| Latest commit | 7 days ago | 5 days ago |
| Language | Jupyter Notebook | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mup
-
Announcing xAI July 12th 2023
Our team is led by Elon Musk, CEO of Tesla and SpaceX. We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, and μTransfer. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.
-
Bard is getting better at logic and reasoning
I believe tuning hyperparameters well, without a lot of waste, for the largest models was only figured out by Greg Yang/Microsoft Research around 2022 (cited in the GPT-4 paper):
https://arxiv.org/abs/2203.03466
Also part of how they predicted the loss ahead of time so well.
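For anyone curious what that looks like in practice, this is roughly how the mup package (the repo tracked here) gets wired up, going by its README as I recall it; the MLP, widths, and learning rate are illustrative choices, not values from the paper:

```python
import torch
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam

class MLP(nn.Module):
    def __init__(self, width=256, d_in=784, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, width)
        self.fc2 = nn.Linear(width, width)
        # MuReadout replaces the final nn.Linear so the output layer
        # picks up muP's extra 1/width multiplier
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(torch.relu(self.fc2(torch.relu(self.fc1(x)))))

target = MLP(width=4096)  # the model you actually want to train
base = MLP(width=256)     # small base model defining the "base shapes"
delta = MLP(width=512)    # a second width so mup can infer which dims scale
set_base_shapes(target, base, delta=delta)
# (the README also recommends re-initializing weights with mup.init helpers
# at this point so the muP init scaling takes effect)
opt = MuAdam(target.parameters(), lr=1e-3)  # lr tuned once on the small proxy
```

The point being that lr is tuned at the small width and reused unchanged at the large one.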
-
Cerebras Open Sources Seven GPT models and Introduces New Scaling Law
This is the first time I have seen muP applied by a third party. See the Cerebras Model Zoo, where muP models use a scale-invariant constant LR.
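A minimal sketch of why a constant base LR can be scale-invariant, assuming muP's 1/width Adam rule for hidden matrices (illustrative numbers only):

```python
# Under muP, Adam's effective step size for hidden weight matrices shrinks
# like base_width / width, so the base learning rate a user configures can
# stay constant as the model widens.
def effective_hidden_lr(base_lr: float, base_width: int, width: int) -> float:
    return base_lr * base_width / width

base_lr = 6e-4  # illustrative value, not from any particular Cerebras config
for width in (256, 1024, 4096):
    print(width, effective_hidden_lr(base_lr, base_width=256, width=width))
# The user-facing base_lr never changes across scales; only the internal
# per-layer step sizes do.
```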
-
OpenAI’s policies hinder reproducible research on language models
I guess, but it's actually not simple to do that, in my experience. There's another paper on that: https://arxiv.org/abs/2203.03466
Why isn't Chinchilla running Google AI chat or whatever, then?
-
[D] Anyone else witnessing a panic inside NLP orgs of big tech companies?
Well, but it isn't like this kind of research is new. Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer (2022) tuned hyperparameters on a 40M model, transferred them to a 6.7B model, and beat OpenAI's 6.7B run. It is likely that what OpenAI did perfects this kind of research. I note that four authors of that paper (Igor Babuschkin, Szymon Sidor, David Farhi, Jakub Pachocki) are credited for pretraining optimization & architecture at https://openai.com/contributions/gpt-4.
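A toy stand-in for that recipe (sweep on a small proxy, reuse the winner at scale); the task, widths, and LR grid here are made up, and real transfer additionally requires the muP parametrization:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(width):
    # tiny MLP standing in for a transformer
    return nn.Sequential(nn.Linear(32, width), nn.ReLU(), nn.Linear(width, 1))

def train(model, lr, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x = torch.randn(512, 32)
    y = x.sum(dim=1, keepdim=True)  # toy regression standing in for pretraining
    for _ in range(steps):
        loss = F.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Cheap sweep at small width (the "40M proxy"), then train big once.
losses = {lr: train(make_model(64), lr) for lr in (1e-4, 1e-3, 1e-2)}
best_lr = min(losses, key=losses.get)
print("best proxy lr:", best_lr, "-> large run loss:", train(make_model(1024), best_lr))
```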
-
[R] Greg Yang's work on a rigorous mathematical theory for neural networks
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes: https://arxiv.org/abs/1910.12478
Tensor Programs II: Neural Tangent Kernel for Any Architecture: https://arxiv.org/abs/2006.14548
Tensor Programs III: Neural Matrix Laws: https://arxiv.org/abs/2009.10685
Tensor Programs IV: Feature Learning in Infinite-Width Neural Networks: https://proceedings.mlr.press/v139/yang21c.html
Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer: https://arxiv.org/abs/2203.03466
- [D] How does one choose a learning rate schedule for models that take days or weeks to train?
- How to do meaningful work as an independent researcher? [Discussion]
-
DeepMind's New Language Model, Chinchilla (70B Parameters), Which Outperforms GPT-3
I think there remains an immense amount of such suboptimality still hanging from the tree, so to speak.
For example, our recent paper "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer"[1] shows that even the learning rate and initialization used by existing models are deeply wrong. By just picking them correctly (which involves some really beautiful mathematics), we can effectively double the model size of GPT-3 6.7B (making it comparable in quality to the 13B model across the suite of benchmark tasks).
Large neural networks behave in ways we are only beginning to understand well, simply because each empirical probe of such a model is so much more expensive and time-consuming than for typical models. But principled theory can have a lot of leverage here by pointing out the right direction to look, as it did in our work.
[1] http://arxiv.org/abs/2203.03466
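A rough paraphrase of the width-scaling prescriptions being referred to, written from memory for Adam training; treat the exact rules as an assumption and consult the paper or the mup repo for the authoritative tables:

```python
# How muP rescales init std and Adam LR with width, relative to values
# tuned at a small base width (my paraphrase; hedged, not authoritative).
def mup_scaled(layer: str, base_lr: float, base_std: float,
               width: int, base_width: int) -> dict:
    m = base_width / width            # shrinks as the model widens
    if layer == "input":              # fan_in is the fixed data dimension
        return {"init_std": base_std, "adam_lr": base_lr}
    if layer == "hidden":             # width x width matrices
        return {"init_std": base_std * m ** 0.5, "adam_lr": base_lr * m}
    if layer == "output":             # readout layer
        return {"init_std": base_std * m, "adam_lr": base_lr * m}
    raise ValueError(layer)

for layer in ("input", "hidden", "output"):
    print(layer, mup_scaled(layer, base_lr=1e-3, base_std=0.02,
                            width=8192, base_width=256))
```

Under the standard parametrization, all three layer types typically share one LR and a 1/sqrt(fan_in)-style init, which is the default the comment above calls deeply wrong.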
-
"Training Compute-Optimal Large Language Models", Hoffmann et al 2022 {DeepMind} (current LLMs are significantly undertrained)
On the hyperparameter front there seems to be some overlap with the recent hyperparameter transfer paper, which I get the impression Microsoft is going to try to scale, and which was referenced (and so is known) by the authors of this DeepMind paper. Which is to say, there's a good chance we'll be seeing models of this size trained with more optimal hyperparameters pretty soon.
Killed by Google
-
Apple Introduces M4 Chip
>Google operates in China albeit via their HK domain.
The Chinese government has access to the iCloud account of every Chinese Apple user.
>They also had project DragonFly if you remember.
Which never materialized.
>The lesser of two evils is that one company doesn’t try to actively profile me (in order for their ads business to be better) with every piece of data it can find and forces me to share all possible data with them.
Apple does targeted and non-targeted advertising as well. Additionally, your carrier has likely sold all of the data they have on you. Apple was also sued for selling user data to ad networks. Odd for a privacy-first company to engage in things like that.
>Google is famously known to kill apps that are good and used by customers: https://killedbygoogle.com/
Google has been around for 26 years, I believe. According to that link, 60 apps were killed in that timeframe. By your claim that Google kills an app a month, that would leave you 252 apps short. Furthermore, those numbers indicate that Google has killed about 2.3 apps per year, or 0.192 apps per month.
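For what it's worth, that arithmetic checks out; a quick sketch using the thread's own figures (26 years, 60 products, neither verified here):

```python
years, killed = 26, 60
months = years * 12                 # 312 months
print(months - killed)              # 252 apps "missing" vs. one-a-month claim
print(round(killed / years, 1))     # ~2.3 killed per year
print(round(killed / months, 3))    # ~0.192 killed per month
```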
>As for the subpar apps: there is a massive difference between the network traffic when on the Home Screen between iOS and Android.
Not sure how that has anything to do with app quality, but if network traffic is your concern, there's probably a lot more an Android user can do than an iOS user to control or eliminate that traffic.
-
Google Fit APIs get shut down in 2025, might break fitness devices
> This is proved by countless "killed by Google" incidents...
Oh, the Google Graveyard: https://killedbygoogle.com/
-
How I migrated from Firebase to Supabase
I was already starting to feel a little cornered in the Google ecosystem and a bit limited by things like backups, vendor lock-in, etc. (and you always have the obvious hanging over your head). Ultimately, I think I just find the mental model of a SQL database more intuitive than a NoSQL database. So I thought to myself: "the longer I leave it, the harder it'll be to make the switch".
- With Vids, Google thinks it has the next big productivity tool for work
-
Google Axion Processors, our new Arm-based CPUs
https://killedbygoogle.com/
Their reputation is deserved. Google Domains was killed only last year!
-
Google's Decision to Effectively Kill-off Small Sites
And this isn't even the first time I've been burned by Google's decisions. If you're familiar at all with the Google Graveyard, you'll know that Google has a long history of killing off products and services that people have come to rely on. This has happened to me a number of times, in both a personal and professional capacity, and frankly it's getting old.
- Google Scholar PDF Reader
-
Calls grow for Sundar Pichai to step down from Google CEO position
Just because Google has a couple of decent services that you're willing to pay for doesn't change the fact that most of their products have a worse life expectancy than a Victorian child in the 1800s. https://killedbygoogle.com
They have ruined every single opportunity since Orkut to be more than an advertising company. With scrapped attempts, false starts, and a lack of commitment through most of the 2010s and even into the early half of the Pixel era, they seemingly haven't learnt to stick with something and iterate on it well.
And over 50% of their revenue comes from search and, by extension, advertising.
To this day, they still haven't evolved from the "throw shit at the wall, then at the fan" strategy, which explains how they have fumbled so much so quickly.
- Google's Gemini Headaches Spur $90B Selloff
-
Our Company Is Doing So Well That You're All Fired
Yeah. The Google Graveyard really shows how far this can go.
https://killedbygoogle.com
The punchline is that in addition to hundreds of failed hobby projects, their stock is doing great. Monopoly power is a helluva drug.
What are some alternatives?
com.openai.unity - A Non-Official OpenAI Rest Client for Unity (UPM)
Materialize - Materialize, a CSS Framework based on Material Design
NTK4A - Code for the paper: "Tensor Programs II: Neural Tangent Kernel for Any Architecture"
babel-plugin-superjson-next - Automatically transform your Next.js Pages to use SuperJSON
gpt-3 - GPT-3: Language Models are Few-Shot Learners
Ryujinx-Games-List - List of games & demos tested on Ryujinx
GP4A - Code for NeurIPS 2019 paper: "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes"
tModLoader - A mod to make and play Terraria mods. Supports Terraria 1.4 (and earlier) installations
cdx-index-client - A command-line tool for using CommonCrawl Index API at http://index.commoncrawl.org/
BetterJoy - Allows the Nintendo Switch Pro Controller, Joycons and SNES controller to be used with CEMU, Citra, Dolphin, Yuzu and as generic XInput
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
kotlin - The Kotlin Programming Language.