mup VS google-research

Compare mup vs google-research and see what are their differences.

                 mup                 google-research
Mentions         12                  98
Stars            1,186               32,915
Growth           3.4%                1.1%
Activity         2.7                 9.6
Last commit      7 days ago          5 days ago
Language         Jupyter Notebook    Jupyter Notebook
License          MIT License         Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

mup

Posts with mentions or reviews of mup. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-13.
  • Announcing xAI July 12th 2023
    3 projects | /r/xdotai | 13 Jul 2023
    Our team is led by Elon Musk, CEO of Tesla and SpaceX. We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, and μTransfer. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.
  • Bard is getting better at logic and reasoning
    1 project | news.ycombinator.com | 7 Jun 2023
    I believe tuning hyperparameters well, without a lot of waste, for the largest models was only figured out by Greg Yang/Microsoft Research around 2022 (cited in the GPT-4 paper):

    https://arxiv.org/abs/2203.03466

    Also part of how they predicted the loss ahead of time so well.
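
    An illustrative sketch of that loss-prediction idea (with made-up numbers; this is not OpenAI's actual procedure): fit a power law to the final losses of much smaller training runs, then extrapolate it to the target compute budget.

      # Illustrative only: predicting a large run's loss by extrapolating a
      # power-law fit over small runs. All numbers below are fabricated.
      import numpy as np
      from scipy.optimize import curve_fit

      compute = np.array([1e18, 3e18, 1e19, 3e19, 1e20])  # FLOPs of the small runs
      loss    = np.array([3.90, 3.55, 3.22, 2.98, 2.75])  # their final losses

      def power_law(c, a, b, irreducible):
          # loss = a * (c / c0)^(-b) + irreducible, normalized to the smallest run
          return a * (c / compute[0]) ** (-b) + irreducible

      params, _ = curve_fit(power_law, compute, loss, p0=(2.0, 0.2, 2.0))
      print(f"predicted loss at 1e23 FLOPs: {power_law(1e23, *params):.2f}")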

  • Cerebras Open Sources Seven GPT models and Introduces New Scaling Law
    3 projects | /r/mlscaling | 28 Mar 2023
    This is the first time I have seen muP applied by a third party. See the Cerebras Model Zoo, where muP models have a scale-invariant constant LR.
  • OpenAI’s policies hinder reproducible research on language models
    2 projects | news.ycombinator.com | 23 Mar 2023
    I guess, but it's actually not simple to do that, in my experience. There's another paper on that: https://arxiv.org/abs/2203.03466

    Why isn’t chinchilla running google AI chat or whatever then?

  • [D] Anyone else witnessing a panic inside NLP orgs of big tech companies?
    3 projects | /r/MachineLearning | 16 Mar 2023
    Well, but it isn't like this kind of research is new. Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer (2022) tuned hyperparameters on a 40M-parameter model, transferred them to a 6.7B model, and beat OpenAI's 6.7B run. It is likely that what OpenAI did was perfect this kind of research. I note that four authors of that paper (Igor Babuschkin, Szymon Sidor, David Farhi, Jakub Pachocki) are credited for pretraining optimization & architecture at https://openai.com/contributions/gpt-4.
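
    As a rough illustration of that tune-small, transfer-large workflow, here is a minimal sketch using the `mup` package (github.com/microsoft/mup), assuming its documented set_base_shapes / MuReadout / MuAdam API; the widths and learning rate are placeholders, not values from the paper.

      import torch.nn as nn
      from mup import MuAdam, MuReadout, set_base_shapes

      class MLP(nn.Module):
          def __init__(self, width, d_in=784, d_out=10):
              super().__init__()
              self.fc1 = nn.Linear(d_in, width)
              self.fc2 = nn.Linear(width, width)
              self.readout = MuReadout(width, d_out)  # output layer gets the muP scaling

          def forward(self, x):
              return self.readout(self.fc2(self.fc1(x).relu()).relu())

      def make_mup_model(width):
          model = MLP(width)
          # The base/delta models only supply shape information; they are never
          # trained. (The mup README also recommends re-initializing with mup.init.)
          set_base_shapes(model, MLP(64), delta=MLP(128))
          return model

      # 1. Sweep the learning rate on a narrow proxy model...
      proxy = make_mup_model(width=256)
      proxy_opt = MuAdam(proxy.parameters(), lr=3e-3)   # best lr found on the proxy

      # 2. ...then reuse the same lr, unchanged, on the wide target model.
      target = make_mup_model(width=8192)
      target_opt = MuAdam(target.parameters(), lr=3e-3)
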
  • [R] Greg Yang's work on a rigorous mathematical theory for neural networks
    4 projects | /r/MachineLearning | 7 Jan 2023
    Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes: https://arxiv.org/abs/1910.12478
    Tensor Programs II: Neural Tangent Kernel for Any Architecture: https://arxiv.org/abs/2006.14548
    Tensor Programs III: Neural Matrix Laws: https://arxiv.org/abs/2009.10685
    Tensor Programs IV: Feature Learning in Infinite-Width Neural Networks: https://proceedings.mlr.press/v139/yang21c.html
    Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer: https://arxiv.org/abs/2203.03466
  • [D] How does one choose a learning rate schedule for models that take days or weeks to train?
    2 projects | /r/MachineLearning | 15 Sep 2022
  • How to do meaningful work as an independent researcher? [Discussion]
    2 projects | /r/MachineLearning | 28 Apr 2022
  • DeepMind’s New Language Model, Chinchilla (70B Parameters), Which Outperforms GPT-3
    3 projects | news.ycombinator.com | 11 Apr 2022
    I think there remains an immense amount of such suboptimality still hanging from the tree, so to speak.

    For example, our recent paper "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer"[1] shows that even learning rate and initialization used by existing models are deeply wrong. By just picking them correctly (which involves some really beautiful mathematics), we can effectively double the model size of the GPT-3 6.7B model (to be comparable in quality to the 13B model across the suite of benchmark tasks).

    Large neural networks behave in a way we are only beginning to understand well just because each empirical probe of any such model is so much more expensive and time consuming than typical models. But principled theory here can have a lot of leverage by pointing out the right direction to look, as it did in our work.

    [1] http://arxiv.org/abs/2203.03466
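
    Very roughly, the corrections amount to width-dependent rescalings. A hedged sketch of their approximate shape for Adam, paraphrased from memory of the paper and the mup README (consult the paper's tables for the authoritative rules):

      def mup_adam_rescalings(width, base_width, base_hidden_lr):
          """Approximate muP corrections relative to a tuned base width (hedged)."""
          m = width / base_width  # width multiplier
          return {
              # Adam learning rate on hidden weight matrices shrinks like 1/width...
              "hidden_weight_lr": base_hidden_lr / m,
              # ...the readout's logits are scaled down like 1/width...
              "output_logit_multiplier": 1.0 / m,
              # ...and attention uses 1/d_head rather than 1/sqrt(d_head).
              "attention_scale": "1/d_head instead of 1/sqrt(d_head)",
          }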

  • "Training Compute-Optimal Large Language Models", Hoffmann et al 2022 {DeepMind} (current LLMs are significantly undertrained)
    1 project | /r/mlscaling | 31 Mar 2022
    On the hyperparameter front there seems to be some overlap with the recent hyperparameter transfer paper, which I get the impression Microsoft is going to try to scale, and which was referenced (and so is known) by the authors of this DeepMind paper. Which is to say, there's a good chance we'll be seeing models of this size trained with more optimal hyperparameters pretty soon.

google-research

Posts with mentions or reviews of google-research. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-10.
  • Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
    11 projects | news.ycombinator.com | 10 Apr 2024
    People on here will be happy to say that I do a similar thing; however, my sequence length is dynamic because I also use a 2nd data structure - I'll use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likeliness and separately a trie that models all words and phrases (so, n-gram). Not sure how many total nodes because sentence lengths vary in training data, but there are about 200,000 entry points (keys), so probably about 2-10 million total nodes in the default setup.

    "Constructing 7-gram LM": They likely started with bigrams (what I use) which only tells you the next word based on 1 word given, and thought to increase accuracy by modeling out more words in a sequence, and eventually let the user (developer) pass in any amount they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of just keeping them as bigrams and making a totally separate structure that models out an n-gram of all phrases (e.g. could be a 24-token long sequence or 100+ tokens etc. I model it all) and if that phrase is found, then I just get the bigram assumption of the last token of the phrase. This works better when the training data is more diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only change wildly toward the end - I don't find this pattern in typical data though, maybe for certain coding and other tasks there are those patterns though. But because it's not dynamic and they make you provide that number, even a low number (any phrase longer than 2 words) - theirs will always have to do more lookup work than with simple bigrams and they're also limited by that fixed number as far as accuracy. I wonder how scalable that is - if I need to train on occasional ~100-word long sentences but also (and mostly) just ~3-word long sentences, I guess I set this to 100 and have a mostly "undefined" trie.

    I also thought of the name "LMJS", theirs is "jslm" :) but I went with simply "next-token-prediction" because that's what it ultimately does as a library. I don't know what theirs is really designed for other than proving a concept. Most of their code files are actually comments and hypothetical scenarios.

    I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)

    And next I'm implementing 8-dimensional embeddings that are converted to normalized vectors between 0-1 to see if doing math on them does anything useful beyond similarity, right now they look like this:

      [nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
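
    One way to read the bigram-plus-phrase-trie setup described above, sketched with hypothetical names (the actual next-token-prediction library may be organized quite differently):

      from collections import Counter, defaultdict

      class NextTokenModel:
          def __init__(self):
              self.bigrams = defaultdict(Counter)  # word -> counts of its next words
              self.trie = {}                       # nested dict over tokens, phrases of any length

          def train(self, sentences):
              for tokens in sentences:
                  for a, b in zip(tokens, tokens[1:]):
                      self.bigrams[a][b] += 1
                  node = self.trie
                  for t in tokens:                 # index the full phrase, token by token
                      node = node.setdefault(t, {})

          def predict(self, context):
              # Walk the context through the phrase trie to find the longest known
              # phrase prefix; remember the last token that actually matched.
              node, matched = self.trie, None
              for t in context:
                  node = node.get(t)
                  if node is None:
                      break
                  matched = t
              # "Bigram assumption of the last token of the phrase": predict the most
              # frequent continuation, falling back to the context's last token.
              key = matched if matched is not None else context[-1]
              counts = self.bigrams.get(key)
              return counts.most_common(1)[0][0] if counts else None

      model = NextTokenModel()
      model.train([["the", "weather", "is", "nice"], ["the", "weather", "is", "bad"]])
      print(model.predict(["the", "weather", "is"]))  # -> "nice"
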
  • Google Research website is down
    1 project | news.ycombinator.com | 5 Apr 2024
  • Jpegli: A New JPEG Coding Library
    9 projects | news.ycombinator.com | 3 Apr 2024
    The change was literally just made: https://github.com/google-research/google-research/commit/4a...

    It appears this was in response to Hacker News comments.

  • Multi-bitrate JPEG compression perceptual evaluation dataset 2023
    1 project | news.ycombinator.com | 31 Jan 2024
  • Vector Databases: A Technical Primer [pdf]
    7 projects | news.ycombinator.com | 12 Jan 2024
    There are options such as Google's ScaNN that may let you go farther before needing to consider specialized databases.

    https://github.com/google-research/google-research/blob/mast...
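
    For context, a minimal ScaNN usage sketch, following the builder API shown in the ScaNN README; the dataset and tuning parameters below are placeholders.

      import numpy as np
      import scann  # pip install scann

      # Toy data: 100k 128-dim vectors, normalized for dot-product (cosine) search.
      dataset = np.random.rand(100_000, 128).astype(np.float32)
      dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)

      searcher = (
          scann.scann_ops_pybind.builder(dataset, 10, "dot_product")
          .tree(num_leaves=1000, num_leaves_to_search=100, training_sample_size=50_000)
          .score_ah(2, anisotropic_quantization_threshold=0.2)
          .reorder(100)
          .build()
      )

      neighbors, distances = searcher.search_batched(dataset[:5])
      print(neighbors.shape)  # (5, 10): indices of the 10 nearest vectors per query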

  • Labs.Google
    1 project | news.ycombinator.com | 22 Dec 2023
    I feel it was unnecessary to create this because https://research.google/ already exists? It just seems like they want to take another URL with a "pure" domain name instead of subdirectories, etc.
  • Smerf: Streamable Memory Efficient Radiance Fields
    3 projects | news.ycombinator.com | 13 Dec 2023
    https://github.com/google-research/google-research/blob/mast...
  • Shisa 7B: a new JA/EN bilingual model based on Mistral 7B
    2 projects | /r/LocalLLaMA | 7 Dec 2023
    You could also try some dedicated translation models like https://huggingface.co/facebook/nllb-moe-54b (or https://github.com/google-research/google-research/tree/master/madlad_400 for something smaller) and see how they do.
  • Translate to and from 400+ languages locally with MADLAD-400
    1 project | /r/LocalLLaMA | 10 Nov 2023
    Google released T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. Turns out the vocabulary was wrong, but they uploaded the correct one last week.
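
    If you would rather skip T5X entirely, a hedged sketch using the Hugging Face conversions (this assumes the google/madlad400-3b-mt checkpoint id and MADLAD's <2xx> target-language prefix convention):

      from transformers import T5ForConditionalGeneration, T5Tokenizer

      name = "google/madlad400-3b-mt"  # assumed Hugging Face checkpoint id
      tok = T5Tokenizer.from_pretrained(name)
      model = T5ForConditionalGeneration.from_pretrained(name)

      # MADLAD-400 selects the target language with a <2xx> prefix token.
      ids = tok("<2ja> The weather is nice today.", return_tensors="pt").input_ids
      out = model.generate(ids, max_new_tokens=64)
      print(tok.decode(out[0], skip_special_tokens=True))
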
  • Mastering ROUGE Matrix: Your Guide to Large Language Model Evaluation for Summarization with Examples
    2 projects | dev.to | 8 Oct 2023

What are some alternatives?

When comparing mup and google-research you can also consider the following projects:

com.openai.unity - A Non-Official OpenAI Rest Client for Unity (UPM)

qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

NTK4A - Code for the paper: "Tensor Programs II: Neural Tangent Kernel for Any Architecture"

fast-soft-sort - Fast Differentiable Sorting and Ranking

gpt-3 - GPT-3: Language Models are Few-Shot Learners

faiss - A library for efficient similarity search and clustering of dense vectors.

GP4A - Code for NeurIPS 2019 paper: "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes"

ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

cdx-index-client - A command-line tool for using CommonCrawl Index API at http://index.commoncrawl.org/

Milvus - A cloud-native vector database, storage for next generation AI applications

nn - 🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

struct2depth - Models and examples built with TensorFlow