tensor2tensor VS compare_gan

Compare tensor2tensor vs compare_gan and see what their differences are.

tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. (by tensorflow)

compare_gan

Compare GAN code. (by google)
                   tensor2tensor        compare_gan
Mentions           8                    4
Stars              13,873               1,803
Growth             -                    -
Activity           6.2                  0.0
Latest commit      11 months ago        about 3 years ago
Language           Python               Python
License            Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

tensor2tensor

Posts with mentions or reviews of tensor2tensor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-27.
  • Understand how transformers work by demystifying all the math behind them
    1 project | news.ycombinator.com | 4 Jan 2024
    PE(1, 3) = cos(1 / 10000^(2*1 / 4)) = cos(1 / 10000^.5) ≈ 1

    I also wondered if these formulae were devised with 1-based indexing in mind (though I guess for larger dimensions it doesn't make much difference), as the paper states

    > The wavelengths form a geometric progression from 2π to 10000 · 2π

    That led me to this chain of PRs - https://github.com/tensorflow/tensor2tensor/pull/177 - turns out the original code was actually quite different from what's stated in the paper. I guess slight variations in how you calculate this embedding don't affect things too much?
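    The arithmetic in that comment can be checked against a minimal sketch of the paper's formula (plain Python, 0-based indexing; `positional_encoding` is an illustrative helper, not the tensor2tensor function, which the PR chain above shows differed from the paper):

    ```python
    import math

    def positional_encoding(pos, d_model):
        """Sinusoidal positional encoding as stated in the paper:
            PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
            PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
        """
        pe = [0.0] * d_model
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[i] = math.sin(angle)
            if i + 1 < d_model:
                pe[i + 1] = math.cos(angle)
        return pe

    # The worked example above: for pos=1, d_model=4, dimension 3 is the
    # cosine of the pair with 2i=2, so
    # PE(1, 3) = cos(1 / 10000^(2/4)) = cos(0.01) ≈ 1.
    pe = positional_encoding(1, 4)
    ```

    Whether the indices are 0- or 1-based only shifts which exponent each dimension gets, which is consistent with the comment's guess that small variations here don't change much.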

  • [P] Why the Original Transformer Figure Is Wrong, And Some Other Interesting Tidbits
    2 projects | /r/MachineLearning | 27 May 2023
    The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
  • Why the Original Transformer LLM Figure Is Wrong, and Other Interesting Tidbits
    2 projects | news.ycombinator.com | 24 May 2023
  • What Are Transformer Models and How Do They Work?
    2 projects | news.ycombinator.com | 15 Apr 2023
    The visualisation here may be helpful.

    https://github.com/tensorflow/tensor2tensor/issues/1591

  • [P] Why I quit my lucrative job at Google to start Vectara? (neural search as a service for developers everywhere).
    2 projects | /r/MachineLearning | 17 Oct 2022
    Found relevant code at https://github.com/tensorflow/tensor2tensor + all code implementations here
  • [D] Resources for Understanding The Original Transformer Paper
    5 projects | /r/MachineLearning | 8 Sep 2021
    Code for https://arxiv.org/abs/1706.03762 found: https://github.com/tensorflow/tensor2tensor
  • Alias-Free GAN
    5 projects | news.ycombinator.com | 23 Jun 2021
    Roughly every assumption you've stated is mistaken.

    I would say that your view here, is what I thought ML would be when I got in. If I had your faith in the process still, I would be saying the same things you're saying here.

    The reason I'm saying the exact opposite is to ensure what you've said becomes the norm.

    Let's go through your points. I'll address each of them in detail.

    Think of playing pool at a bar together with your coworker. You've been on the job for some years; they just got their Github credentials, and are eager to get started.

    While you're playing pool, your friend starts trying to convince you of something you know isn't true. What do you do? You listen, chat, and keep playing pool.

    Your theory is that they'll learn on the job that what they're saying makes no sense, so, your best bet for now is to relax and keep playing pool.

    You're trying to convince me of your position. Unfortunately, based on the things that you've been saying, it indicates you haven't had a lot of experience doing what you're proposing. If you had, you'd be saying something approximate to what I'm saying now. Which of us should change their minds?

    I probably should. I spent two years trying to convince myself that none of what I was saying was true.

    That's called "gaslighting": https://en.wikipedia.org/wiki/Gaslighting

    I was reluctant to call explicit attention to that word, since I really was trying to chill with you and just talk.

    But if you're trying to understand why I was stressed, it's because I really felt that many of the papers I tried to reproduce, use for my own purposes, or integrate into my bag of tricks, were claiming things that I'd say are mistaken knowledge.

    You seem to be under the impression that, when Karras releases the contribution, that the science will be verified.

    The science doesn't get verified. Karras is already working on the next thing.

    The verification here is that the videos clearly show the results. That's nice. That gives me a target to aim for, if I wanted to try Karras' method.

    But it doesn't help me verify Karras' claims. Firstly, there's no way to know whether I've achieved "success," or something approximate to success. Maybe my model is mostly right, but there was some curious quirk during training that made it slightly different. I'm not worried about that case though.

    The real problem is that there aren't any tools when things go wrong. When I try to reproduce a paper by reading it, there's nothing to help me. Your position is "just be smarter." My position is, "I've tried. Many times."

    Either I'm very stupid, or the paper seems to be mistaken.

    That's how I end up feeling after most of the papers I tried to replicate. Many of these replication attempts lasted weeks. My longest replication lasted over a year, before I found the crucial bug preventing me from doing exactly what you want me to do. (BatchNorm layers were initialized to 0 instead of 1 in BigGAN-Deep's "unofficial but closest we have" codebase: https://github.com/google/compare_gan/issues/54)
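    The bug report in that parenthetical comes down to a scale parameter initialized to zero. A toy NumPy sketch (illustrative only, not the actual compare_gan code) shows why a zero-initialized gamma silently erases the layer's input:

    ```python
    import numpy as np

    def batch_norm(x, gamma, beta, eps=1e-5):
        """Per-feature batch normalization: normalize each column,
        then scale by gamma and shift by beta."""
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    x = np.random.default_rng(0).normal(size=(8, 4))

    # Correct: gamma initialized to ones, so the layer passes through the
    # normalized input and can learn to rescale it.
    ok = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))

    # The bug described above: gamma initialized to zeros, so every output
    # is exactly beta -- the layer destroys its input from step one, yet
    # training still "runs" without any error.
    broken = batch_norm(x, gamma=np.zeros(4), beta=np.zeros(4))
    ```

    Nothing crashes in the broken case, which is exactly why this class of bug can hide in a codebase for years.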

    If you haven't had this experience, you will. The only reason you're saying the things you're saying, is because you haven't spent a lot of time trying. I feel this in the core of my being; if it's mistaken, please, I'd love to know.

    Let's start with a simple example.

    https://github.com/tensorflow/tensor2tensor/pull/1883

    Now, here's a model in tensor2tensor, a pretty popular lib. I explain in the PR why this model was "catastrophically broken, from the beginning, but may have seemed like it worked."

    I would say that many, many ML papers have such an error.

    So when you're saying "reproduce the model," you mean "reproduce the errors in their code," if the code isn't available. Which it isn't here, until September. Therefore, it's not a scientific contribution until September.

    Now, from what I hear, your position seems to be that in September, the science will happen. That's true. The science may happen, because Karras.

    Most of us don't learn from Karras. Karras is impactful. But there's a whole long tail of followers that try to follow Karras' example. And those often don't release models: https://news.ycombinator.com/item?id=27127198

    The reply goes into detail about why models aren't released, and whether it's less frequent now than it was before. But my point is, in that case -- that thread -- I believe science wasn't happening. Do you agree?

    If we don't agree on that point, then I feel there's a fundamental difference of opinion. We'll have to agree to disagree, or persuade each other to change our minds. If you want more examples, I can give you many.

    My contention is that if I tried to reproduce the model in that thread -- which I did, successfully, with BigGAN-Deep in Tensorflow -- it would take me over a year. Which it did, for BigGAN-Deep.

    Your feelings are, well yes, but the paper gave you some useful ideas to use.

    I'm saying, the code didn't work at all. The model was the only thing that saved me. I reverse engineered the DeepMind public model release, including the tensorflow graph, looking for the difference between that model and the training code I was using.

    The final fix, was to change 0 to 1.

    The model worked.

    Either I am very stupid, or we're in territory where a certain scientific rigor is warranted.

    The reason that I'm speaking my mind now, here, on a Karras release, is because most releases aren't Karras-quality. They leave parts out of the process, like Karras is doing here. Sometimes they come later. Most of the time, they don't.

    Now. As someone who has done what I've described above, for two years straight, which of my assumptions feel mistaken?

compare_gan

Posts with mentions or reviews of compare_gan. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-20.
  • A difficult decision to set us up for the future
    2 projects | news.ycombinator.com | 20 Jan 2023
    Sure, if you think it’s helpful. Here’s the message I sent that got me the new job. The context is that they were pushing back a little bit during salary negotiation and asking for a resume, so I channeled my Jewish ancestors and went into full salesman mode:

    https://battle.shawwn.com/Shawn%20Presser's%20Resume.pdf was the resume I used for Groq. I planned to update it after finishing out the year. In terms of my ML work, here's some highlights of my work prior to Groq:

    - A Newsweek article about various GPT work I did https://www.newsweek.com/openai-text-generator-gpt-2-video-g...

    - I was the first to demonstrate that GPT-2 could play chess https://www.theregister.com/2020/01/10/gpt2_chess/

    - ... which DeepMind referenced: https://twitter.com/theshawwn/status/1226916484938530819

    - GPT-2 music https://soundcloud.com/theshawwn/sets/ai-generated-videogame...

    - Invented swarm training https://www.docdroid.net/faDq8Bu/swarm-training-v01a.pdf

    - Built books3, the largest component of The Pile, a training dataset for language models (later used to train GPT-J): https://arxiv.org/abs/2101.00027

    - Started the first ML discord server, grew the community to >2k members (Eleuther was formed there)

    - Strongly encouraged Nat to invest in Carmack, made the initial intro

    - Reverse engineered BigGAN’s model over the course of ~6mo to locate a bug in their open source implementation https://github.com/google/compare_gan/issues/54

    ML research makes me happy, so I’ll be doing it for the foreseeable future. @StabilityAI expressed interest in bringing me on to help fix problems with their diffusion training. I’d prefer to work with you, but if it’s not possible to increase the equity or salary offer, I understand. Are you sure you can’t bump it?

    They bumped it. Anyway, I hope that was helpful. I don’t know how relevant my recession experiences are compared to, say, someone in webdev. But if you’re a talented dev and someone’s lowballing you, be sure to at least try to negotiate. Don’t let the recession fears prevent you from turning down an initial offer.

    That said, I recognize that there are loads of people in a position where they’d be thankful to have any work at all. And I imagine I’ll be in that position soon enough — 35 is getting too close to 55 for comfort.

  • Alias-Free GAN
    5 projects | news.ycombinator.com | 23 Jun 2021

  • DeepMind achieves SOTA image recognition with 8.7x less compute
    2 projects | news.ycombinator.com | 14 Feb 2021
    I'm surprised so many people want to see our BigGAN images. Thank you for asking :)

    You can watch the training process here: http://song.tensorfork.com:8097/#images

    It's been going on for a month and a half, but I leave it running mostly as a fishtank rather than to get to a specific objective. It's fun to load it up and look at a new random image whenever I want. Plus I like the idea of my little TPU being like "look at me! I'm doing work! Here's what I've prepared for you!" so I try to keep my little fella online all the time.

    https://i.imgur.com/0O5KZdE.png

    The model is getting quite good. I kind of forgot about it over the past few weeks. StyleGAN could never get anywhere close to this level of detail. I had to spend roughly a year tracking down a crucial bug in the implementation that prevented BigGAN from working very well until now: https://github.com/google/compare_gan/issues/54

    I've never seen conglomerate pictures like this used in AI training. Do you train models on these 4x4 images? What's the purpose vs a single picture at a time? Does the model know that you're feeding it 4x4 examples, or does it have to figure that out itself?

    Nah, the grid is just for convenient viewing for humans. Robots see one image at a time. (Or more specifically, a batch of images; we happen to use batch size 2 or 4, I forget, so each core sees two images at a time, and then all 8 cores broadcast their gradients to each other and average, so it's really seeing 16 or 32 images at a time.)
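    The "broadcast and average" step described here can be sketched in plain NumPy (a toy stand-in with made-up shapes and a fake gradient function, not real TPU all-reduce code):

    ```python
    import numpy as np

    def local_gradients(core_batch, weights):
        # Stand-in for a real backward pass: gradient of the mean squared
        # activation with respect to the weights.
        return 2 * core_batch.T @ (core_batch @ weights) / len(core_batch)

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, 1))

    # 8 cores, each seeing its own small batch (batch size 2 per core),
    # so one step effectively covers 16 examples.
    per_core_batches = [rng.normal(size=(2, 4)) for _ in range(8)]

    # Each core computes gradients locally, then all cores average them,
    # so every replica applies the identical update.
    grads = [local_gradients(b, weights) for b in per_core_batches]
    avg_grad = np.mean(grads, axis=0)
    weights -= 0.01 * avg_grad
    ```

    The averaging is what makes the replicas stay in sync: after every step, all cores hold the same weights, just as if one core had trained on the combined batch.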

    I feel a bit silly plugging our community so much, but it's really true. If you like tricks like this, join the Tensorfork discord:

    https://discord.com/invite/x52Xz3y

    My theory when I set it up was that everyone has little tricks like this, but there's no central repository of knowledge / place to ask questions. But now that there are 1,200+ of us, it's become the de facto place to pop in and share random ideas and tricks.

    For what it's worth, https://thisanimedoesnotexist.ai/ was a joint collaboration of several Tensorfork discord members. :)

    If you want future updates about this specific BigGAN model, twitter is your best bet: https://twitter.com/search?q=(from%3Atheshawwn)%20biggan&src...

  • Applications of Deep Neural Networks [pdf]
    3 projects | news.ycombinator.com | 24 Jan 2021
    Sure thing! https://github.com/google/compare_gan/issues/54

    It’s not much of a writeup. It’s basically saying, hey, this is zero when it should be one.

    The results were dramatic. It went from blobs to replicating the biggan paper almost perfectly. I think we’re at a FID of 11 or so on imagenet.

    Stole a year of my life to track it down. But it was a puzzle I couldn’t put down. It haunted my dreams. I was tossing and turning like, but why won’t it work... why won’t it work...

What are some alternatives?

When comparing tensor2tensor and compare_gan you can also consider the following projects:

pytorch-seq2seq - Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.

StyleCLIP - Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)

OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch

mmaction2 - OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark

Deep-Learning-Papers-Reading-Roadmap - Deep learning papers reading roadmap for anyone who is eager to learn this amazing tech!

deep-diamond - A fast Clojure Tensor & Deep Learning library

OPUS-MT-train - Training open neural machine translation models

xlnet - XLNet: fine tuning on RTX 2080 GPU - 8 GB

Seq2seq-PyTorch

alias-free-gan - Alias-Free GAN project website and code

seq2seq - Attention-based sequence to sequence learning