memorization
Code for "On Memorization in Probabilistic Deep Generative Models" (by alan-turing-institute)
torch-fidelity
High-fidelity performance metrics for generative models in PyTorch (by toshas)
| | memorization | torch-fidelity |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 5 | 881 |
| Growth | - | - |
| Activity | 10.0 | 8.1 |
| Latest commit | over 2 years ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
memorization
Posts with mentions or reviews of memorization.
We have used some of these posts to build our list of alternatives
and similar projects.
- [D] DALL·E to be made available as API, OpenAI to give users full ownership rights to generated images
Codex is not technically copy-pasting; it is generating a new output that is (almost) exactly the same as, or indistinguishable to the human eye from, the input. That sounds like semantics, but there is no actual copying. There are already music-generating algorithms that can generate short samples indistinguishable from their inputs (memorisation). Dall-E 2 is not there yet, but we are close to prompting "Original Mona Lisa painting" and being given back a painting with striking similarities to the original Mona Lisa. There are already several generative models of images that can largely memorise the inputs used to train them (quick example found using Google: https://github.com/alan-turing-institute/memorization).
torch-fidelity
Posts with mentions or reviews of torch-fidelity.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-04-10.
- [D] A better way to compute the Fréchet Inception Distance (FID)
The Fréchet Inception Distance (FID) is a widespread metric for assessing the quality of the output distribution of an image generative model (GAN, Stable Diffusion, etc.). The metric is not trivial to implement, as one needs to compute the trace of the square root of a matrix. In all the PyTorch repositories I have seen that implement the FID (https://github.com/mseitzer/pytorch-fid, https://github.com/GaParmar/clean-fid, https://github.com/toshas/torch-fidelity, ...), the authors rely on SciPy's sqrtm to compute the matrix square root, which is unstable and slow.
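The full matrix square root can in fact be sidestepped for the trace term of the FID: Tr(sqrtm(C1·C2)) equals the sum of the square roots of the eigenvalues of C1·C2. A minimal NumPy sketch of that eigenvalue-based route (an illustration of the idea, not the implementation used by any of the repositories above; the function name is hypothetical):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between two Gaussians N(mu1, cov1) and N(mu2, cov2).

    FID = ||mu1 - mu2||^2 + Tr(cov1) + Tr(cov2) - 2 Tr(sqrtm(cov1 @ cov2)).
    The trace of the matrix square root is computed from the eigenvalues of
    cov1 @ cov2, avoiding scipy.linalg.sqrtm entirely: the trace of a matrix
    square root is the sum of the square roots of the eigenvalues.
    """
    diff = mu1 - mu2
    # cov1 @ cov2 is similar to a PSD matrix, so its eigenvalues are real
    # and non-negative up to numerical noise; abs() guards against tiny
    # negative or complex round-off before the square root.
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    trace_sqrt = np.sqrt(np.abs(eigvals)).sum()
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * trace_sqrt)
```

For identical Gaussians this returns 0, and for diagonal covariances it reduces to the closed form `||mu1 - mu2||^2 + sum(a) + sum(b) - 2 sum(sqrt(a * b))`.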
- [D] Are there any good FID and KID metrics implementations existing that are compatible with pytorch?
Try torch-fidelity : https://github.com/toshas/torch-fidelity
- How to compute Inception score for 3d GAN
You can, but two ingredients are required: a rich pretrained feature extractor for your data to replace InceptionV3 pretrained on ImageNet, and https://github.com/toshas/torch-fidelity, which can be used with custom feature extractors (also supports FID and KID).
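For reference, once you have classifier probabilities from a feature extractor, the Inception Score itself is just the exponential of the average KL divergence between the per-sample predictions p(y|x) and the marginal p(y). A minimal NumPy sketch (an illustration, not torch-fidelity's implementation; the function name and the assumption that `probs` holds softmax outputs are mine):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from an (N, C) array of per-sample class probabilities.

    IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ), where p(y) is the marginal
    class distribution averaged over all N samples.
    """
    p_y = probs.mean(axis=0, keepdims=True)  # marginal p(y), shape (1, C)
    # Per-sample KL divergence KL(p(y|x) || p(y)); eps guards log(0).
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

Uniform predictions give a score of 1 (the minimum), while confident one-hot predictions spread evenly over C classes give a score of C (the maximum).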
What are some alternatives?
When comparing memorization and torch-fidelity you can also consider the following projects:
hydra-zen - Create powerful Hydra applications without the yaml files and boilerplate code.
evaluate - 🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
wandb - 🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
rexmex - A general purpose recommender metrics library for fair evaluation.