https://github.com/mseitzer/pytorch-fid, for example. The code is quite clean and clear.
-
One irritating flaw of FID is that scores are heavily biased by the number of samples: the fewer samples you use, the larger the score. So to make comparisons fair, it's crucial to use the same number of samples. On standard benchmarks it's now common to compute Inception features for every real data point, but for only 50k samples from the generative model (off the top of my head, StyleGAN2-ADA does this; see Appendix A).
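You can see this bias with a toy experiment (my own sketch, not from the thread): fit 1-D Gaussians to two sample sets drawn from the *same* distribution, so the true Fréchet distance is zero, and watch the estimate grow as the sample count shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def fid_1d(x, y):
    # Fréchet distance between 1-D Gaussians fitted to x and y:
    # (mu1 - mu2)^2 + s1 + s2 - 2*sqrt(s1*s2)
    m1, m2 = x.mean(), y.mean()
    s1, s2 = x.var(), y.var()
    return (m1 - m2) ** 2 + s1 + s2 - 2 * np.sqrt(s1 * s2)

def mean_fid(n, trials=200):
    # both sample sets come from the same N(0, 1), so the true FID
    # is 0 -- anything above 0 is pure estimator bias
    return np.mean([fid_1d(rng.normal(size=n), rng.normal(size=n))
                    for _ in range(trials)])

small, large = mean_fid(100), mean_fid(10_000)
print(small, large)  # the small-sample estimate is markedly larger
```

Real FID uses 2048-dimensional Inception features rather than 1-D values, which makes the effect worse, since estimating a 2048×2048 covariance from few samples is even noisier.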
-
In our work on generating human action sequences (https://github.com/skelemoa/mugl), we found FID to be poorly correlated with generation quality, yet the community persists with the measure for some reason. We found variants of Maximum Mean Discrepancy (MMD) to work much better. This is for sequential data, though.
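For reference, a minimal sketch of MMD (my own illustration, not the MuGL variant): the unbiased squared-MMD estimator with an RBF kernel compares average within-set and cross-set kernel similarities, and is near zero when the two sample sets come from the same distribution.

```python
import numpy as np

def mmd2_rbf(x, y, gamma=1.0):
    """Unbiased squared MMD with RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # drop diagonal (self-similarity) terms for the unbiased estimate
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
diff = mmd2_rbf(rng.normal(size=(500, 2)), rng.normal(2.0, 1.0, size=(500, 2)))
print(same, diff)  # near zero for matching distributions, clearly positive otherwise
```

Unlike FID, this makes no Gaussian assumption about the feature distribution; for sequences you would apply it to whatever feature embedding the task provides.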