h-former VS Efficient-VDVAE

Compare h-former vs Efficient-VDVAE and see what are their differences.

h-former

H-Former is a VAE for generating in-between fonts (or combining fonts). Its encoder uses a PointNet and a transformer to compute a code vector for a glyph. Its decoder is composed of multiple independent decoders that act on the code vector to reconstruct a point cloud representing the glyph. (by mzguntalan)
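The description above can be sketched at the shape level. This is a minimal illustration, not H-Former's actual implementation: the dimensions, the random projection weights, and the function names are all assumptions standing in for the real PointNet/transformer encoder and the bank of independent decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
N_POINTS = 128   # points sampled from a glyph outline
D_CODE = 64      # size of the glyph code vector
N_DECODERS = 4   # independent decoders, each emitting a slice of the cloud

def encode(points):
    """Stand-in for the PointNet + transformer encoder: a per-point
    feature map followed by a permutation-invariant (max) pooling."""
    w = rng.standard_normal((points.shape[1], D_CODE))
    features = np.tanh(points @ w)   # per-point features
    return features.max(axis=0)      # symmetric pooling -> code vector

def decode(code):
    """Stand-in for the bank of independent decoders: each maps the
    shared code vector to its own chunk of the output point cloud."""
    chunks = []
    for _ in range(N_DECODERS):
        w = rng.standard_normal((D_CODE, 2 * (N_POINTS // N_DECODERS)))
        chunks.append((code @ w).reshape(-1, 2))
    return np.concatenate(chunks, axis=0)

glyph = rng.standard_normal((N_POINTS, 2))   # toy 2-D point cloud
code = encode(glyph)
recon = decode(code)
print(code.shape, recon.shape)   # (64,) (128, 2)
```

The max pooling in the encoder is what makes the code vector invariant to the ordering of the points, which is the core PointNet idea the description refers to.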

Efficient-VDVAE

Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more" (by Rayhane-mamah)
              h-former             Efficient-VDVAE
Mentions      3                    8
Stars         5                    176
Growth        -                    -
Activity      0.0                  0.0
Last commit   almost 2 years ago   over 1 year ago
Language      Python               Python
License       -                    MIT License
Mentions indicate the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
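Since the text only says that recent commits are weighted more heavily, here is one plausible sketch of such a recency-weighted score under an assumed exponential decay; the actual weighting used by the site is not published, and the half-life parameter is an invention for illustration.

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: a commit contributes weight 1 when
    fresh and half as much every `half_life_days`. (Assumed formula; the
    site's real weighting is not published.)"""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Two commits made today outweigh several commits from ~10 months ago.
recent = activity_score([0, 0])
stale = activity_score([300, 310, 320, 330])
print(recent > stale)  # True
```

Any monotonically decaying weight would produce the same qualitative ranking; the point is only that commit recency, not raw commit count, drives the score.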

h-former

Posts with mentions or reviews of h-former. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-29.

Efficient-VDVAE

Posts with mentions or reviews of Efficient-VDVAE. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-13.

What are some alternatives?

When comparing h-former and Efficient-VDVAE you can also consider the following projects:

GradCache - Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint

disentangling-vae - Experiments for understanding disentanglement in VAE latent representations

pointnet - PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

transferlearning - Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, codes, datasets, applications, tutorials.

code-representations-ml-brain - [NeurIPS 2022] "Convergent Representations of Computer Programs in Human and Artificial Neural Networks" by Shashank Srikant*, Benjamin Lipkin*, Anna A. Ivanova, Evelina Fedorenko, Una-May O'Reilly.

HyperGAN - Composable GAN framework with api and user interface

thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries