stylegan3-fun
Modifications of the official PyTorch implementation of StyleGAN3. Let's easily generate images and videos with StyleGAN2/2-ADA/3! (by PDillis)
| | StyleGAN-nada | stylegan3-fun |
|---|---|---|
| Mentions | 14 | 5 |
| Stars | 1,141 | 225 |
| Growth | - | - |
| Activity | 0.0 | 4.5 |
| Latest commit | over 1 year ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
StyleGAN-nada
Posts with mentions or reviews of StyleGAN-nada. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-19.
-
Artists Tomorrow
Here's a paper about adding the ability to guide outputs with text a full year before Stable Diffusion was published.
-
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
-
[R][P] Gradio Web demo for StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators (SIGGRAPH 2022)
project page: https://stylegan-nada.github.io/
-
The Danny AI of your dreams
Sooo I retrained the FFHQ model to be Danny using StyleGAN-NADA via this Colab notebook.
-
I made a VFX face filter thing that might be of interest to you guys (it runs in the browser without sending anything to a server and is quite fast)
Haha thanks for trying it out :) It was actually really challenging to get it working (especially all in the browser without processing on a server). A lot of help came from stylegan-nada https://github.com/rinongal/StyleGAN-nada (and a custom lightweight model basically distilling pairs from it and ffhq).
-
[D] StyleGAN3: Overview, Tutorial, and Pre-Trained Model
As for usage on non-face images: most of NVIDIA's pre-trained models were face-based (animals, humans, and paintings), which was the aim of releasing our WikiArt model, so the community would have something that could generate a greater variety of images. However, these models are still constrained to the dataset they were trained on, so without some tricks you can't generate "novel" images (like mashups of different objects).
-
[D] What are some cool projects for generating art?
I think the directional loss concept in https://github.com/rinongal/StyleGAN-nada has real potential for artistic work, as it can go beyond the filter and paint effects that traditional style transfer applies well, while maintaining recognisable equivalence between the resulting images.
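The directional loss mentioned above can be sketched in a few lines. This is a hypothetical numpy illustration of the idea, not StyleGAN-NADA's actual implementation: it aligns the direction an image embedding moved (source image to generated image) with the direction between two text-prompt embeddings. In practice all four embeddings would come from a CLIP encoder; here they are toy vectors.

```python
import numpy as np

def _unit(v):
    # Scale a vector to unit length so only its direction matters.
    return v / np.linalg.norm(v)

def directional_loss(e_img_src, e_img_gen, e_txt_src, e_txt_tgt):
    # Direction the image embedding moved: source image -> generated image.
    d_img = _unit(e_img_gen - e_img_src)
    # Direction the text embedding moved: source prompt -> target prompt.
    d_txt = _unit(e_txt_tgt - e_txt_src)
    # Loss is 1 minus cosine similarity: zero when the shifts are aligned.
    return 1.0 - float(np.dot(d_img, d_txt))

# Toy embeddings (placeholders for CLIP encoder outputs):
e_txt_src = np.array([1.0, 0.0])                  # e.g. "photo"
e_txt_tgt = np.array([1.0, 1.0])                  # e.g. "sketch"
e_img_src = np.array([0.0, 1.0])
e_img_gen = e_img_src + (e_txt_tgt - e_txt_src)   # image moved the same way
print(directional_loss(e_img_src, e_img_gen, e_txt_src, e_txt_tgt))  # prints 0.0
```

Minimizing this loss pushes the generator's output to change in embedding space the way the text changed, rather than simply matching the target prompt, which is what preserves the recognisable equivalence between source and result.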
-
[R] NVIDIA and Tel Aviv Researchers Propose ‘StyleGAN-NADA’, A Text-Driven Method That Converts a Pre-Trained AI Generator to New Domains Using Only a Textual Prompt and No Training Data
- Argentine colonial architecture (AI-generated)
- StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
stylegan3-fun
Posts with mentions or reviews of stylegan3-fun. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-01.
-
Experimenting with an omnipotent paparazzi AI - VJ pack just released
I rely on stylegan3-fun since it's a bit more mature, and its latent-walk video render tool is much better than the one in the StyleGAN2 framework. But I train using the StyleGAN2 config since it's more forgiving with tiny datasets in the 200-1000 image range.
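A latent-walk video like the one this render tool produces boils down to interpolating between latent keyframes and feeding each intermediate vector to the generator, one per frame. Here is a minimal numpy sketch of that idea using spherical interpolation; the function names are illustrative, not stylegan3-fun's actual API.

```python
import numpy as np

def slerp(z0, z1, t):
    # Spherical interpolation keeps intermediate latents at a sensible
    # norm, which tends to look smoother than plain lerp for GAN latents.
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_walk(keyframes, steps_per_segment=60):
    # Interpolate through a list of latent keyframes; each returned vector
    # would be fed to the generator to render one video frame.
    frames = []
    for z0, z1 in zip(keyframes, keyframes[1:]):
        for i in range(steps_per_segment):
            frames.append(slerp(z0, z1, i / steps_per_segment))
    frames.append(keyframes[-1])
    return frames

rng = np.random.default_rng(seed=42)
keys = [rng.standard_normal(512) for _ in range(3)]  # 3 keyframes, z-dim 512
walk = latent_walk(keys, steps_per_segment=30)
print(len(walk))  # 2 segments * 30 steps + final frame = 61
```

At 30 steps per segment and 30 fps, each segment lasts one second; the real render tool adds conveniences like seed-based keyframes and video encoding on top of this core loop.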
- stylegan3-t-256px trained on anime faces by bob80333
-
[D] StyleGAN3: Overview, Tutorial, and Pre-Trained Model
I'm not OP, but if you want to play around with SG3, I'd recommend using this version of it.
-
generative fluid abstract art interpolation paired with a lecture of Alan Watts and music I like
You can stabilize it with the modified code in the stylegan3-fun repo, to be honest; then it looks just the way it used to on stylegan2-ada.
What are some alternatives?
When comparing StyleGAN-nada and stylegan3-fun you can also consider the following projects:
awesome-pretrained-stylegan3 - A collection of pretrained models for StyleGAN3
artistic-videos - Torch implementation for the paper "Artistic style transfer for videos"
stylegan3 - Official PyTorch implementation of StyleGAN3
neural-style-pt - PyTorch implementation of neural style transfer algorithm
deep-photo-styletransfer - Code and data for paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511
prompt-to-prompt
GANce - Maps music and video into the latent space of StyleGAN networks.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement