StyleGAN-nada vs artistic-videos

| | StyleGAN-nada | artistic-videos |
|---|---|---|
| Mentions | 14 | 6 |
| Stars | 1,141 | 1,746 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 1 year ago | about 6 years ago |
| Language | Python | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
StyleGAN-nada
-
Artists Tomorrow
Here's a paper that added the ability to guide outputs with text, a full year before Stable Diffusion was published.
-
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
-
[R][P] Gradio Web demo for StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators (SIGGRAPH 2022)
project page: https://stylegan-nada.github.io/
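For a sense of what a demo like this involves, here's a minimal Gradio sketch in the same shape. The `stylize` function is a placeholder, not the actual code of the Space; the real demo would invert the input face into StyleGAN's latent space and run it through a generator fine-tuned with the chosen text prompt.

```python
# Minimal sketch of a Gradio image demo; `stylize` is a placeholder.
import gradio as gr

def stylize(image, style_prompt):
    # The real demo would run GAN inversion plus a NADA-tuned generator here.
    return image  # placeholder: echo the input

demo = gr.Interface(
    fn=stylize,
    inputs=[gr.Image(type="pil"), gr.Textbox(label="Target style, e.g. 'sketch'")],
    outputs=gr.Image(type="pil"),
    title="StyleGAN-NADA demo (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```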
-
The Danny AI of your dreams
Sooo I retrained the FFHQ model to be Danny using StyleGAN-NADA via this Colab notebook.
-
I made a VFX face filter thing that might be of interest to you guys (it runs in the browser without sending anything to a server and is quite fast)
Haha, thanks for trying it out :) It was actually really challenging to get it working (especially all in the browser, without processing on a server). A lot of help came from StyleGAN-NADA (https://github.com/rinongal/StyleGAN-nada) and a custom lightweight model, basically distilled from pairs generated with it and FFHQ.
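The distillation described here can be sketched roughly as follows: stylize faces with a StyleGAN-NADA teacher and train a small image-to-image student on the resulting pairs so it runs fast in a browser. `teacher_generate_pair` is hypothetical, standing in for pushing one latent through both the original FFHQ generator and the NADA-tuned one; the real pipeline and student architecture will differ.

```python
# Hedged sketch of pair-based distillation from a StyleGAN-NADA teacher.
import torch
import torch.nn as nn

# Deliberately tiny stand-in for a lightweight image-to-image network.
student = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(10_000):  # illustrative step count
    with torch.no_grad():
        # Hypothetical helper: same latent through the original FFHQ
        # generator and the NADA-tuned generator yields an aligned
        # (input face, stylized face) training pair.
        face, stylized = teacher_generate_pair()
    loss = nn.functional.l1_loss(student(face), stylized)
    opt.zero_grad()
    loss.backward()
    opt.step()
```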
-
[D] StyleGAN3: Overview, Tutorial, and Pre-Trained Model
As for usage on non-face images: most of NVIDIA's pre-trained models were face-based (animal, human, and painted faces), which is why we released our WikiArt model, so the community would have something that could generate a greater variety of images. However, these models are still constrained to the dataset they were trained on, so without some tricks you can't generate "novel" images (like mashups of different objects).
-
[D] What are some cool projects for generating art?
I think the directional loss concepts in https://github.com/rinongal/StyleGAN-nada have real potential for artistic work, as they can go beyond the filter and paint effects that traditional style transfer handles well, while maintaining recognisable equivalence between the resulting images.
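For reference, the directional loss from the StyleGAN-NADA paper compares the CLIP-space direction between the frozen and fine-tuned generators' outputs against the direction between a source and a target text prompt. A minimal sketch, assuming the OpenAI `clip` package and CLIP-preprocessed image batches (names here are illustrative, not the repo's actual API):

```python
# Minimal sketch of the directional CLIP loss from StyleGAN-NADA.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _preprocess = clip.load("ViT-B/32", device=device)

def directional_clip_loss(frozen_imgs, trained_imgs, source_text, target_text):
    """1 - cos(img_direction, text_direction): the change the fine-tuned
    generator makes to an image should point the same way in CLIP space
    as the change from the source prompt (e.g. "photo") to the target
    prompt (e.g. "sketch")."""
    with torch.no_grad():
        tokens = clip.tokenize([source_text, target_text]).to(device)
        text_feats = model.encode_text(tokens).float()
        text_dir = (text_feats[1] - text_feats[0]).unsqueeze(0)  # (1, 512)
    img_src = model.encode_image(frozen_imgs).float()   # frozen generator output
    img_tgt = model.encode_image(trained_imgs).float()  # trainable generator output
    img_dir = img_tgt - img_src                         # (B, 512)
    return (1 - F.cosine_similarity(img_dir, text_dir, dim=-1)).mean()
```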
-
[R] NVIDIA and Tel Aviv Researchers Propose ‘StyleGAN-NADA’, A Text-Driven Method That Converts a Pre-Trained AI Generator to New Domains Using Only a Textual Prompt and No Training Data
- Argentine colonial architecture (generated by AI)
- StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
artistic-videos
-
After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
It was figured out back in the GAN days, then applied to Disco Diffusion, and finally to Stable Warp Diffusion, although that one is locked behind a Patreon paywall. There are also extensions for the A1111 web UI, like TemporalKit, but it's mostly based on EbSynth and doesn't do the true temporal warping I have in mind with these other links.
-
bi-directional img2img, is this possible to implement?
What you are thinking of is called "temporal coherence", and it was used all the way back in 2016 to create videos with neural style transfer. Example: https://github.com/manuelruder/artistic-videos
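The core of that temporal coherence idea, as in Ruder et al.'s work behind artistic-videos, is to warp the previous stylized frame along the optical flow and penalize the current frame for diverging from it wherever the flow is reliable. A rough PyTorch sketch, assuming flow (in pixels) and an occlusion mask are computed elsewhere:

```python
# Rough sketch of a short-term temporal consistency loss for video style transfer.
import torch
import torch.nn.functional as F

def warp(prev_frame, flow):
    """Backward-warp prev_frame (B,C,H,W) along flow (B,2,H,W)."""
    b, _, h, w = prev_frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    coords = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (B,2,H,W)
    # grid_sample expects coordinates normalized to [-1, 1]
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                       # (B,H,W,2)
    return F.grid_sample(prev_frame, grid, align_corners=True)

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Penalize deviation from the flow-warped previous stylized frame,
    except where occlusion_mask is 0 (occluded / unreliable flow)."""
    return (occlusion_mask * (stylized_t - warp(stylized_prev, flow)) ** 2).mean()
```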
-
[D] What are some cool projects for generating art?
-
old school work
Mostly using this repo: https://github.com/manuelruder/artistic-videos
-
Developing an After Effects plugin for deep dreaming. Here are some first renders. It took 20 minutes to render 790 frames (each time), but I didn't find any way to control the optical flow (check comments).
You should definitely have the option to toggle optical flow on/off in your plugin if this is what it looks like with it off. I've come across it before while using this old beauty, but I'm guessing that is the old and messy version you mentioned further up the thread.
-
Can someone explain how? I know it's style transfer with optical flow, but I don't know about the tools to create something like this. 🤯
There are a lot of works in video style transfer (ex: https://github.com/manuelruder/artistic-videos, https://github.com/manuelruder/fast-artistic-videos, https://github.com/sunshineatnoon/LinearStyleTransfer), but with any of these you won't achieve such quality out of the box. The video above is a commercial product with a lot of tricks hidden inside it, which only its creators are aware of.
What are some alternatives?
awesome-pretrained-stylegan3 - A collection of pretrained models for StyleGAN3
neural-style-pt - PyTorch implementation of neural style transfer algorithm
flownet2-pytorch - Pytorch implementation of FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
stylegan3 - Official PyTorch implementation of StyleGAN3
TemporalKit - An all in one solution for adding Temporal Stability to a Stable Diffusion Render via an automatic1111 extension
stylegan3-fun - Modifications of the official PyTorch implementation of StyleGAN3. Let's easily generate images and videos with StyleGAN2/2-ADA/3!
DeepDreamAnimV2 - Code is still under development
deep-photo-styletransfer - Code and data for paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511
After-Diffusion - A CEP Extension for Adobe After Effects that allows for seamless integration of the Stable Diffusion Web-UI.
prompt-to-prompt - Prompt-to-Prompt image editing with cross-attention control
dream-textures - Stable Diffusion built-in to Blender