| | artistic-videos | dream-textures |
|---|---|---|
| Mentions | 6 | 72 |
| Stars | 1,746 | 7,620 |
| Growth | - | - |
| Activity | 0.0 | 5.8 |
| Last commit | about 6 years ago | 24 days ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
artistic-videos
- After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
It was figured out back in the GAN days, then applied to Disco Diffusion, and finally to Stable WarpFusion, although that one is locked behind a Patreon paywall. There are also extensions for the A1111 web UI, like Temporal Kit, but that one is mostly based on EbSynth and doesn't do the true temporal warping I have in mind with these other links.
- Bi-directional img2img: is this possible to implement?
What you are thinking of is called "temporal coherence", and it was used all the way back in 2016 to create videos with neural style transfer. Example: https://github.com/manuelruder/artistic-videos
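The temporal-coherence trick described here can be sketched in a few lines: warp the previously stylized frame along a dense optical-flow field so it can initialize (or be blended into) the stylization of the next frame. Below is a minimal NumPy sketch of the warping step only, not the artistic-videos code; the backward flow itself would come from an estimator such as Farneback's method or FlowNet2.

```python
import numpy as np

def warp_with_flow(prev_stylized, backward_flow):
    """Warp the previously stylized frame along a dense backward flow field.

    backward_flow[y, x] = (dx, dy) points from pixel (x, y) in the *current*
    frame back to its location in the *previous* frame. Nearest-neighbour
    sampling keeps the sketch dependency-free; real pipelines interpolate
    bilinearly and mask occluded regions before blending.
    """
    h, w = backward_flow.shape[:2]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # For each current pixel, look up where it came from in the previous frame.
    src_x = np.clip(np.round(grid_x + backward_flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(grid_y + backward_flow[..., 1]).astype(int), 0, h - 1)
    return prev_stylized[src_y, src_x]
```

The warped result is then typically used as the initialization of the next frame's optimization, penalizing deviation from it in non-occluded regions, which is what keeps the stylization from flickering frame to frame.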
- [D] What are some cool projects for generating art?
- old school work
Mostly using this repo: https://github.com/manuelruder/artistic-videos
- Developing an After Effects plugin for deep dreaming. Here are some first renders. It took 20 minutes to render 790 frames (each time), but I couldn't find any way to control the optical flow (check comments).
You should definitely have the option to toggle optical flow on/off in your plugin if this is what it looks like with it off. I've come across it before while using this old beauty, but I'm guessing that is the old and messy version you mentioned further up the thread.
- Can someone explain how? I know it's style transfer with optical flow, but I don't know about the tools used to create something like this. 🤯
There are a lot of works in video style transfer (e.g. https://github.com/manuelruder/artistic-videos, https://github.com/manuelruder/fast-artistic-videos, https://github.com/sunshineatnoon/LinearStyleTransfer), but with any of these you won't achieve such quality out of the box. The video above is a commercial product with a lot of tricks hidden inside it, which only its creators are aware of.
dream-textures
- Donut done with Artificial Intelligence and Blender
- Tell HN: The next generation of videogames will be great with midjourney
- After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
I'm a long-time advanced AE user and would gladly give feedback on how I envision a nice workflow, if you want. I recently got into Dream Textures for Blender, which I think is a great reference for the direction things could be heading. It's still not viable for consistent video, but I love how they expose multiple ControlNets and their weights as animatable, for example. I also suggested they expose animatable prompt weights, which the author now plans for a future release. I see you have such things planned for this plugin as well, so a big thumbs up!
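"Animatable weights" just means a weight that is keyframed over the timeline and evaluated per frame. A minimal sketch of that evaluation, assuming simple linear interpolation (an illustrative helper, not dream-textures' actual animation API):

```python
def weight_at_frame(keyframes, frame):
    """Linearly interpolate an animated weight between keyframes.

    `keyframes` is a sorted list of (frame, value) pairs, e.g. keyframes
    for a ControlNet or prompt weight. Values are clamped to the first and
    last keyframe outside the animated range.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # position within this segment
            return v0 + t * (v1 - v0)
```

For example, `weight_at_frame([(0, 0.0), (10, 1.0)], 5)` ramps a ControlNet's influence halfway up by frame 5; a real host application would evaluate this once per rendered frame before invoking the diffusion step.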
- Resources for artists interested in using Stable Diffusion as a tool?
Dream Textures (SD for Blender) - https://github.com/carson-katri/dream-textures
- Using AI for 3d Game art
- ControlNet fully integrated with Blender using nodes!
Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection
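Conceptually, texture projection assigns every vertex a UV coordinate from its position in the camera's screen space, so the generated image lands on the mesh exactly as seen from that view; baking then transfers the result onto the mesh's original UV layout. A rough NumPy sketch of the projection step (hypothetical names, not the add-on's code):

```python
import numpy as np

def project_uvs(vertices_world, view_proj):
    """Compute camera-projection UVs for mesh vertices.

    Each world-space vertex is transformed by a combined view-projection
    matrix; the resulting normalized device coordinates are remapped from
    [-1, 1] to [0, 1] so they can index the generated texture.
    """
    # Homogeneous coordinates: append w = 1 to every vertex.
    v = np.hstack([vertices_world, np.ones((len(vertices_world), 1))])
    clip = v @ view_proj.T
    ndc = clip[:, :2] / clip[:, 3:4]   # perspective divide
    return (ndc + 1.0) * 0.5           # NDC -> UV in [0, 1]
```

Vertices facing away from the camera or outside the frustum get no valid projection, which is why the bake-to-original-UVs step matters: it consolidates whatever the projection covered into a texture that works from every angle.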
- Using DALL-E 2 to create brick and water textures in Unity.
- 3D animation attempt using Sketchup screenshots and ControlNet
- Blender 3.5
- Master AI Texture Projection for Blender 3
Dream Textures latest release: https://github.com/carson-katri/dream-textures/releases
What are some alternatives?
StyleGAN-nada
stable-diffusion-webui - Stable Diffusion web UI
neural-style-pt - PyTorch implementation of neural style transfer algorithm
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
flownet2-pytorch - PyTorch implementation of FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
TemporalKit - An all in one solution for adding Temporal Stability to a Stable Diffusion Render via an automatic1111 extension
stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run the Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
DeepDreamAnimV2 - Code is still under development
DeepBump - Normal & height maps generation from single pictures
After-Diffusion - A CEP Extension for Adobe After Effects that allows for seamless integration of the Stable Diffusion Web-UI.
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]