stable-diffusion-webui-vid2vid (discontinued): translates a video into AI-generated imagery, frame by frame; an extension script for AUTOMATIC1111/stable-diffusion-webui.
You could try Stable Diffusion. If you use the A1111 webui, the stable-diffusion-webui-vid2vid extension can convert each frame with the models and prompts of your choice. I think that if you could render depth or normal maps, you could also feed them as hints to ControlNets, which would improve your results. The perennial problem with converting video this way is temporal consistency: individual frames may look great, but there are often noticeable variations in details between them. Search r/StableDiffusion for vid2vid to see examples of what people actually achieve.
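If you'd rather script the per-frame conversion yourself than use the extension, here is a minimal sketch against the webui's built-in HTTP API (available when it is launched with `--api`; the `/sdapi/v1/img2img` endpoint and the `init_images`/`prompt`/`denoising_strength` fields are from that API). The helper names, prompt, and denoising value are illustrative assumptions, not part of the extension. Frames would first be extracted with something like `ffmpeg -i input.mp4 frames/%05d.png`.

```python
import base64
import json
from pathlib import Path
from urllib import request


def build_img2img_payload(frame_bytes: bytes, prompt: str,
                          denoising_strength: float = 0.4) -> dict:
    """Build a request body for A1111's /sdapi/v1/img2img endpoint.

    A low denoising strength keeps each output close to its source
    frame, which helps reduce frame-to-frame flicker.
    """
    return {
        "init_images": [base64.b64encode(frame_bytes).decode("ascii")],
        "prompt": prompt,
        "denoising_strength": denoising_strength,
    }


def convert_frames(frame_dir: str, out_dir: str, prompt: str,
                   url: str = "http://127.0.0.1:7860") -> None:
    """Run every extracted frame through img2img and save the results.

    Assumes the webui is running locally with the --api flag.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(frame_dir).glob("*.png")):
        payload = build_img2img_payload(frame.read_bytes(), prompt)
        req = request.Request(
            url + "/sdapi/v1/img2img",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            # The API returns generated images as base64 strings.
            images = json.load(resp)["images"]
        (out / frame.name).write_bytes(base64.b64decode(images[0]))
```

After converting, the frames can be reassembled with `ffmpeg -framerate 24 -i out/%05d.png output.mp4`. This naive loop has the same consistency caveat as the extension: each frame is denoised independently, so details will drift between frames unless you keep the denoising strength low or add ControlNet guidance.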