| | txt2mask | ECCV2022-RIFE |
|---|---|---|
| Mentions | 24 | 12 |
| Stars | 507 | 4,076 |
| Growth | - | 1.6% |
| Activity | 2.6 | 5.8 |
| Last commit | over 1 year ago | 2 months ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
txt2mask
-
Unprompted txt2mask
Honestly I'd suggest just downloading the script instead; it's much easier to use and gives you boxes for the prompts, rather than having to use all this stuff --> [txt2mask]background[/txt2mask] It's up to you of course, but for me the extension conflicts with my favorite extension Dynamic Prompts anyway, so it had to go :( The stand-alone script still works mint tho haha.
-
Another Instruct Pix2Pix on video experiment: "Make it a bronze sculpture"
Can pix2pix be used with the txt2mask extension to easily isolate the dancer?
-
Any random hair colour function for automatic1111?
That looks similar to Unprompted, the successor to txt2mask, which is probably much easier to use than the others (at least after a quick glance at the documentation).
-
In-painting Mask generation via API
automatic1111's webui now has a txt2mask script for inpainting. see here. Works great.
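For mask-based inpainting over the webui's API, the request is a JSON payload with the source image and mask base64-encoded. A minimal sketch of building that payload, assuming the commonly documented `/sdapi/v1/img2img` field names (verify against your local `/docs` page, since the API evolves between versions):

```python
import base64


def inpaint_payload(image_bytes, mask_bytes, prompt):
    """Build a JSON payload for the webui's img2img inpainting endpoint.

    Field names follow AUTOMATIC1111's /sdapi/v1/img2img schema as commonly
    documented; check them against your own webui's API docs before relying
    on them.
    """
    def b64(data):
        return base64.b64encode(data).decode("utf-8")

    return {
        "init_images": [b64(image_bytes)],  # source image, base64-encoded
        "mask": b64(mask_bytes),            # white = repaint, black = keep
        "prompt": prompt,
        "denoising_strength": 0.75,         # how far the masked area may drift
        "inpainting_fill": 1,               # 1 = start from the original pixels
    }


# The dict would then be POSTed, e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
#               json=inpaint_payload(img_bytes, mask_bytes, "a red fire hydrant"))
```

The endpoint URL, port, and default parameter values above are illustrative, not canonical.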
-
Will models have to be retrained for when this feature is eventually added into SD?
Separating colours into masks and then rendering one at a time would just take a plugin, not a model update. It would be like this plugin, only a lot easier, as the objects come pre-masked by colour instead of having to run recognition on them from a description: https://github.com/ThereforeGames/txt2mask
-
[Inpainting] [Q] Want to remove a person/ group of people from an image.
It's pretty awesome, but the developer recently said he's wrapping it into another extension of his and won't be updating the standalone script anymore. I'm sad about that because it works differently in the new extension and I find it a lot less convenient now, although it's possible it will be changed again, so I'm not giving up hope yet lol. Here's a link to the original version if you want to try it: https://github.com/ThereforeGames/txt2mask
-
InstructPix2Pix - Stable Diffusion Combined With GPT-3 to "make it so"
There's a version of this idea already in the Automatic1111 distro, a script called 'txt2mask' that's on GitHub here: https://github.com/ThereforeGames/txt2mask
-
Is it possible to replace object in image with object from another image
txt2mask
- Stable Diffusion links from around September 17, 2022 that I collected for further processing
-
Inpainting every frame using AE + SD
Perhaps you could use txt2mask (https://github.com/ThereforeGames/txt2mask) to automate it - i.e. you just need to have the text "fire hydrant" as your mask.
ECCV2022-RIFE
-
AI Frame interpolation Question
Check out RIFE.
-
Enhancing ControlNet-m2m Video Smoothness with Multi-Level Frame Interpolation
Using Flowframes with the RIFE model, run 2x interpolation on a folder of video frames.
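"2x interpolation" here means synthesising one new frame between every pair of originals. RIFE does this by estimating optical flow; the toy stand-in below just averages neighbours (plain numbers standing in for image frames) to show what the 2x step does to the frame timeline:

```python
def interpolate_2x(frames):
    """Double the effective frame rate by inserting a midpoint per pair.

    RIFE synthesises the in-between frame from estimated optical flow;
    this sketch simply averages, which is only meant to illustrate where
    the new frames land, not how they look.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a + b) / 2)   # the synthesised in-between frame
    out.append(frames[-1])        # keep the final original frame
    return out


print(interpolate_2x([0, 10, 20, 30]))  # [0, 5.0, 10, 15.0, 20, 25.0, 30]
```

Running the pass again ("4x") would halve the frame spacing once more, which is how tools like Flowframes offer higher multipliers.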
-
New NVIDIA Driver with RTX Video Super Resolution is Now Available!
Personally I have mine set to use RIFE AI via TensorRT for frame interpolation (x2) if the FPS is 30 or less.
-
I just added ControlNet BATCH support in automatic1111 webui and ControlNet extension, and here's the result. Read comment to support the Pull Requests so you can use this technique as soon as possible.
Oh now that I saw this comment, I started to investigate frame interpolation techniques using AI and found this: https://github.com/megvii-research/ECCV2022-RIFE
-
How can indie devs make 2d animations quickly, or streamline the process?
Yes, but you need to use a different AI first. There are multiple AIs, like RIFE (there are apps for it if you don't like code), that will smooth out your animation. Then you can use those frames with Novel AI to get a more organic look in the end.
-
ECCV2022-RIFE VS FluidFrames.RIFE - a user suggested alternative
2 projects | 4 Feb 2023
-
Inpainting every frame using AE + SD
To have more smooth effect you can reduce frame per second and add FILM or RIFE between frames.
-
I inserted myself into stable diffusion, not perfect but it kinda looks my face
Interpolated with https://github.com/megvii-research/ECCV2022-RIFE
-
Stable Diffusion Animation
Sure! This would be my approach (and tools) if I was smarter:
If you make the generations with some similarities and use the right interpolation, you don't need 1000 images like my video and can obtain a smooth movement.
First, generate images with some kind of visual anchor (background, an object). You can generate frames using the previous frame as the reference image, or use the same seed but a different prompt/parameters, or you can go wild using img2img/inpainting (btw I struggle to find an inpainting tool for Stable Diffusion: they seem to be just img2img with a mask, without context).
Then pass the generated images to one of the most recent interpolation algorithms, like this one: https://github.com/megvii-research/ECCV2022-RIFE, or the one used in the Replicate demo we are commenting on (someone posted this reference: https://github.com/google-research/frame-interpolation).
The first link lists some free and paid implementations and a Colab, so depending on how deep you want to go, you have a lot of choices.
In the end, I'd use some good app to stabilize the image if needed, to get a more "calm" look. I use Luma Fusion, but it's a paid app (cheap, one-time payment, for iOS). I'm sure there are a ton of open-source implementations.
It's an approach similar to the animation on replicate, but it allows a lot of fine-tuning and you can add new animation ideas/tools to the process.
Nothing revolutionary, but I hope it helps!
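The "same seed, different prompt" trick from the first step can be sketched as a batch of txt2img requests sharing one seed. The field names mirror AUTOMATIC1111's txt2img API payload as commonly documented, and the prompts, seed, and step count are arbitrary examples:

```python
def animation_batch(base_prompt, variations, seed=1234):
    """Build a batch of txt2img payloads that share one seed.

    Reusing the seed while nudging the prompt keeps consecutive generations
    similar enough for frame interpolation to bridge them. Payload keys
    follow the webui's /sdapi/v1/txt2img schema; check your local API docs.
    """
    return [
        {"prompt": f"{base_prompt}, {v}", "seed": seed, "steps": 20}
        for v in variations
    ]


# Three keyframes that drift in content but share seed and composition.
frames = animation_batch(
    "a lighthouse at dusk",
    ["calm sea", "rising waves", "storm clouds"],
)
# Each dict would be POSTed to the txt2img endpoint, and the resulting
# images fed to RIFE/FILM as described above.
```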
> You have generated some pretty cool designs.
Thanks! I put in a lot of work over the last few weeks. The project has a mission, and I wrote something, but it's not ready yet. I believe it will be with the launch of Dall-E 8 :-/
-
Help with interpolating "missing" frames from source video
You'd probably get way better results by using something like RIFE to do the interpolation and recreate missing frames, instead of minterpolate. I understand, though, that it's more effort, as you'll need to install and set up RIFE.
What are some alternatives?
sd-dynamic-prompts - A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-prompt-inpainting - This project helps you do prompt-based inpainting without having to paint the mask - using Stable Diffusion and Clipseg
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
gif2gif - Automatic1111 Animated Image (input/output) Extension
sd-webui-controlnet - WebUI extension for ControlNet
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
clipseg - This repository contains the code of the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts".
VideoRenderer - RTX HDR modded into MPC-VideoRenderer.
sdcompare - A-B voting tool for images
AnimeGANv2 - [Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime