| | stable-diffusion-webui-rembg | ECCV2022-RIFE |
|---|---|---|
| Mentions | 16 | 12 |
| Stars | 1,075 | 4,090 |
| Growth | - | 2.0% |
| Activity | 3.7 | 5.8 |
| Latest commit | about 1 month ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
stable-diffusion-webui-rembg
- Here's a resource I found very useful after generating characters and objects that I wanted to isolate as transparent images.
- My First Share - Turning my Students into Pixar Versions of Themselves :)
- Comparing one-click solutions for removing backgrounds
  Extension REMBG for Automatic1111
- Best way to mask images automatically?
  This tool will mask the output image of your generations: https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg
- HELP! What options/extensions are available for auto masking in A1111?
- Create stickers of your dream with this LoRA
  Remove the background with your favourite image editing software (I used Photoshop) or rembg, for example.
- Darkest Dungeon v2 (Lora)
  With the following prompt, we get a good image of Pikachu in the DD style, which can then have its background made transparent with https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg
- Inpaint from a Sample Image to Fill the Mask
- Full body LORA?
  Also, this auto-removes backgrounds.
- Tutorial: Creating a Consistent Character as a Textual Inversion Embedding
  So, the only difference from the listed method is an extra preprocessing step. Before captioning images, I used this extension to batch-remove backgrounds. Then I took those PNGs and used Photoshop to batch-save them as JPEGs, resulting in cutout images of the subject on white backgrounds. I then proceeded as listed in that comment.
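The batch workflow in that last mention (transparent PNG cutouts from rembg, then flattened to white-background JPEGs) boils down to standard alpha compositing. Here is a minimal pure-Python sketch of the per-pixel math; a real batch script would use Pillow or rembg's Python API instead, and the function name is purely illustrative:

```python
# Sketch of the "flatten onto white" step: blending an RGBA pixel
# (as produced by a background remover like rembg) onto opaque white,
# which is what saving a cutout as a white-background JPEG amounts to.

def flatten_onto_white(rgba_pixel):
    """Blend one (r, g, b, a) pixel onto a white background."""
    r, g, b, a = rgba_pixel
    alpha = a / 255.0
    # out = foreground * alpha + white * (1 - alpha), per channel
    blend = lambda c: round(c * alpha + 255 * (1 - alpha))
    return (blend(r), blend(g), blend(b))

# A fully transparent pixel becomes pure white...
print(flatten_onto_white((30, 60, 90, 0)))    # -> (255, 255, 255)
# ...while a fully opaque pixel keeps its color.
print(flatten_onto_white((30, 60, 90, 255)))  # -> (30, 60, 90)
```

With Pillow, the same effect comes from pasting the RGBA image onto a white canvas using its own alpha channel as the mask before saving as JPEG.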
ECCV2022-RIFE
- AI Frame interpolation Question
  Check out RIFE.
- Enhancing ControlNet-m2m Video Smoothness with Multi-Level Frame Interpolation
  Using Flowframes with the RIFE model, run 2x interpolation on a folder of video frames.
- New NVIDIA Driver with RTX Video Super Resolution is Now Available!
  Personally, I have mine set to use RIFE AI via TensorRT for frame interpolation (x2) if the FPS is 30 or less.
- I just added ControlNet BATCH support in automatic1111 webui and ControlNet extension, and here's the result. Read comment to support the Pull Requests so you can use this technique as soon as possible.
  Oh, now that I saw this comment, I started to investigate frame interpolation techniques using AI and found this: https://github.com/megvii-research/ECCV2022-RIFE
- How can indie devs make 2d animations quickly, or streamline the process?
  Yes, but you need to use a different AI first. There are multiple AIs like RIFE (there are apps for it if you don't like code) that will smooth out your animation. Then you can use those frames with NovelAI to get a more organic look in the end.
- ECCV2022-RIFE VS FluidFrames.RIFE - a user suggested alternative
  2 projects | 4 Feb 2023
- Inpainting every frame using AE + SD
  To get a smoother effect, you can reduce the frames per second and add FILM or RIFE between frames.
- I inserted myself into stable diffusion, not perfect but it kinda looks like my face
  Interpolated with https://github.com/megvii-research/ECCV2022-RIFE
- Stable Diffusion Animation
  Sure! This would be my approach (and tools) if I was smarter:
  If you make the generations with some similarities and use the right interpolation, you don't need 1,000 images like my video and can obtain smooth movement.
  First, generate images with some kind of visual anchor (a background, an object). You can generate frames using the previous frame as a reference image, use the same seed with a different prompt/parameters, or go wild with img2img/inpainting (by the way, I struggle to find a true inpainting tool for Stable Diffusion: they all seem to be just img2img with a mask, without context).
  Then pass the generated images to one of the most recent interpolation algorithms, like https://github.com/megvii-research/ECCV2022-RIFE or the one used in the Replicate demo we are commenting on (someone posted this reference: https://github.com/google-research/frame-interpolation).
  The first link lists some free and paid implementations and a Colab, so depending on how deep you want to go, you have a lot of choices.
  In the end, I'd use a good app to stabilize the footage if needed, to get a "calmer" look. I use LumaFusion, but it's a paid app (cheap, one-time payment, for iOS); I'm sure there are plenty of open-source alternatives.
  It's an approach similar to the animation on Replicate, but it allows a lot of fine-tuning, and you can add new animation ideas/tools to the process.
  Nothing revolutionary, but I hope it helps!
  > You have generated some pretty cool designs.
  Thanks! I put a lot of work in over the last few weeks. The project has a mission; I wrote something, but it's not ready yet. I believe it will be with the launch of Dall-E 8 :-/
- Help with interpolating "missing" frames from source video
  You'd probably get far better results by using something like RIFE to interpolate and recreate the missing frames instead of minterpolate. I understand, though, that it's more effort, as you'll need to install and set up RIFE.
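Several of the mentions above run RIFE at 2x, i.e., synthesizing one new frame between each consecutive pair, so n frames become 2n - 1. The toy sketch below shows only that frame-count arithmetic, using a naive average blend where RIFE would estimate intermediate optical flow with a neural network; the frames here are just lists of numbers, and all names are illustrative:

```python
def interpolate_2x(frames):
    """Insert one midpoint frame between each consecutive pair.

    The midpoint is a naive per-value average; RIFE replaces this
    blend with a learned flow-based estimate, but the 2x frame-count
    bookkeeping is the same.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])  # synthesized frame
    out.append(frames[-1])  # last original frame has no successor
    return out

clip = [[0, 0], [10, 20], [20, 40]]  # three tiny "frames"
print(interpolate_2x(clip))
# -> [[0, 0], [5.0, 10.0], [10, 20], [15.0, 30.0], [20, 40]]
```

A simple average like this produces ghosting on real video, which is exactly the artifact flow-based interpolators such as RIFE and FILM are designed to avoid.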
What are some alternatives?
cloth-segmentation - This repo contains code and a pre-trained model for clothes segmentation.
stable-diffusion-webui - Stable Diffusion web UI
rembg - Rembg is a tool to remove image backgrounds
frame-interpolation - FILM: Frame Interpolation for Large Motion, In ECCV 2022.
sd-webui-controlnet - WebUI extension for ControlNet
arXiv2021-RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation [Moved to: https://github.com/hzwer/ECCV2022-RIFE]
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
VideoRenderer - RTX HDR modded into MPC-VideoRenderer.
canvas-zoom - zoom and pan functionality
txt2mask - Automatically create masks for Stable Diffusion inpainting using natural language.