| | RobustVideoMatting | ControlNet-v1-1-nightly |
|---|---|---|
| Mentions | 16 | 31 |
| Stars | 8,189 | 4,330 |
| Growth | - | - |
| Activity | 0.0 | 8.4 |
| Last commit | about 1 month ago | 6 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RobustVideoMatting
- lineart_coarse + openpose, batch img2img
-
Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Robust Video Matting/Background Remover (remove backgrounds from images and videos, useful for compositing): https://github.com/PeterL1n/RobustVideoMatting (RVM - removes backgrounds from videos) and https://github.com/nadermx/backgroundremover (BackgroundRemover - works well on single images)
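The alpha matte these tools produce feeds straight into compositing. A minimal NumPy sketch of that step (the `composite` helper is hypothetical, not part of RVM; RVM itself outputs a foreground and an alpha channel per frame):

```python
import numpy as np

def composite(fgr, alpha, bgr):
    """Composite a matted foreground over a new background.

    fgr, bgr: float arrays of shape (H, W, 3) in [0, 1]
    alpha:    float array of shape (H, W, 1) in [0, 1], as produced
              by a matting model such as RVM
    """
    return fgr * alpha + bgr * (1.0 - alpha)

# Toy example: a 2x2 "frame" with opaque, transparent, and half-blended pixels.
fgr = np.ones((2, 2, 3)) * 0.8          # light-grey foreground
bgr = np.zeros((2, 2, 3))               # black replacement background
alpha = np.array([[[1.0], [0.0]],
                  [[0.5], [1.0]]])
out = composite(fgr, alpha, bgr)
print(out[0, 0])   # fully foreground -> [0.8 0.8 0.8]
print(out[0, 1])   # fully background -> [0. 0. 0.]
```

The same per-pixel blend applies whether the new background is a solid color, a still, or another video frame.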
-
Adobe After Effects VS Runway AI 👀
It looks like Runway is packaging a bunch of AI tools, like Stable Diffusion and other open-source tools, into a paid package. The matting tool it uses looks like https://github.com/PeterL1n/RobustVideoMatting, which can be run on your own computer for free if you can figure out the geeky side of installing this stuff. I've tried it out, and it sometimes works well, but most of the time the results aren't as good as the examples on their GitHub. Still a good tool to have in the toolbox, though.
-
Rotoscoping a video by comparing images
Or this separate application looks promising, if you can work out Google Colab (I couldn't, unfortunately): https://github.com/PeterL1n/BackgroundMattingV2 https://github.com/PeterL1n/RobustVideoMatting
-
CatFileCreator in Nuke
I have done a bit of coding, and I will use pretrained models only, looking at things like depth and segmentation. Take this as an example; I am using it in Colab now, but it's so cumbersome: https://github.com/PeterL1n/RobustVideoMatting
-
[Q] Video Editing using AI
I do not know much about machine learning, and I am not sure if I can ask questions here. But if so, I need help choosing the best libraries for video editing tasks like background removal and similar. One that I found is RVM: https://github.com/PeterL1n/RobustVideoMatting (which currently seems like the best choice)
- Is this FOSS ML software safe?
- [D] Is this ML project safe?
-
Trying to train videomatting model
First of all, has anybody retrained the Robust Video Matting model on their own data? I am trying to, but with all the models I end up with bad-quality results like the ones attached to the post. My data is objects rotating 360° on white backgrounds, so the task seems pretty simple: the model just has to remove the white background and keep the colorized object. I have masks on every 10th frame of my videos; the masks are 0 for background and 255 for foreground. I have tried the Robust Video Matting model, MODNet, PaddleSeg, and several segmentation models, and every one of them failed to produce consistent results on this data. What should I do in this case?
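A setup like the one described above (8-bit masks on every 10th frame) first needs those masks turned into float alpha targets and paired with the frames that actually have labels. A minimal sketch of that preprocessing, with hypothetical helper names (the actual training loss and frame pairing depend on the matting framework used):

```python
import numpy as np

def masks_to_alpha(mask_u8):
    """Convert an 8-bit mask (0 = background, 255 = foreground)
    to a float alpha target in [0, 1], as matting losses expect."""
    return mask_u8.astype(np.float32) / 255.0

def labelled_frames(num_frames, stride=10):
    """Indices of the frames that carry a ground-truth mask
    when only every `stride`-th frame is annotated."""
    return list(range(0, num_frames, stride))

# A 100-frame clip annotated every 10th frame -> 10 supervised frames.
idx = labelled_frames(100)
print(idx[:3])          # [0, 10, 20]

mask = np.array([[0, 255], [128, 255]], dtype=np.uint8)
alpha = masks_to_alpha(mask)
print(alpha[0, 1])      # 1.0
```

With sparse labels like this, a common approach is to compute the supervised loss only on the annotated frames while still feeding the full video through the recurrent model, so its temporal memory stays intact.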
-
Remove Background NO GREENSCREEN?
I have found a GitHub project like this, but it is tedious to use: https://github.com/PeterL1n/RobustVideoMatting
ControlNet-v1-1-nightly
-
Making a ControlNet inpaint for sdxl
1- https://github.com/lllyasviel/ControlNet-v1-1-nightly/issues/89
-
AI Yearbook Photos Workflow with Stable Diffusion 1.5 Automatic1111
Install ControlNet and download the models you want to use (canny/depth/openpose should be enough for this): https://github.com/lllyasviel/ControlNet-v1-1-nightly
-
can you downgrade Controlnet?
You can find the previous version on their Git. If it's a version prior to v1.1, you probably have to search for the right branch in the new Git repository and download that.
- Could you help me with this problem?
- Controlnet v1.1 Lineart
- Request for current ControlNet information
-
AI conceptual massing iterations within a context image with input control sketch
Stable Diffusion: https://huggingface.co/runwayml/stable-diffusion-v1-5 with ControlNet extension: https://github.com/lllyasviel/ControlNet-v1-1-nightly running on Automatic1111 web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Inpaint Anything (uses "Segment Anything") - Cool A1111 Extension not (yet) on the in App list
-
Architectural design using Stable Diffusion and ControlNet
Sure thing. After testing Midjourney a bit, I found that the quality of the images produced is the best, but you have zero control over what is produced. The big breakthrough here is ControlNet, a Stable Diffusion extension that lets you control the initial noise based on image inputs (or at least that is my understanding). More on it here: https://github.com/lllyasviel/ControlNet-v1-1-nightly
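The "image inputs" ControlNet conditions on are preprocessed maps (edges, depth, pose, and so on). Real pipelines use the canny or lineart preprocessors; the sketch below uses PIL's `FIND_EDGES` filter as a crude stand-in, purely so it runs without heavy dependencies, and the helper name is an assumption, not part of any ControlNet API:

```python
from PIL import Image, ImageFilter, ImageOps

def make_condition_image(img, size=(512, 512)):
    """Build a rough edge-map conditioning image for ControlNet.

    PIL's FIND_EDGES filter is a crude stand-in for the canny or
    lineart preprocessors that real ControlNet pipelines use.
    """
    img = img.convert("L").resize(size)          # greyscale, model resolution
    edges = img.filter(ImageFilter.FIND_EDGES)   # simple edge detection
    return ImageOps.autocontrast(edges)          # stretch to full range

# Synthetic input: a white square on black, so the edge map is non-empty.
src = Image.new("L", (64, 64), 0)
for x in range(16, 48):
    for y in range(16, 48):
        src.putpixel((x, y), 255)

cond = make_condition_image(src)
print(cond.size, cond.mode)   # (512, 512) L
```

A conditioning image like this is what you would hand to the ControlNet unit in the A1111 extension (or to a diffusers ControlNet pipeline) alongside the text prompt; the sketch or photo itself is never fed to the model raw.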
-
Setting Removed from ControlNET - "Skip img2img processing when using img2img initial image" - why?
https://github.com/lllyasviel/ControlNet-v1-1-nightly/issues/61 - it seems it was removed as a duplicate.
What are some alternatives?
MODNet - A Trimap-Free Portrait Matting Solution in Real Time [AAAI 2022]
sd-webui-controlnet - WebUI extension for ControlNet
BackgroundMattingV2 - Real-Time High-Resolution Background Matting
ControlNet - Let us control diffusion models!
PINTO_model_zoo - A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
sd-webui-reactor - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111 SD WebUI, SD WebUI Forge, SD.Next, Cagliostro)
pytorch-deep-image-matting - Pytorch implementation of deep image matting
ControlNet-v1-1-nightly-colab - controlnet v1.1 colab
coremltools - Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
style2paints - sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)
keras-onnx - Convert tf.keras/Keras models to ONNX
sd-webui-inpaint-anything - Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.