RobustVideoMatting
| | RobustVideoMatting | sd_dreambooth_extension |
|---|---|---|
| Mentions | 16 | 115 |
| Stars | 8,189 | 1,825 |
| Growth | - | - |
| Activity | 0.0 | 8.7 |
| Latest Commit | about 1 month ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RobustVideoMatting
- lineart_coarse + openpose, batch img2img
-
Tools for AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Robust Video Matting/Background Remover (remove backgrounds from images and videos, useful for compositing):
- https://github.com/PeterL1n/RobustVideoMatting (RVM - removes backgrounds from videos)
- https://github.com/nadermx/backgroundremover (BackgroundRemover - works well on single images)
-
Adobe After Effects VS Runway AI 👀
Looks like Runway is packaging a bunch of AI tools, like Stable Diffusion and other open-source tools, into a paid package. The matting tool it uses looks like https://github.com/PeterL1n/RobustVideoMatting, which can be run on your own computer for free if you can figure out the geeky side of installing this stuff. I've tried it out, and it sometimes works well, but most of the time the results aren't as good as the examples on their GitHub. Still a good tool to have in the toolbox, though.
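Tools like RVM typically produce a foreground image plus an alpha matte, which you then composite over a replacement background. As a rough sketch of that last step (the array names and shapes here are illustrative, not RVM's actual API):

```python
import numpy as np

def composite(fgr, pha, bgr):
    """Alpha-composite a matted foreground over a new background.

    fgr: float foreground, shape (H, W, 3), values in [0, 1]
    pha: float alpha matte, shape (H, W, 1), values in [0, 1]
    bgr: float replacement background, shape (H, W, 3)
    """
    return fgr * pha + bgr * (1.0 - pha)

# Tiny demo: alpha 1 keeps the foreground, alpha 0 shows the background,
# and intermediate alpha blends the two.
fgr = np.ones((2, 2, 3)) * 0.8          # light-gray foreground
pha = np.array([[[1.0], [0.0]],
                [[0.5], [0.5]]])        # mixed alpha matte
bgr = np.zeros((2, 2, 3))               # black background
out = composite(fgr, pha, bgr)
```

This is the standard "over" operator; the matting model's job is only to estimate `pha` (and optionally a cleaned-up `fgr`).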
-
Rotoscoping a video by comparing images
OR this separate application looks promising, if you can work out Google Colab (I couldn't, unfortunately): https://github.com/PeterL1n/BackgroundMattingV2 https://github.com/PeterL1n/RobustVideoMatting
-
CatFileCreator in Nuke
I have done a bit of coding, and I will use pretrained models only, looking at things like depth and segmentation. Like this as an example. I am using it in Colab now, but it's so cumbersome. https://github.com/PeterL1n/RobustVideoMatting
-
[Q] Video Editing using AI
I do not know much about machine learning, and I am not sure if I can ask questions here. But if so, I need help choosing the best libraries for video editing tasks like background removal and similar. One of the ones I found is RVM: https://github.com/PeterL1n/RobustVideoMatting (which currently seems like the best choice)
- Is this FOSS ML software safe?
- [D] Is this ML project safe?
-
Trying to train videomatting model
First of all, I would ask whether anybody has retrained the Robust Video Matting model on their own data. I am trying to, but with every model I end up with bad-quality results like the ones attached to the post. My data is objects rotating 360° on white backgrounds, so the task seems pretty simple: the model just has to remove the white background and keep the colorized object. I have masks on every 10th frame of my videos; the masks are 0 for background and 255 for foreground. I have tried the Robust Video Matting model, MODNet, PaddleSeg, and several segmentation models, and every one of them failed to produce consistent results on this data. What should I do in this case?
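For the specific case described above (a colored object on a near-white background), a classic thresholding baseline is worth trying before retraining a learned matting model; it also gives a sanity check for the 0/255 mask format. This is a minimal sketch, not a substitute for proper matting on soft edges:

```python
import numpy as np

def white_bg_alpha(rgb, thresh=240):
    """Estimate a hard alpha matte for an object on a near-white background.

    rgb: uint8 image, shape (H, W, 3)
    A pixel is background when all three channels exceed `thresh`;
    everything else is foreground. Output matches the 0=bgr / 255=fgr
    mask convention described in the post.
    """
    bg = np.all(rgb > thresh, axis=-1)
    return np.where(bg, 0, 255).astype(np.uint8)

# Demo: one white background pixel, one colored object pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)   # white background pixel -> alpha 0
img[0, 1] = (200, 30, 30)     # colored object pixel   -> alpha 255
alpha = white_bg_alpha(img)
```

If even this baseline looks ragged on the data, the problem is likely shadows or off-white lighting rather than the model choice.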
-
Remove Background NO GREENSCREEN?
I have found a github with a project like this but it is tedious to use: https://github.com/PeterL1n/RobustVideoMatting
sd_dreambooth_extension
- SDXL Training for Auto1111 is now Working on a 24GB Card
-
(Requesting Help)
I am trying to use StableDiffusion via AUTOMATIC1111 with the Dreambooth extension
-
it will be an absolute madness when sdxl becomes standard model and we start getting other models from it
When I first attempted SD training, I was very frustrated. It wasn't until I found an obscure forum thread on GitHub that I actually started producing great results with Dreambooth. Because I have such satisfactory results, I'm very reluctant to beat my brains against LoRA and its related training techniques. I gave up trying to train TI embeddings a long time ago, and I never figured out how to train or use hypernetworks. I've only been able to get good results with Dreambooth, directly because of that thread I linked above. I make LoRAs by extracting them from Dreambooth-trained checkpoints, and I have no idea whether I'm doing the extractions the right way or not.
-
"Exception training model: ' Some tensors share memory" with Dreambooth on Vladmandic
Getting the same with automatic1111 and sd_dreambooth extension. Check out more here in the issues log: https://github.com/d8ahazard/sd_dreambooth_extension/issues/1266
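The "Some tensors share memory" error comes from the safetensors serializer, which refuses to save tensors that alias the same underlying buffer (e.g. tied weights). The sketch below illustrates the aliasing condition with NumPy rather than PyTorch; the usual workaround in the linked issue thread is to break the aliasing with an explicit copy (`.clone()` in PyTorch) before saving:

```python
import numpy as np

# Two arrays that alias the same buffer -- the NumPy analogue of the
# shared-tensor condition that safetensors refuses to serialize.
weights = np.arange(6, dtype=np.float32)
tied = weights[:3]                                  # a view, not a copy
aliased = np.shares_memory(weights, tied)           # True -> would error

# Break the aliasing with an explicit copy before saving.
untied = tied.copy()
still_aliased = np.shares_memory(weights, untied)   # False -> safe to save
```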
-
Yo, DreamBooth gatekeepers, SHARE YOUR HYPERPARAMETERS, please.
It's several months old and many things have changed, but the spreadsheet available through this thread on GitHub has been indispensable for me when training Dreambooth models. I'm astounded no one talks about it; I bring it up all the time. The research presented there should be continued. I'd love to see similar research done for SD v2.1.
-
What is the BEST solution for hyper realistic person training?
Training rate is paramount. Read this GitHub thread.
-
How do you train your LoRAs, 1 Epoch or >1 Epoch (same # of steps)?
https://github.com/d8ahazard/sd_dreambooth_extension/discussions/547/ (in depth training principles understanding)
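The epochs-vs-steps question above is largely arithmetic: with the same total sample count, one long epoch and many short epochs see the data the same number of times (what differs is shuffling and per-epoch checkpointing). A hypothetical helper, assuming a simple scheduler where every image is repeated a fixed number of times per epoch and there is no gradient accumulation:

```python
def total_steps(num_images, repeats, epochs, batch_size):
    """Optimizer steps for one run under a simple repeat-per-epoch scheme.

    Hypothetical formula for illustration: each epoch shows every image
    `repeats` times, and each optimizer step consumes `batch_size` samples.
    """
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# 20 images x 10 repeats = 200 samples in one epoch; batch size 2 -> 100 steps.
# 10 epochs of 20 samples each reach the same 100 steps.
one_epoch = total_steps(20, 10, 1, 2)
ten_epochs = total_steps(20, 1, 10, 2)
```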
-
Struggling to install Dreambooth
sd_dreambooth_extension https://github.com/d8ahazard/sd_dreambooth_extension.git main 926ae204 Fri Mar 31 15:12:45 2023 unknown
- Attempting to train a lora with RTX 2060 6 GB vRAM, how to go about this?
-
SD just released an open source version of their GUI called StableStudio
Also, the Dreambooth extension supports an API (https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/scripts/api.py), so I'm not sure where you're getting that news :/
What are some alternatives?
MODNet - A Trimap-Free Portrait Matting Solution in Real Time [AAAI 2022]
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
BackgroundMattingV2 - Real-Time High-Resolution Background Matting
kohya_ss
PINTO_model_zoo - A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
pytorch-deep-image-matting - PyTorch implementation of deep image matting
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
coremltools - Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
dreambooth-training-guide
keras-onnx - Convert tf.keras/Keras models to ONNX
sd-scripts