3d-photo-inpainting vs cupscale

| | 3d-photo-inpainting | cupscale |
|---|---|---|
| Mentions | 22 | 81 |
| Stars | 6,828 | 2,067 |
| Growth | 0.1% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 8 months ago | over 1 year ago |
| Language | Python | C# |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
3d-photo-inpainting
- I have an AI-generated JPG. I want to add a subtle looping animation to it
- What's the latest and greatest in 3D img2img/txt2img?
If you are looking to create actual 3D models, the DepthMap extension does have a function to create PLY models with vertex color information, and to render clips with simple camera moves from that extracted 3D scene, including inpainting (as per the 3d-photo-inpainting paper).
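The core of that PLY export step is back-projecting each pixel into 3D using its depth value and writing it out with its colour. A minimal sketch of the idea, with hypothetical helper names (this is not the DepthMap extension's actual code, and real exports also build mesh faces rather than a bare point cloud):

```python
def depth_to_ply(rgb, depth, width, height, focal=1.0):
    """Back-project each pixel into 3D via a simple pinhole model and
    emit an ASCII PLY point cloud with per-vertex colour.
    `rgb` is a flat row-major list of (r, g, b) tuples and `depth` a
    flat list of depth values. (Hypothetical sketch, not the DepthMap
    extension's real implementation.)"""
    cx, cy = width / 2.0, height / 2.0
    verts = []
    for y in range(height):
        for x in range(width):
            i = y * width + x
            z = depth[i]
            # pinhole back-projection: pixel offset scaled by depth
            X = (x - cx) * z / focal
            Y = (y - cy) * z / focal
            r, g, b = rgb[i]
            verts.append(f"{X:.4f} {Y:.4f} {z:.4f} {r} {g} {b}")
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(verts)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ])
    return header + "\n" + "\n".join(verts) + "\n"
```

A viewer like MeshLab can open the resulting file directly; the camera-move clips are then just renders of this scene from interpolated viewpoints.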
- Quick test of AI and Blender with camera projection.
The DepthMap extension for A1111 has implemented the 3d-photo-inpainting code that does that kind of thing. That's what I used to use, first on a Colab, and then adapted for Windows so I could run it locally. But it's much more convenient to do it directly from the Automatic1111 WebUI.
- Is there an extension that does this?
- Generate multiple complex subjects on a single image all at once with a depth-aware custom extension!
But these techniques are even older than Stable Diffusion.
- Coronal mass ejection of the sun. Image from r/space. Crossview, ML generated
It's a slightly modified version of https://shihmengli.github.io/3D-Photo-Inpainting/
- [R] META researchers generate realistic renders from unseen views of any human captured from a single-view RGB-D camera
Thanks! I barely did anything though; I just took a deep-dreamed photo made by another artist (Daniel Ambrosi) and passed it through this: https://shihmengli.github.io/3D-Photo-Inpainting/ (GitHub and Colab links at the bottom). I didn't even have to come up with the camera trajectory; it was one of the presets in the repo.
- Tumultuous Seas
pretty sure it's this: https://github.com/vt-vl-lab/3d-photo-inpainting
- These are the raw frames I got from GauGAN2, but I'll be posting modified versions in the comment section.
- 3D Photography Using Context-Aware Layered Depth Inpainting
cupscale
- Print Four Souls Cards at Home (Fixed Audio)
- What about game assets that target 1080p and you want 4K fidelity?
If you want to do more, there's chaiNNer and Cupscale. You need to download an AI model to use those. There are a lot of anime/cartoon models out, so pick one that you like from here. (Note: Upscayl doesn't support these custom models.)
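Front-ends like Cupscale and chaiNNer typically run the model over the image in tiles so large inputs don't exhaust GPU memory, then stitch the tiles back together. A simplified sketch of that loop, with hypothetical names and a nearest-neighbour stand-in where the neural network would run (real tools also pad tiles and blend seams to hide tile borders):

```python
def upscale_tiled(img, w, h, tile, model, scale):
    """Upscale a row-major flat list of pixels tile by tile to bound
    memory use. `model(patch, tw, th)` must return the patch upscaled
    by `scale`. (Hypothetical sketch of how GUI front-ends drive an
    ESRGAN-style model, not Cupscale's actual code.)"""
    out_w = w * scale
    out = [None] * (out_w * h * scale)
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            tw, th = min(tile, w - tx), min(tile, h - ty)
            # cut one tile out of the source image
            patch = [img[(ty + j) * w + (tx + i)]
                     for j in range(th) for i in range(tw)]
            up = model(patch, tw, th)  # tw*scale x th*scale pixels
            # paste the upscaled tile into the output canvas
            for j in range(th * scale):
                for i in range(tw * scale):
                    out[(ty * scale + j) * out_w + (tx * scale + i)] = \
                        up[j * tw * scale + i]
    return out


def nearest_2x(patch, tw, th):
    """Stand-in 'model': nearest-neighbour 2x upscale. A real ESRGAN
    model would run a neural network here instead."""
    return [patch[(j // 2) * tw + (i // 2)]
            for j in range(th * 2) for i in range(tw * 2)]
```

With a purely local model like this stand-in, the tiled result is identical to upscaling the whole image at once; with a real network the tiles need overlap to avoid visible seams.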
- Help selecting software
- Do you have Topaz AI?
I'm not 100% sure how it holds up against Topaz, but I've used Cupscale (a GUI for ESRGAN) to upscale most of my stuff. It's free (https://github.com/n00mkrad/cupscale) and you can find a million different ESRGAN models which are focused on different kinds of images (https://upscale.wiki/wiki/Model_Database).
- Hit-and-run accident, AI upscaling?
- (For FE Awakening in Citra) How can I change Robin's hair portrait?
Now upscaling isn't hard to do by itself, but the setup can be difficult. As I said earlier, ESRGAN is the preferable way to do it, and Cupscale (https://github.com/n00mkrad/cupscale) is my preferred tool for doing it this way. Gigapixel (https://www.topazlabs.com/gigapixel-ai) is another option that's easier for newcomers, but may not produce as good results. They even have a free trial if you want to demo the tool.
- What workflow is best for upscaling portraits taken by phone camera or DSLR?
- Now that they started banning Stable Diffusion on Google Colab, what's the cheapest and best way to deploy Stable Diffusion?
I use cupscale for upscaling things. Allows chaining models and handles video.
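Chaining models, as mentioned here, just means feeding one model's output into the next, e.g. a denoising model followed by an upscaler. Conceptually it is plain function composition; a tiny hypothetical sketch:

```python
def chain(*steps):
    """Compose image-processing steps so each one's output feeds the
    next, roughly what Cupscale's model chaining does. Each step is
    any callable from image to image. (Illustrative sketch only.)"""
    def run(img):
        for step in steps:
            img = step(img)
        return img
    return run


# stand-in steps: each "model" doubles every pixel horizontally
double = lambda img: [p for p in img for _ in range(2)]
pipeline = chain(double, double)  # two chained 2x steps -> 4x
```

The same composition idea is what chaiNNer makes explicit with its node graph.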
- Are there any Google Colab scripts or other tools to upscale a bunch of images?
For local use there's Cupscale and chaiNNer.
- A rustic cottage by the field [1920x1080]
What are some alternatives?
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
image-super-resolution - 🔎 Super-scale your images and run experiments with Residual Dense and Adversarial Networks.
chaiNNer - A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
caire - Content aware image resize library
BoostingMonocularDepth
waifu2x - Image Super-Resolution for Anime-Style Art
sharp - High performance Node.js image processing, the fastest module to resize JPEG, PNG, WebP, AVIF and TIFF images. Uses the libvips library.
chaiNNer - A flowchart/node-based image processing GUI aimed at making chaining image processing tasks (especially upscaling done by neural networks) easy, intuitive, and customizable. [Moved to: https://github.com/chaiNNer-org/chaiNNer]