SPADE vs T2I-Adapter

| | SPADE | T2I-Adapter |
|---|---|---|
| Mentions | 11 | 25 |
| Stars | 7,533 | 3,158 |
| Growth | 0.1% | 2.9% |
| Activity | 0.0 | 7.9 |
| Last commit | 9 months ago | 7 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning SPADE
- T2i Segmentation Colors Reference - Work in progress v18
- I'm looking for an AI Art generator from images
  GauGAN (https://github.com/NVlabs/SPADE) - This is a PyTorch implementation of the SPADE (SPatially-Adaptive (DE)normalization) algorithm, which can generate images from segmentation maps. You can use it to generate realistic images of objects, landscapes, and other scenes. (A minimal sketch of the SPADE layer follows this list.)
- MegaPortraits: High-Res Deepfakes Created From a Single Photo
- Can NVIDIA Canvas be used as an API?
  It is open source, if that helps. Here is a GitHub link.
- What are the trades/professions that, in your opinion, will not be replaced by AI in the next 35-50 years, and why?
- Where/How should I start?
  Generating photorealistic landscapes from brush strokes: Semantic Image Synthesis with Spatially-Adaptive Normalization
- Blursed rock formation
  This looks like an AI image synthesized with Nvidia's SPADE.
- Nvidia Canvas
- Gaugan Nvidia Unity
  Spade Nvidia
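To make the GauGAN description above concrete, here is a minimal sketch of a single SPADE block in PyTorch: the feature map is normalized with a parameter-free batch norm, then scaled and shifted per pixel by values predicted from the (resized) segmentation map. The hidden width and kernel sizes are illustrative choices, not the exact ones used in NVlabs/SPADE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive denormalization: the segmentation map supplies
    per-pixel scale (gamma) and shift (beta) for the normalized features."""

    def __init__(self, feature_channels: int, label_channels: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalization; the modulation comes from the segmentation map.
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        # Resize the one-hot segmentation map to the feature resolution,
        # then predict per-pixel scale and shift from it.
        segmap = F.interpolate(segmap, size=x.shape[-2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# Example: modulate a 64-channel feature map with a 35-class segmentation map.
features = torch.randn(1, 64, 32, 32)
segmap = torch.zeros(1, 35, 256, 256)
out = SPADE(feature_channels=64, label_channels=35)(features, segmap)
```

In the full generator, blocks like this replace the ordinary normalization layers, so the segmentation map steers synthesis at every resolution instead of only at the input.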
Posts mentioning T2I-Adapter
- Help me understand ControlNet vs T2I-Adapter vs CoAdapter
  I've found some documentation here: https://github.com/TencentARC/T2I-Adapter/blob/SD/docs/coadapter.md
- Color-Diffusion: using diffusion models to colorize black and white images
  Yeah, if you have a high-res image, you can get color info at super low res and then regenerate the colors at high res with another model (though this isn't an efficient approach at all). https://github.com/TencentARC/T2I-Adapter
  I've also seen a ControlNet do this. (A sketch of this low-res colorization trick follows this list.)
- Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
- Reflected Diffusion Models
  https://github.com/TencentARC/T2I-Adapter
  It works with the Mikubill ControlNet plugin for A1111.
- Is it possible to replace objects with an already segmented image by ControlNet?
- ControlNet v1.1 has been released
  These are from Tencent: https://github.com/TencentARC/T2I-Adapter
- Can someone explain some of these newer ControlNet models and preprocessors? Clipvision? Color? Pidinet? Binary?
  I think they're for T2I-Adapter models, which can be downloaded here.
- T2I-Adapter: Text-to-Image Models with Unprecedented Control
- How do I combine two images using AUTOMATIC1111?
  Apart from ControlNet, T2I-Adapter works quite well for this: https://github.com/TencentARC/T2I-Adapter (a diffusers-based usage sketch follows this list).
- T2IAdapter creates CoAdapter (inspired by Composer)
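The Color-Diffusion comment above ("get color info at super low res and then regenerate the colors at high res") can be sketched as a Lab-space merge: colorize a downscaled copy with whatever model you like, then recombine its chroma with the full-resolution luminance. The helper below and its I/O conventions are my own illustration using scikit-image, not code from either repo.

```python
import numpy as np
from skimage import color, transform

def merge_colors(gray_highres: np.ndarray, colorized_lowres: np.ndarray) -> np.ndarray:
    """gray_highres: (H, W) floats in [0, 1]; colorized_lowres: (h, w, 3) RGB floats in [0, 1]."""
    H, W = gray_highres.shape
    # Upsample the low-res colorized image to full resolution...
    upscaled = transform.resize(colorized_lowres, (H, W, 3))
    # ...and keep only its a/b (chroma) channels.
    chroma = color.rgb2lab(upscaled)[..., 1:]
    # Reuse the original high-res luminance so fine detail is preserved
    # (treating the grayscale values directly as L is an approximation).
    L = gray_highres * 100.0
    lab = np.dstack([L, chroma])
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```

As the commenter notes, this is not an efficient pipeline, but it keeps the expensive colorization step cheap relative to the output resolution.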
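Besides the A1111/Mikubill plugin route mentioned above, T2I-Adapter checkpoints can also be driven from Python through Hugging Face diffusers. The following is a hedged sketch assuming a recent diffusers release; the checkpoint IDs (TencentARC/t2iadapter_sketch_sd15v2, runwayml/stable-diffusion-v1-5) and the conditioning image are placeholders, so check the T2I-Adapter README for the currently recommended models.

```python
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Conditioning input, e.g. a sketch or segmentation map you already have.
control = load_image("sketch.png")

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_sketch_sd15v2", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photorealistic mountain lake at sunset",
    image=control,
    num_inference_steps=30,
    adapter_conditioning_scale=0.8,  # how strongly the adapter steers generation
).images[0]
image.save("out.png")
```

Unlike ControlNet, the adapter is a small side network whose features are added to the UNet encoder, which is also why several adapters can be composed (the CoAdapter idea referenced above).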
What are some alternatives?
gaugan - Photorealistic landscape drawings using the Nvidia SPADE model
sd-webui-controlnet - WebUI extension for ControlNet
WaveFunctionCollapse - Bitmap & tilemap generation from a single example with the help of ideas from quantum mechanics
ComfyUI - The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
awesome-NeRF - A curated list of awesome neural radiance fields papers
ControlNet - Let us control diffusion models!
Parsec-Cloud-Preparation-Tool - Launch Parsec enabled cloud computers via your own cloud provider account.
style2paints - sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)
sketch-to-art - 🖼 Create artwork from your casual sketch with GAN and style transfer
Color-diffusion - A diffusion model to colorize black and white images
pytorch-CycleGAN-and-pix2pix - Image-to-Image Translation in PyTorch
Latent-Paint-Mesh - NVDiffrast based implementation of Latent-Paint