| | ICON | dream-textures |
|---|---|---|
| Mentions | 6 | 72 |
| Stars | 1,542 | 7,599 |
| Growth | - | - |
| Activity | 4.1 | 5.8 |
| Latest commit | 5 months ago | 10 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
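The exact activity formula isn't published on this page; as a hedged illustration only, one plausible scheme weights each commit by an exponential decay of its age (the `activity_score` function and its 30-day half-life below are assumptions, not the site's actual metric):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Illustrative activity metric (an assumption, not the real formula):
    each commit contributes 2**(-age / half_life), so recent commits
    count for more than older ones."""
    return sum(2 ** (-age / half_life_days) for age in commit_ages_days)

# A project with mostly recent commits scores higher than one with the
# same number of commits made long ago.
recent = activity_score([1, 2, 3, 5, 8])
stale = activity_score([200, 210, 220, 230, 240])
print(recent > stale)  # True
```

Any decaying weight produces the stated behavior (recent commits outweigh old ones); the half-life only controls how quickly a project's score fades when development stops.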
ICON
- ControlNet fully integrated with Blender using nodes!
- Is there any AI that can compile several pictures of a person into a single 3D version?
- [R][P] ICON: Implicit Clothed humans Obtained from Normals + Gradio Web Demo
github: https://github.com/YuliangXiu/ICON
- Show HN: Icon-3D Avatar Creator from 2D Pixels
- Icon: Towards Large-Scale Avatar Creation from In-the-Wild Pixels
Realistic virtual humans will play a central role in mixed and augmented reality, forming a critical foundation for the Metaverse and supporting remote presence, collaboration, education, and entertainment.
To enable this, new tools are needed to easily create large-scale 3D virtual humans that can be readily animated. However, current methods require either posed 3D scans captured with expensive scanning equipment or 2D images with carefully controlled user poses; neither scales up easily.
ICON ("Implicit Clothed humans Obtained from Normals") takes a step towards robust 3D clothed human reconstruction from in-the-wild images. It also enables creating animatable avatars directly from video, with personalized and natural pose-dependent cloth deformation.
Homepage: https://icon.is.tue.mpg.de/
Github: https://github.com/YuliangXiu/ICON
Google Colab: https://colab.research.google.com/drive/1-AWeWhPvCTBX0KfMtgt...
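ICON's learned networks are far beyond a short snippet, but the core idea of an implicit surface representation can be sketched with a toy stand-in: an occupancy function answers "inside or outside?" for any 3D point, and the reconstructed surface is its 0.5 level set. The sphere below is purely illustrative (in ICON this role is played by a network conditioned on predicted normal maps):

```python
def occupancy(point, center=(0.0, 0.0, 0.0), radius=1.0):
    """Toy implicit function: 1.0 inside the shape, 0.0 outside.
    A sphere stands in here for ICON's learned clothed-human predictor."""
    d2 = sum((p - c) ** 2 for p, c in zip(point, center))
    return 1.0 if d2 <= radius ** 2 else 0.0

# Querying a dense grid of points and keeping the inside/outside boundary
# is what an algorithm like marching cubes turns into a mesh.
inside = occupancy((0.2, 0.1, 0.0))   # 1.0
outside = occupancy((2.0, 0.0, 0.0))  # 0.0
print(inside, outside)
```

The practical upside of the implicit form is that mesh resolution is chosen at extraction time by how densely you query, not fixed by the representation itself.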
dream-textures
- Donut done with Artificial Intelligence and Blender
- Tell HN: The next generation of videogames will be great with midjourney
- After Diffusion, an After Effects extension integrating the SD web UI seamlessly
I'm a long-time advanced AE user and would gladly give feedback on how I envision a good workflow, if you want. I recently got into Dream Textures for Blender, which I think is a great reference for the direction things could be heading. It's still not viable for consistent video, but I love how it exposes multiple ControlNets and their weights as animatable parameters, for example. I also suggested exposing animatable prompt weights, which the author now plans for a future release. I see you have such things planned for this plugin as well, so big thumbs up!
- Resources for artists interested in using Stable Diffusion as a tool?
Dream Textures (SD for Blender) - https://github.com/carson-katri/dream-textures
- Using AI for 3d Game art
- ControlNet fully integrated with Blender using nodes!
Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection
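The linked wiki guide covers the add-on's actual workflow; the snippet below is only a minimal pure-Python sketch of the underlying idea, not dream-textures code. Re-baking works because each surface sample has two coordinates: its original UV (where it lives in the mesh's own layout) and the screen-space UV where the camera projection landed. You sample the generated image at the screen UV and write the color at the original UV (all names here, like `rebake_to_original_uvs`, are hypothetical):

```python
def rebake_to_original_uvs(sample_points, projected_image, out_size):
    """Illustrative re-bake (not the add-on's actual code): copy each
    sample's color from its camera-projected coordinate in the generated
    image into the mesh's own UV layout."""
    w, h = out_size
    baked = [[None] * w for _ in range(h)]
    iw, ih = len(projected_image[0]), len(projected_image)
    for orig_uv, screen_uv in sample_points:
        # Read the generated image at the camera-projected coordinate.
        sx = min(int(screen_uv[0] * iw), iw - 1)
        sy = min(int(screen_uv[1] * ih), ih - 1)
        color = projected_image[sy][sx]
        # Write into the texture at the mesh's original UV coordinate.
        tx = min(int(orig_uv[0] * w), w - 1)
        ty = min(int(orig_uv[1] * h), h - 1)
        baked[ty][tx] = color
    return baked

# A point whose original UV is (0, 0) but which projected onto the right
# half of the camera image picks up its color from there.
img = [["red", "blue"]]
out = rebake_to_original_uvs([((0.0, 0.0), (0.9, 0.0))], img, (2, 1))
print(out[0][0])  # "blue"
```

A real bake rasterizes every texel of the target texture and interpolates across triangles; the per-sample loop above just shows the coordinate remapping that makes the result fit the original UV map instead of the projection.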
- Using DALL-E 2 to create brick and water textures in Unity.
- 3D animation attempt using Sketchup screenshots and ControlNet
- Blender 3.5
- Master AI Texture Projection for Blender 3
Dream Textures latest release: https://github.com/carson-katri/dream-textures/releases
What are some alternatives?
lightweight-human-pose-estimation.pytorch - Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.
stable-diffusion-webui - Stable Diffusion web UI
ECON - [CVPR'23, Highlight] ECON: Explicit Clothed humans Optimized via Normal integration
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
AlphaPose - Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
text2cinemagraph - Text2Cinemagraph: Text-Guided Synthesis of Eulerian Cinemagraphs [SIGGRAPH ASIA 2023]
stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run the Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
aistplusplus_api - API to support AIST++ Dataset: https://google.github.io/aistplusplus_dataset
DeepBump - Normal & height maps generation from single pictures
VIBE - Official implementation of CVPR2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation"
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]