-
stable-diffusion
[Discontinued] This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple other features and enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI] (by lstein)
-
Material-Map-Generator
Easily create AI generated Normal maps, Displacement maps, and Roughness maps.
Here it is: https://github.com/carson-katri/dream-textures/blob/main/requirements-win-torch-1-11-0.txt You need to run it with Blender’s Python, not your system one. Also, their Python is missing some headers needed to build some of the dependencies, so you have to copy those in. The code that does that starts here: https://github.com/carson-katri/dream-textures/blob/17a98731d71ec1d30dce7e87cb21092302ca2801/operators/install_dependencies.py#L103
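For reference, a minimal sketch of what installing the requirements with Blender's bundled Python might look like; the paths below are illustrative assumptions and vary by platform and Blender version:

```shell
# Assumed install location -- adjust for your Blender version and platform.
BLENDER_PY="/path/to/blender/3.3/python/bin/python3.10"

# Make sure pip is available in Blender's bundled Python, then install the
# pinned requirements with it (NOT with your system Python).
"$BLENDER_PY" -m ensurepip
"$BLENDER_PY" -m pip install -r requirements-win-torch-1-11-0.txt

# Blender's bundled Python ships without the C headers some dependencies need
# when they compile during install; copying them in from a matching standalone
# Python install (same minor version) is one workaround, e.g.:
# cp -r /path/to/python3.10/include/* /path/to/blender/3.3/python/include/python3.10/
```

The linked `install_dependencies.py` handles the header-copying step programmatically, so running the add-on's own installer is the safer route when it works.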
This release also includes other great features, such as inpainting, prompt history, Image Editor integration, and a Concept Art prompt preset, as well as many optimizations thanks to developments in the lstein fork of SD. You can now generate 512x512 images on a GPU with just 4 GB of VRAM!
I saw you were working on adding displacement and material maps to it; you may want to look into this instead of trying to re-train the model: https://github.com/JoeyBallentine/Material-Map-Generator
Related posts
-
Donut done with Artificial Intelligence and Blender
-
Tell HN: The next generation of video games will be great with Midjourney
-
After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
-
Resources for artists interested in using StableDiffusion as a tool?
-
Using AI for 3D game art