Is it possible for me to approximate a depth map from a generated image and make a 3D model?

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • stable-dreamfusion

    Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.

    I haven't tried Stable-DreamFusion, but it might be able to take an input image along with a prompt (its tagline above does advertise Image-to-3D).

  • prolificdreamer

    ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation

    Personally, I'm waiting for code to drop for this one: https://github.com/thu-ml/prolificdreamer


  • stable-diffusion-webui-depthmap-script

    High Resolution Depth Maps for Stable Diffusion WebUI. This is the most direct route to the workflow asked about in the title; a minimal sketch of the underlying pipeline follows this list.

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives, so a higher number means a more popular project.
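
As for the question in the title: yes. The usual recipe is monocular depth estimation on the generated image, then back-projecting every pixel through a pinhole camera into a colored point cloud you can open in Blender or MeshLab. Below is a minimal sketch of that recipe using MiDaS loaded through torch.hub (the same family of depth estimators the depthmap script builds on). The focal length, the file names, and the conversion of MiDaS's relative inverse depth into metric-looking depth are all illustrative assumptions, not values taken from any of the projects above.

```python
import cv2
import numpy as np
import torch


def estimate_depth(image_rgb: np.ndarray) -> np.ndarray:
    """Relative inverse-depth map from MiDaS (DPT_Large) via torch.hub."""
    midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
    midas.eval()
    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    batch = transforms.dpt_transform(image_rgb)   # (1, 3, H', W')
    with torch.no_grad():
        pred = midas(batch)                       # (1, H', W')
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=image_rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()
    return pred.cpu().numpy()


def depth_to_ply(inv_depth, image_rgb, path, stride=2):
    """Back-project pixels through a guessed pinhole camera; write ASCII PLY."""
    h, w = inv_depth.shape
    # MiDaS outputs *relative inverse* depth (big = near). Normalize it and
    # invert into a pseudo-metric depth; the scale here is made up.
    d = (inv_depth - inv_depth.min()) / (inv_depth.max() - inv_depth.min() + 1e-8)
    z = 1.0 / (d + 0.5)
    fx = fy = float(w)            # assumed focal length, roughly 53 deg FOV
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, -y, -z], axis=-1)[::stride, ::stride].reshape(-1, 3)
    cols = image_rgb[::stride, ::stride].reshape(-1, 3)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n"
                f"element vertex {len(pts)}\n"
                "property float x\nproperty float y\nproperty float z\n"
                "property uchar red\nproperty uchar green\nproperty uchar blue\n"
                "end_header\n")
        for (px, py, pz), (r, g, b) in zip(pts, cols):
            f.write(f"{px:.4f} {py:.4f} {pz:.4f} {r} {g} {b}\n")


if __name__ == "__main__":
    # "generated.png" is a placeholder for your Stable Diffusion output.
    img = cv2.cvtColor(cv2.imread("generated.png"), cv2.COLOR_BGR2RGB)
    depth_to_ply(estimate_depth(img), img, "generated_pointcloud.ply")
```

Running a surface reconstruction such as Poisson over the resulting point cloud in MeshLab or Blender turns it into an actual mesh; the WebUI depthmap script above automates a far more polished version of this same pipeline.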
