AI Seamless Texture Generator Built-In to Blender

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • stable-diffusion

    Discontinued This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI] (by lstein)

    Oh, it generates from a text prompt, not a sample texture. I thought this was just a tool to generate wrapped textures from non-wrapped ones. (A sketch of the usual circular-padding trick for wrapped output follows at the end of this entry.)

    The licensing is a mess. The Blender plug-in is GPL 3, the stable diffusion code is MIT, and the weights for the model have a very restrictive custom license.[1] Whether the weights, which are program-generated, are copyrightable is a serious legal question.

    [1] https://github.com/lstein/stable-diffusion/blob/61f46cac31b5...
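
    As a rough illustration of the "wrapped texture" point above: the usual community trick for getting tileable output from Stable Diffusion is to switch the model's convolutions to circular padding, so features wrap around the image borders and the result tiles cleanly. A minimal sketch using the Hugging Face diffusers pipeline (the checkpoint id and the post-hoc padding patch are assumptions for illustration, not part of the lstein repo or the Blender add-on):

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Any Stable Diffusion 1.x checkpoint works the same way; this id is an assumption.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")  # assumes an NVIDIA GPU; drop float16 and .to("cuda") for CPU

    # Seamless-tiling trick: make every Conv2d wrap around the image borders
    # instead of zero-padding, in both the UNet and the VAE.
    for module in list(pipe.unet.modules()) + list(pipe.vae.modules()):
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"

    image = pipe("mossy cobblestone texture, top-down, photorealistic").images[0]
    image.save("tileable_cobblestone.png")  # should tile with no visible seams
    ```

    Circular padding effectively treats the canvas as a torus, which is exactly the condition for the output to repeat without visible seams.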

  • dream-textures

    Stable Diffusion built-in to Blender

  • CLIP

    CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

    https://github.com/openai/CLIP

    You need CLIP to have CLIP-guided diffusion. So the current situation seems to trace back to OpenAI and the MIT-licensed code they released the day DALL-E was announced. I would love to be corrected if I've misunderstood the situation.
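
    For context on what CLIP contributes to guided diffusion, this is roughly the scoring primitive it provides: embed an image and a set of prompts, then rank the prompts by similarity. A minimal sketch following the openai/CLIP README (file name and prompts are placeholders):

    ```python
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)  # weights download on first use

    image = preprocess(Image.open("texture.png")).unsqueeze(0).to(device)
    text = clip.tokenize(["mossy cobblestone", "brushed metal", "oak planks"]).to(device)

    with torch.no_grad():
        logits_per_image, logits_per_text = model(image, text)
        probs = logits_per_image.softmax(dim=-1)

    print(probs)  # relative match of the image against each prompt
    ```

    CLIP-guided diffusion applies a score like this at each denoising step to nudge the sample toward the text; Stable Diffusion instead uses a CLIP text encoder directly as conditioning inside the model.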

  • stable_diffusion.openvino

    You can run it on an Intel CPU if that helps: https://github.com/bes-dev/stable_diffusion.openvino

  • CLIP-Mesh

    Official implementation of CLIP-Mesh: Generating textured meshes from text using pretrained image-text models

  • stable-diffusion-webui

    Stable Diffusion web UI

    I'm currently trying to put 1,000 seamless wallpaper textures on the UE5 Marketplace, and I'm saddened to see this news. Well, fuck money anyway, right? Here's a tip: you can produce everything you need if you follow this guide (a hedged API sketch for the tiling option follows at the end of this entry):

    https://rentry.org/voldy#-guide-

    Just check what this stuff can do:

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...

    This is the best page on the internet right now. The hottest stuff. Better than Bitcoin.

    You can get guidance and copy business ideas here:
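
    A minimal sketch of scripting that seamless-texture workflow against the web UI, assuming it was launched with the --api flag and that the txt2img endpoint accepts a tiling flag (which, as far as I know, mirrors the Tiling checkbox in the UI):

    ```python
    import base64
    import requests

    payload = {
        "prompt": "seamless mossy cobblestone texture, top-down, photorealistic",
        "steps": 30,
        "width": 512,
        "height": 512,
        "tiling": True,  # assumption: corresponds to the UI's seamless-tiling option
    }

    # Default local address for the AUTOMATIC1111 web UI.
    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
    resp.raise_for_status()

    # The API returns base64-encoded PNGs in the "images" list.
    with open("seamless_texture.png", "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))
    ```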

  • Pytorch

    Tensors and Dynamic neural networks in Python with strong GPU acceleration

    From the Arch wiki, which lists GPU runtimes (but not TPU or QPU runtimes) and Arch package names for OpenCL, SYCL, ROCm, and HIP: https://wiki.archlinux.org/title/GPGPU :

    > GPGPU stands for General-purpose computing on graphics processing units.

    - "PyTorch OpenCL Support" https://github.com/pytorch/pytorch/issues/488

    - Blender, on the removal of OpenCL support in 2021:

    > The combination of the limited Cycles split kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We can only make the kinds of bigger changes we are working on now by starting from a clean slate. We are working with AMD and Intel to get the new kernels working on their GPUs, possibly using different APIs (such as SYCL, HIP, Metal, …).

    - https://gitlab.com/illwieckz/i-love-compute

    - https://github.com/vosen/ZLUDA

    - https://github.com/RadeonOpenCompute/clang-ocl

    AMD ROCm: https://en.wikipedia.org/wiki/ROCm

    AMD ROCm supports PyTorch, TensorFlow, MIOpen, and rocBLAS on NVIDIA and AMD GPUs.
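
    A small sketch of checking which of these runtimes a given PyTorch build can actually use (ROCm builds reuse the torch.cuda API, so torch.cuda.is_available() is also True on AMD GPUs; the MPS check needs PyTorch 1.12 or newer):

    ```python
    import torch

    # CUDA vs ROCm builds: at most one of these is non-None for a GPU-enabled wheel.
    print("CUDA toolkit:", torch.version.cuda)   # None on ROCm or CPU-only builds
    print("HIP (ROCm):  ", torch.version.hip)    # None on CUDA or CPU-only builds

    print("GPU available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))

    # Apple Metal backend; there is no upstream OpenCL backend, per the issue above.
    mps = getattr(torch.backends, "mps", None)
    print("MPS available:", mps is not None and torch.backends.mps.is_available())
    ```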

  • ZLUDA

    CUDA on AMD GPUs

  • clang-ocl

    OpenCL compilation with clang compiler.

  • ROCm_Documentation

    Discontinued Legacy ROCm Software Platform Documentation

    https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...

    RadeonOpenCompute/ROCm_Documentation: https://github.com/RadeonOpenCompute/ROCm_Documentation

    ROCm-Developer-Tools/HIPIFY: https://github.com/ROCm-Developer-Tools/HIPIFY :

    > hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.

    ROCmSoftwarePlatform/gpufort: https://github.com/ROCmSoftwarePlatform/gpufort :

    > GPUFORT: S2S translation tool for CUDA Fortran and Fortran+X in the spirit of hipify

    ROCm-Developer-Tools/HIP: https://github.com/ROCm-Developer-Tools/HIP :

    > HIP is a C++ Runtime API and Kernel Language that allows developers to create portable applications for AMD and NVIDIA GPUs from single source code. [...] Key features include:

    > - HIP is very thin and has little or no performance impact over coding directly in CUDA mode.

    > - HIP allows coding in a single-source C++ programming language including features such as templates, C++11 lambdas, classes, namespaces, and more.

    > - HIP allows developers to use the "best" development environment and tools on each target platform.

    > - The [HIPIFY] tools automatically convert source from CUDA to HIP.

    > - Developers can specialize for the platform (CUDA or AMD) to tune for performance or handle tricky cases.

  • HIPIFY

    Discontinued HIPIFY: Convert CUDA to Portable C++ Code [Moved to: https://github.com/ROCm/HIPIFY] (by ROCm-Developer-Tools)

  • gpufort

    GPUFORT: S2S translation tool for CUDA Fortran and Fortran+X in the spirit of hipify

  • HIP

    HIP: C++ Heterogeneous-Compute Interface for Portability
