- LearnOpenGL — code repository of all OpenGL chapters from the book and its accompanying website https://learnopengl.com
- RayTracingInVulkan — implementation of Peter Shirley's Ray Tracing in One Weekend book using Vulkan and NVIDIA's RTX extension.
https://learnopengl.com/ is the standard beginner's reference.
It's better to use a compute shader than a fragment shader, because compute work is easier to parallelize freely. The idea is the same as with the fragment-shader approach: you just use the compute shader to write into a texture. I've done something similar in Vulkan; you can take a look: https://github.com/grigoryoskin/vulkan-compute-ray-tracing https://github.com/grigoryoskin/vulkan-compute-ray-tracing/blob/master/resources/shaders/source/ray-trace-compute.comp
But this is all very awkward and pointless: pixel shaders are not designed for this, and textures are not an ideal way to pass such data to the GPU. Ideally, you would use OpenGL compute shaders, or any other GPU compute API, and pass your geometry in buffers. I wrote a polygon-mesh raytracer that way in OpenCL once upon a time: https://github.com/jtsiomb/clray
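A rough sketch of the "geometry in buffers" idea (the names and layout are illustrative, not taken from clray): on the GLSL side the triangles would live in a shader storage buffer, and on the CPU you pack them into an array with a matching stride before uploading. One real pitfall worth showing is that under std430 a `vec3` is padded to 16 bytes, so declaring the vertices as `vec4` (or adding explicit padding) keeps the CPU and GPU strides in agreement:

```c
/* Passing geometry to a compute/ray-tracing kernel in a buffer instead
   of a texture.  Assumed GLSL side (std430 layout):

       struct Triangle { vec4 v0, v1, v2; };  // vec4: std430 pads vec3 to 16 B
       layout(std430, binding = 0) buffer Tris { Triangle tri[]; };

   On the CPU we mirror that layout with plain floats; the fourth
   component of each vertex is padding so the strides match. */
#include <stddef.h>
#include <string.h>

typedef struct { float v0[4], v1[4], v2[4]; } Triangle;   /* 48 bytes */

/* pack loose xyz vertex data (9 floats per triangle) into the padded layout */
size_t pack_triangles(const float *xyz, size_t ntris, Triangle *out)
{
    for (size_t i = 0; i < ntris; i++) {
        memcpy(out[i].v0, xyz + 9 * i + 0, 3 * sizeof(float));
        memcpy(out[i].v1, xyz + 9 * i + 3, 3 * sizeof(float));
        memcpy(out[i].v2, xyz + 9 * i + 6, 3 * sizeof(float));
        out[i].v0[3] = out[i].v1[3] = out[i].v2[3] = 0.0f;
    }
    return ntris * sizeof(Triangle);   /* byte size for the upload call */
}
```

The returned byte size is what you would hand to `glBufferData` (or `clCreateBuffer` in OpenCL) when uploading; the shader then loops over `tri[]` directly instead of decoding vertices out of texels.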