-
Do you know if these things I found offer any hope for being able to continue rendering a scene smoothly while we handle GPU memory management operations on worker threads?
https://gfx-rs.github.io/2023/11/24/arcanization.html
https://github.com/gfx-rs/wgpu/issues/5322
-
The actual issue is not CPU-side. The issue is GPU-side.
The CPU feeds commands (CommandBuffers) telling the GPU what to do over a Queue.
WebGPU/wgpu/dawn only expose a single general-purpose queue, which means any data-upload commands (copyBufferToBuffer) you submit on it block rendering commands from starting.
The solution is multiple queues. Modern GPUs have a dedicated transfer/copy queue separate from the main general purpose queue.
WebGPU/wgpu/dawn would need to add support for additional queues: https://github.com/gpuweb/gpuweb/issues?q=is%3Aopen+is%3Aiss...
There's also ReBAR/SMA, and unified memory (UMA) platforms to consider, but that gets even more complex.
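The scheduling effect described above can be sketched with a toy CPU-side model (no GPU API involved; the millisecond numbers and function names are purely illustrative, not wgpu/dawn calls): on one queue the copy and the render pass serialize, while a dedicated transfer queue lets them overlap.

```rust
// Toy model of why a dedicated transfer queue helps frame pacing.
// Durations are hypothetical per-frame costs in milliseconds.

/// One general-purpose queue: the upload must finish before rendering starts.
fn single_queue_frame_ms(copy_ms: u32, render_ms: u32) -> u32 {
    copy_ms + render_ms
}

/// Separate transfer queue: the upload overlaps rendering, so the frame
/// time is bounded by the longer of the two workloads.
fn dual_queue_frame_ms(copy_ms: u32, render_ms: u32) -> u32 {
    copy_ms.max(render_ms)
}

fn main() {
    let (copy_ms, render_ms) = (8, 16);
    println!("single queue: {} ms/frame", single_queue_frame_ms(copy_ms, render_ms));
    println!("dual queue:   {} ms/frame", dual_queue_frame_ms(copy_ms, render_ms));
}
```

The real picture is messier (queue-family capabilities, cross-queue synchronization, ownership transfers), but this is the core latency argument for exposing a second queue.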
-
Thank you! Also, everything explained in the article is pretty much here: https://github.com/eliasdaler/edbr
-
Great writeup! I learned Vulkan myself to write a scientific data visualization engine (https://datoviz.org/ still quite experimental, will release a newer version soon). I had some knowledge of OpenGL before and learning Vulkan was SO hard. The learning resources weren't that great 5 years ago. I took up the challenge and it was so much fun. In the process I wrote a small wrapper around Vulkan (https://datoviz.org/api/vklite/) to make it a bit less painful to work with (it supports a subset of the features, those that are the most required for scientific visualization purposes).
-
Khronos overhauled their docs last year. I've found the "Vulkan Guide" easier to read than the spec.
https://docs.vulkan.org/guide/latest/index.html
Not to be confused with the tutorial you're referring to.
https://vkguide.dev
-
Take a look at a real engine, something like vkquake is a good reference [1].
[1]: https://github.com/Novum/vkQuake
-
Indeed. vk-bootstrap is a bit better with 600 lines of code, though: https://github.com/charles-lunarg/vk-bootstrap/blob/main/exa...
Vulkan initialization and basic swapchain management are very verbose, but things get much better once you've done it the first time and built some handy abstractions around pipeline creation/management.
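To make the "handy abstractions" point concrete, here's a hedged sketch of the usual pattern: collapse Vulkan's sprawling pipeline create-info into a small builder with sane defaults. Everything here (`PipelineDesc`, `PipelineBuilder`, the fields) is a hypothetical stand-in, not the real Vulkan structs; in a real engine `build()` would fill out `VkGraphicsPipelineCreateInfo` and call `vkCreateGraphicsPipelines`.

```rust
// Illustrative pipeline-builder abstraction: pick defaults once,
// override only what a given pass needs.

#[derive(Debug, Clone, PartialEq)]
struct PipelineDesc {
    vertex_shader: String,
    fragment_shader: String,
    depth_test: bool,
    blend_enabled: bool,
}

struct PipelineBuilder {
    desc: PipelineDesc,
}

impl PipelineBuilder {
    fn new(vs: &str, fs: &str) -> Self {
        Self {
            desc: PipelineDesc {
                vertex_shader: vs.to_string(),
                fragment_shader: fs.to_string(),
                depth_test: true,     // default most 3D passes want
                blend_enabled: false, // opaque by default
            },
        }
    }

    fn depth_test(mut self, on: bool) -> Self {
        self.desc.depth_test = on;
        self
    }

    fn blending(mut self, on: bool) -> Self {
        self.desc.blend_enabled = on;
        self
    }

    // Real code would create a VkPipeline here; we just return the desc.
    fn build(self) -> PipelineDesc {
        self.desc
    }
}

fn main() {
    // A UI pass: no depth, alpha blending on; two lines instead of
    // a hundred lines of create-info boilerplate.
    let ui = PipelineBuilder::new("ui.vert", "ui.frag")
        .depth_test(false)
        .blending(true)
        .build();
    println!("{:?}", ui);
}
```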
-
I'm curious why WebGPU is receiving so much attention. There have been many low-level cross-platform graphics abstractions over the years. The bgfx [1] project had its first commit ~12 years ago and it's still going! It's much more mature than WebGPU. I'm guessing being W3C-backed is what's propelling it?
[1] https://github.com/bkaradzic/bgfx
-
I believe you can load texture data onto the GPU from another thread in OpenGL using pixel buffer objects: https://www.khronos.org/opengl/wiki/Pixel_Buffer_Object
I haven't tried it yet, but will try soon for my open-source metaverse Substrata: https://substrata.info/.
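The usual PBO pattern is double-buffering: a worker thread fills one pixel buffer while the render thread uploads the texture from the other, swapping roles each frame. Below is a CPU-only model of that ping-pong scheme; plain `Vec<u8>`s stand in for mapped GL buffers, and all names are illustrative (real code would map the PBO with `glMapBuffer` and upload with `glTexSubImage2D` while `GL_PIXEL_UNPACK_BUFFER` is bound).

```rust
// CPU-side model of double-buffered pixel-buffer-object uploads.

struct PboPair {
    buffers: [Vec<u8>; 2],
    write_idx: usize, // buffer the worker thread fills this frame
}

impl PboPair {
    fn new(size: usize) -> Self {
        Self {
            buffers: [vec![0; size], vec![0; size]],
            write_idx: 0,
        }
    }

    /// Worker thread: write fresh texel data into the "write" buffer.
    fn fill(&mut self, data: &[u8]) {
        self.buffers[self.write_idx][..data.len()].copy_from_slice(data);
    }

    /// Frame boundary: swap roles, so last frame's write buffer becomes
    /// this frame's read (upload) buffer.
    fn swap(&mut self) {
        self.write_idx = 1 - self.write_idx;
    }

    /// Render thread: the buffer the GPU would upload the texture from.
    fn read(&self) -> &[u8] {
        &self.buffers[1 - self.write_idx]
    }
}

fn main() {
    let mut pbos = PboPair::new(4);
    pbos.fill(&[1, 2, 3, 4]); // worker thread, frame N
    pbos.swap();              // frame boundary
    // Render thread, frame N+1: uploads what the worker wrote in frame N.
    assert_eq!(pbos.read().to_vec(), vec![1, 2, 3, 4]);
    println!("frame N+1 uploads {:?}", pbos.read());
}
```

The point of the swap is that neither thread ever touches the buffer the other is using, so the upload never stalls waiting on the worker.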
-
If you want OpenGL, use ANGLE:
https://github.com/google/angle
Several phones now ship ANGLE as their only OpenGL support, layered on top of their Vulkan drivers.
If you want a modern-ish API that's relatively easy and portable, use WebGPU via wgpu (Rust) or dawn (C++).
-
True, and it’s not just games: https://github.com/Const-me/Whisper/issues/42
-
That doesn't invalidate what I said; rather, it shows we could have gotten WebGL 2.0 Compute much sooner than WebGPU if it weren't for bloody politics among browser vendors.
https://github.com/9ballsyndrome/WebGL_Compute_shader
https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...
https://issues.chromium.org/issues/40150444
"Intel spearheaded the webgl2-compute context to provide a way to run GPU compute workloads on the web. At the same time, the WebGPU effort at the W3C aimed to design a new, lower-level graphics API, including GPU compute. The webgl2-compute approach encountered some technical barriers, including that macOS' OpenGL implementation never supported compute shaders, meaning that it wasn't easily portable. webgl2-compute has so far been used by customers for prototyping.
At present, WebGPU is close to shipment, and its shader pipeline is nearing completion. It's possible to run combined GPU rendering and compute workloads in WebGPU.
In order to reclaim code space in Chromium's installer that is needed by WebGPU, the webgl2-compute context must be removed."