

WebGPU is better in many dimensions, but also worse in others people might care about. More appropriately, in this case, WebGPU is in many ways less approachable for novices. One of the expected advantages of WebGPU is exposing the inner workings of the hardware's GPU support in a more explicit manner. This is similar to API movements seen in DirectX, Vulkan, and Metal. WebGL was never exactly easy to read either, but you could get started more easily. It tried being something in-between, though, and ended up never being the best at either providing explicit behavior or being friendly.

The idea of WebGPU going forward is that it will make it easier for other libraries to use it in a more deterministic fashion. If you know how the metal works, you'll be able to optimize the sh*t out of it. But for novices, you'll end up using some library or engine that abstracts away a lot of the hardcore functionality like "let's get an adapter that supports this specific texture extension" or "let's reuse a binding group layout to save 2ns when binding uniforms". In the same way most people on the web use Three.js (instead of WebGL), you'll see libraries like Babylon or Three.js making use of WebGPU without really having to know about all that stuff. And that's the right solution.

Anecdotally, I had the opposite experience. I've wanted to dabble in parallel/GPU programming for a while, but the fact that all the material forced me to care about triangles and matrix transforms for nontrivial examples turned me off. I've recently been playing with WebGPU, and while it's still a bit of a boilerplate nightmare (which I wouldn't presume to know how to do better), wrapping my head around buffers and layouts, and the fact that WGSL is a huge pain in the neck to debug, took a while.

I've built myself a really rudimentary Perlin noise generator using compute shaders, and managed to pipe that into a rendering shader that uses two triangles to render part of the noise field onto a canvas really smoothly.

Trying to do some fancier compute stuff now, and overall the primitives are relatively straightforward. It's just that the documentation around the binding types and layouts, resource limits, and command-encoder/pipeline operational semantics is poor right now.
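
For anyone curious, the "two triangles" render pass described above is a common pattern: the vertex shader emits six vertices covering the whole canvas in clip space, and the fragment shader samples the compute-generated noise. This is a minimal sketch of that pattern, not the poster's actual code; the binding indices, texture, and entry-point names are illustrative assumptions.

```wgsl
// Illustrative bindings: a noise texture written earlier by a compute pass,
// plus a sampler. Group/binding numbers are assumptions, not from the post.
@group(0) @binding(0) var noise_tex : texture_2d<f32>;
@group(0) @binding(1) var noise_samp : sampler;

struct VsOut {
  @builtin(position) pos : vec4f,
  @location(0) uv : vec2f,
};

@vertex
fn vs_main(@builtin(vertex_index) i : u32) -> VsOut {
  // Two triangles spanning clip space [-1, 1] x [-1, 1] (draw with 6 vertices).
  var corners = array<vec2f, 6>(
    vec2f(-1.0, -1.0), vec2f( 1.0, -1.0), vec2f(-1.0,  1.0),
    vec2f(-1.0,  1.0), vec2f( 1.0, -1.0), vec2f( 1.0,  1.0),
  );
  var out : VsOut;
  out.pos = vec4f(corners[i], 0.0, 1.0);
  out.uv  = corners[i] * 0.5 + vec2f(0.5); // map clip space to [0, 1] texcoords
  return out;
}

@fragment
fn fs_main(in : VsOut) -> @location(0) vec4f {
  // Sample one channel of the noise field and show it as greyscale.
  let n = textureSample(noise_tex, noise_samp, in.uv).r;
  return vec4f(n, n, n, 1.0);
}
```

On the JavaScript side this would be driven by a render pipeline whose draw call is just `pass.draw(6)`, with no vertex buffer at all, since the positions are generated from `vertex_index`.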
