If you google what “X3676: typed UAV loads are only allowed for single-component 32-bit element types” is, the first links refer to it as the most annoying DX11 compute error. I’d been lucky enough to avoid hitting it before, but nothing, including luck, lasts forever, so now I kind of see why the search results are like that. It seems that a raw buffer with manual packing/unpacking is the only alternative that works with all formats (typeless R32 and R11G11B10 apparently don’t get along well).
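To show what that manual packing/unpacking amounts to for R11G11B10_FLOAT (in the shader it would be HLSL over a byte-address buffer; Python here just to show the bit layout), a rough sketch follows. The channel format is an unsigned small float with a 5-bit exponent (bias 15) and a 6-bit (R, G) or 5-bit (B) mantissa; the function names are mine, and denormal/NaN handling is simplified.

```python
import math

def float_to_ufloat(f, mantissa_bits):
    """Encode a non-negative float as an unsigned small float
    (5-bit exponent, bias 15, no sign bit), as in the channels of
    R11G11B10_FLOAT. Denormals are flushed to zero for brevity."""
    if f <= 0.0:
        return 0
    m, e = math.frexp(f)               # f = m * 2**e, 0.5 <= m < 1
    exp = e - 1 + 15                   # exponent field, bias 15
    mant = round((m * 2.0 - 1.0) * (1 << mantissa_bits))
    if mant == (1 << mantissa_bits):   # mantissa rounded up past 1.x
        mant = 0
        exp += 1
    if exp <= 0:                       # too small: flush to zero
        return 0
    if exp >= 31:                      # too large: clamp to max finite
        return (30 << mantissa_bits) | ((1 << mantissa_bits) - 1)
    return (exp << mantissa_bits) | mant

def ufloat_to_float(bits, mantissa_bits):
    exp = bits >> mantissa_bits
    mant = bits & ((1 << mantissa_bits) - 1)
    if exp == 0:
        return mant / (1 << mantissa_bits) * 2.0 ** -14  # denormal
    return (1.0 + mant / (1 << mantissa_bits)) * 2.0 ** (exp - 15)

def pack_r11g11b10(r, g, b):
    """Pack three floats into one uint: R in bits 0-10, G in 11-21, B in 22-31."""
    return (float_to_ufloat(r, 6)
            | (float_to_ufloat(g, 6) << 11)
            | (float_to_ufloat(b, 5) << 22))

def unpack_r11g11b10(packed):
    return (ufloat_to_float(packed & 0x7FF, 6),
            ufloat_to_float((packed >> 11) & 0x7FF, 6),
            ufloat_to_float(packed >> 22, 5))
```

Exact powers of two survive the round trip untouched; everything else lands within the mantissa precision.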


Favorite talks from Siggraph 2019:

How to implement tons of real-time shadow-casting lights in your engine (we do some very basic stuff mentioned in this talk: caching and atlasing).

The first Rockstar paper I’ve ever read: an amazing talk on volumetrics in their engine.

A collection of Siggraph links by Stephen Hill.

This GBuffer normals encoding deserves more attention than it gets:

I did some quick tests and it looks compact, relatively cheap to encode and decode, and (tada!) it’s blendable! 10:10 is not much, of course, but I didn’t notice severe artefacts. Not everything is great: hardware blending support is awesome, for sure, but apparently decals will have to use alpha blending, simply because other blend modes would break the unpacking (we can’t normalize XY correctly). Plus, we need to read the basis bits when rendering decals, and I don’t know another way of doing that except copying that basis info into a separate texture.
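The core idea I tested can be sketched roughly as follows (the exact scheme is in the linked encoding; the basis-bit handling is omitted here and the names are mine): store normal XY in two 10-bit UNORM channels, reconstruct Z, and renormalize. The renormalize step is exactly why alpha blending works while other blend modes don’t: lerping two encoded XY pairs still yields something you can renormalize, whereas additive or multiplicative blends push XY outside any sensible range.

```python
import math

def encode_xy(n):
    """Map a unit normal's x, y from [-1, 1] into two 10-bit UNORM
    channels. The z sign / basis selection lives in separate bits in
    the actual scheme and is omitted in this sketch."""
    x10 = round((n[0] * 0.5 + 0.5) * 1023)
    y10 = round((n[1] * 0.5 + 0.5) * 1023)
    return x10, y10

def decode_xy(x10, y10, z_sign=1.0):
    """Reconstruct the unit normal from the two 10-bit channels."""
    x = x10 / 1023 * 2.0 - 1.0
    y = y10 / 1023 * 2.0 - 1.0
    z = z_sign * math.sqrt(max(0.0, 1.0 - x * x - y * y))
    # Renormalize: after alpha-blending, the stored XY is a lerp of
    # two encodings, so (x, y, z) may drift slightly off unit length.
    l = math.sqrt(x * x + y * y + z * z)
    return (x / l, y / l, z / l)
```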

A really good thread on F+ vs deferred. As a guy who’s never worked with deferred rendering professionally (I’ve used it in a pet project), I’m kind of tempted to try it out myself, even though I might get disappointed. Deferred (tiled deferred) means “free” buffers for SSAO and SSR, plus deferred decals. F+ seems to me like the easier path to implement, even though Angelo Pesce’s opinion is different.

We’re getting more and more decent game and rendering engines which can be used for free.

Google released a rendering engine, Filament, which is reportedly very well documented:


Xenko has become a community project under a permissive MIT license (and it contains an implementation of the Bowyer-Watson tetrahedralization algorithm; I was looking for one to toy with probe-based lighting).

I’m dumb. I have a view-projection matrix and points which are behind the camera. I transform them to clip space and get positive z values. I stare dumbly at this trying to figure out what’s wrong. And then I realize that the w values are basically view-space depth, and w is negative for those points. Divide negative z by that negative w and voila, we get positive clip-space z!
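The effect is easy to reproduce with just the z/w rows of a projection matrix. A minimal sketch, assuming a D3D-style z-in-[0,1] convention where clip w equals view-space depth (the actual matrix convention in the post may differ):

```python
def project(p_view, n=0.1, f=100.0):
    """Apply only the z and w rows of a D3D-style perspective
    projection: z_clip = a*z + b, w_clip = z (view-space depth)."""
    x, y, z = p_view
    a = f / (f - n)
    b = -n * f / (f - n)
    z_clip = a * z + b
    w_clip = z                 # w is just view-space depth
    return z_clip, w_clip, z_clip / w_clip

# A point 5 units *behind* the camera: view-space depth is -5.
z_clip, w, z_ndc = project((0.0, 0.0, -5.0))
# Both z_clip and w come out negative, so z_ndc = z_clip / w is positive,
# exactly the confusing "positive z for a point behind the camera" case.
```

A point in front of the camera, by contrast, lands in the expected [0, 1] range after the divide.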