Steve posted a video of what we’ve been working on recently =)
We were heavily inspired by this awesome talk by Josh Hobson, and the first test of this technology was pretty much a straightforward implementation of what they did on God of War =)
If you google what "X3676: typed UAV loads are only allowed for single-component 32-bit element types" is, the first links refer to it as the most annoying DX11 compute error. I'd been lucky enough to avoid hitting it before, but nothing, including luck, lasts forever, so now I kind of see why the search results are like that. It seems that buffers with manual packing/unpacking are the only alternative that works with all formats (typeless r32 and r11g11b10 apparently don't get along well).
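For reference, the manual fallback boils down to packing the three small unsigned floats into a uint yourself. Here's a minimal CPU-side sketch in C (my own illustration, not code from any of the linked sources; it ignores rounding, denormals and NaN, and the same bit twiddling would be ported to HLSL over a raw or structured buffer of uints):

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

/* R11G11B10_FLOAT channels are unsigned floats: a 5-bit exponent (bias 15)
 * and a 6-bit (R, G) or 5-bit (B) mantissa, no sign bit. */

static uint32_t f32_bits(float v) { uint32_t b; memcpy(&b, &v, sizeof b); return b; }

static uint32_t f32_to_small(float v, int mbits) {
    uint32_t b = f32_bits(v);
    if ((int32_t)b <= 0) return 0;                 /* negative or zero -> 0 */
    int e = (int)((b >> 23) & 0xFF) - 127;         /* unbiased exponent     */
    uint32_t mmask = (1u << mbits) - 1;
    if (e < -14) return 0;                         /* flush tiny to zero    */
    if (e > 15)  return (30u << mbits) | mmask;    /* clamp to max finite   */
    uint32_t m = (b >> (23 - mbits)) & mmask;      /* truncate the mantissa */
    return ((uint32_t)(e + 15) << mbits) | m;
}

static float small_to_f32(uint32_t f, int mbits) {
    uint32_t e = f >> mbits, m = f & ((1u << mbits) - 1);
    if (e == 0) return 0.0f;                       /* treat denormals as 0  */
    uint32_t b = ((e - 15 + 127) << 23) | (m << (23 - mbits));
    float v; memcpy(&v, &b, sizeof v);
    return v;
}

uint32_t pack_r11g11b10f(float r, float g, float b) {
    return f32_to_small(r, 6) | (f32_to_small(g, 6) << 11) | (f32_to_small(b, 5) << 22);
}

void unpack_r11g11b10f(uint32_t p, float rgb[3]) {
    rgb[0] = small_to_f32(p & 0x7FF, 6);
    rgb[1] = small_to_f32((p >> 11) & 0x7FF, 6);
    rgb[2] = small_to_f32(p >> 22, 5);
}
```

Values whose mantissa fits in 6 (or 5) bits round-trip exactly; everything else just gets truncated, which is usually fine for HDR color but worth revisiting if you accumulate into the buffer.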
Favorite talks from Siggraph 2019:
How to implement tons of real-time shadowing lights in your engine (we do some very basic stuff mentioned in this talk — caching and atlasing).
The first Rockstar paper I've ever read — an amazing talk on volumetrics in their engine.
A collection of Siggraph links by Stephen Hill.
This GBuffer normals encoding deserves more attention than it gets:
I did some quick tests and it looks compact, relatively cheap in terms of encoding and decoding, and (tada!) it's blendable! 10:10 is not much, of course, but I didn't notice severe artefacts. Not everything is great: hardware blending support is awesome, for sure, but apparently decals will have to use alpha blending, simply because other blend modes will break the unpacking (we can't normalize XY correctly). Plus, we need to read the basis bits when rendering decals, and I don't know another way of doing that except copying that basis info into a separate texture.
A really good thread on F+ vs deferred. As a guy who's never worked with deferred rendering professionally (I used it in a pet project), I'm kind of tempted to try it out myself, even though I might get disappointed. Deferred (tiled deferred) = "free" buffers for SSAO and SSR, plus deferred decals. F+ seems to me like the easier path to implement, even though Angelo Pesce's opinion is different.
We're getting more and more decent game and rendering engines that can be used for free.
Google released a rendering engine, Filament, which is reportedly very well documented:
Xenko has become a community project under the permissive MIT license (and it contains an implementation of the Bowyer-Watson tetrahedralization algorithm; I was looking for one to toy with probe-based lighting).
I'm dumb. I have a view-projection matrix and points that are behind the camera. I transform them to clip space and get positive z values. I stare dumbly at this, trying to figure out what's wrong. And then I realize that the w values are basically view-space depth, and w is negative for those points. Divide the negative z by that negative w and voila, we get positive clip-space z!
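A tiny numeric illustration of the gotcha, assuming a D3D-style left-handed perspective projection (camera looking down +Z in view space, so w_clip is simply view-space depth; the near/far values are made up for the example):

```c
#include <assert.h>

/* For a point behind the camera, z_view < 0, so BOTH z_clip and w_clip come
 * out negative, and the perspective divide yields a positive z that looks
 * "in front" if you only test the post-divide value.
 *   z_clip = z_view * zf/(zf - zn) - zn*zf/(zf - zn),   w_clip = z_view   */
float project_ndc_z(float z_view, float zn, float zf,
                    float *z_clip, float *w_clip) {
    float A = zf / (zf - zn);        /* projection matrix element [2][2] */
    float B = -zn * zf / (zf - zn);  /* projection matrix element [3][2] */
    *z_clip = z_view * A + B;
    *w_clip = z_view;                /* w is just view-space depth */
    return *z_clip / *w_clip;
}
```

So the reliable behind-the-camera test is the sign of w (or z_view) itself, not the divided z.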
Curious whether it's a driver bug or I just missed something. There are two draw calls. The first one uses a depth-read-only view for stencil testing while reading from the same depth texture in the shader. The second one writes to that depth buffer. The symptoms look as if the driver doesn't synchronize those two draw calls, so, basically, we can start writing depth while the first draw call is still reading it. As a solution I bind a null depth buffer after the first draw call and then restore it, but I'm not sure if that's what I really had to do.
Some great talks:
Two awesome talks on precomputed lighting in Frostbite (a source of inspiration for our in-house lightmapping tech!)
Real-Time Raytracing for Interactive GI Workflows in Frostbite
Rendering and Antialiasing in Detroit: Become Human
Terrain rendering in Far Cry 5 (I'm not very familiar with the state of the art in this area, so it was a super interesting presentation to read)
The asset build system of Far Cry 5