This GBuffer normals encoding deserves more attention than it gets:

I did some quick tests, and it looks compact, relatively cheap in terms of encoding and decoding, and (tada!) it’s blendable! 10:10 is not much, of course, but I didn’t notice severe artefacts. Not everything is great: hardware blending support is awesome, for sure, but apparently decals will have to use alpha blending, simply because other blend modes will break the unpacking (we can’t normalize XY correctly). Plus, we need to read the basis bits when rendering decals, and I don’t know any way of doing that other than copying the basis info into a separate texture.
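
I won’t pretend I remember the exact scheme, but here is a minimal CPU-side sketch of the general idea as I understand it (dominant axis and its sign in a few “basis” bits, the two remaining components in the 10:10 channels; all names and details here are my guesses, not the original encoding):

```cpp
// A sketch of one possible "10:10 + basis bits" normal encoding: the two
// smaller components of the unit normal go into the 10-bit channels, the
// dominant axis and its sign go into the basis bits, and decode rebuilds
// the dominant component from the unit-length constraint.
#include <cmath>
#include <cstdint>

struct PackedNormal
{
    uint32_t x10, y10;  // 10-bit unorm channels
    uint32_t basis;     // axis * 2 + sign bit
};

static uint32_t toUnorm10(float v)      { return (uint32_t)((v * 0.5f + 0.5f) * 1023.0f + 0.5f); }
static float    fromUnorm10(uint32_t u) { return (float)u / 1023.0f * 2.0f - 1.0f; }

PackedNormal packNormal(const float n[3])
{
    // Pick the dominant axis of the (unit) normal.
    int axis = 0;
    if (std::fabs(n[1]) > std::fabs(n[axis])) axis = 1;
    if (std::fabs(n[2]) > std::fabs(n[axis])) axis = 2;
    uint32_t signBit = n[axis] < 0.0f ? 1u : 0u;

    // Store the two remaining components in the 10:10 channels.
    float a = n[(axis + 1) % 3];
    float b = n[(axis + 2) % 3];
    return { toUnorm10(a), toUnorm10(b), (uint32_t)axis * 2u + signBit };
}

void unpackNormal(const PackedNormal& p, float out[3])
{
    int   axis = (int)(p.basis / 2);
    float sign = (p.basis & 1) ? -1.0f : 1.0f;
    float a = fromUnorm10(p.x10);
    float b = fromUnorm10(p.y10);

    // Rebuild the dominant component so the result is unit length; this is
    // also why linearly blended 10:10 values still decode to a valid normal.
    float c = sign * std::sqrt(std::fmax(0.0f, 1.0f - a * a - b * b));
    out[(axis + 1) % 3] = a;
    out[(axis + 2) % 3] = b;
    out[axis] = c;
}
```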

A really good thread on F+ vs deferred. As a guy who’s never worked with deferred rendering professionally (I only used it for my pet project), I’m kind of tempted to try it out myself, even though I might get disappointed. Deferred (tiled deferred) gives you “free” buffers for SSAO and SSR, plus deferred decals. Forward+ seems like the easier path to implement to me, even though Angelo Pesce’s opinion is different.

We’re getting more and more decent game and rendering engines which can be used for free.

Google released a rendering engine, Filament, which is reportedly very well documented:

https://github.com/google/filament

Xenko has become a community project under the permissive MIT license (and it contains an implementation of the Bowyer-Watson tetrahedralization algorithm; I was looking for one to toy with probe-based lighting).

I’m dumb. I have a view-projection matrix and some points which are behind the camera. I transform them to clip space and get positive z values. I stare dumbly at this, trying to figure out what’s wrong. And then I realize that the w values are basically view-space depth, and w is negative for those points. Divide the negative z by that negative w, and voila, we get positive clip-space z!
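
A tiny sanity check with my own numbers, assuming a D3D-style left-handed projection with the camera looking down +Z (none of this is from a real scene, just the arithmetic):

```cpp
// Behind-the-camera points: clip-space z is negative, but so is w
// (w_clip is basically view-space depth), so z/w ends up positive.
#include <cstdio>

int main()
{
    const float zn = 0.1f, zf = 100.0f;

    // The depth-related part of a standard left-handed perspective matrix:
    // z_clip = z_view * A + B, w_clip = z_view.
    const float A = zf / (zf - zn);
    const float B = -zn * zf / (zf - zn);

    const float zView = -5.0f;          // behind the camera in this convention
    const float zClip = zView * A + B;  // about -5.1 (negative)
    const float wClip = zView;          // -5.0 (negative view-space depth)

    std::printf("z_clip = %.3f  w_clip = %.3f  z_ndc = %.3f\n",
                zClip, wClip, zClip / wClip);  // z_ndc is about +1.02: positive!
    return 0;
}
```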

Curious whether it’s a driver bug or I just missed something. There are two draw calls. The first one uses a depth-read-only view for stencil testing and reads from the same depth texture in the shader. The second one writes to that depth buffer. The symptoms look as if the driver doesn’t synchronize those two draw calls, so basically we can start writing depth while the first draw call is still reading it. As a workaround I set a null depth buffer after the first draw call and then restore it, but I’m not sure that’s what I really had to do.
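
A rough sketch of that workaround, assuming D3D11 (I’m not naming the API above, and all resource names and slots below are placeholders):

```cpp
// Draw 1 stencil-tests against a read-only depth view while sampling the
// depth texture; the depth binding is then dropped before draw 2 writes it.
#include <d3d11.h>

void drawWithDepthWorkaround(ID3D11DeviceContext* ctx,
                             ID3D11RenderTargetView* rtv,
                             ID3D11DepthStencilView* readOnlyDSV,   // created with D3D11_DSV_READ_ONLY_DEPTH
                             ID3D11DepthStencilView* writableDSV,
                             ID3D11ShaderResourceView* depthSRV,
                             UINT vertexCount0, UINT vertexCount1)
{
    // Draw 1: stencil test via the read-only depth view while the depth
    // texture is also bound as an SRV for the pixel shader.
    ctx->OMSetRenderTargets(1, &rtv, readOnlyDSV);
    ctx->PSSetShaderResources(0, 1, &depthSRV);
    ctx->Draw(vertexCount0, 0);

    // The workaround: unbind the depth SRV and set a null depth buffer
    // before the next draw touches depth.
    ID3D11ShaderResourceView* nullSRV = nullptr;
    ctx->PSSetShaderResources(0, 1, &nullSRV);
    ctx->OMSetRenderTargets(1, &rtv, nullptr);

    // Draw 2: restore a writable depth view and write depth.
    ctx->OMSetRenderTargets(1, &rtv, writableDSV);
    ctx->Draw(vertexCount1, 0);
}
```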

GDC 2018

Some great talks:

Two awesome talks on precomputed lighting in Frostbite (a source of inspiration for our in-house lightmapping tech!)

http://www.gdcvault.com/play/1025434/Precomputed-Global-Illumination-in

Real-Time Raytracing for interactive GI workflows in Frostbite

http://www.gdcvault.com/play/1024801/

Rendering and Antialiasing in Detroit: Become Human

http://gdcvault.com/play/1025420/Cluster-Forward-Rendering-and-Anti

Terrain rendering in Far Cry 5 (I’m not very familiar with the state of the art in this area, so it was a super interesting presentation to read)

http://gdcvault.com/play/1025480/Terrain-Rendering-in-Far-Cry

The asset build system of Far Cry 5

http://gdcvault.com/play/1025444/The-Asset-Build-System-of