Steve posted a video of what we’ve been working on recently =)
We were heavily inspired by Josh Hobson’s awesome talk, and the first test of this technology was based on pretty much a straightforward implementation of what they did on God of War =)
If you google what “X3676: typed UAV loads are only allowed for single-component 32-bit element types” is, the first links refer to it as the most annoying DX11 compute error. I’d been lucky and avoided hitting it before, but nothing, including luck, lasts forever, so now I kind of see why the search results are like that. It seems that raw buffers with manual packing/unpacking are the only alternative that works with all formats (typeless R32 and R11G11B10 apparently don’t get along well).
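A toy sketch of that workaround, in Python for readability: instead of a typed R11G11B10 UAV, write to a raw 32-bit buffer and shuffle the bits yourself. For simplicity this quantizes [0, 1] values as UNORM; the real R11G11B10_FLOAT format stores small floats, but the bit packing through an HLSL RWByteAddressBuffer follows the same shape.

```python
# Manual pack/unpack of three channels into one 32-bit word,
# as you would do with a raw (ByteAddressBuffer-style) UAV.
# NOTE: UNORM quantization is used here for illustration only;
# the actual R11G11B10_FLOAT format stores 11/11/10-bit floats.

def pack_r11g11b10(r, g, b):
    ri = min(int(r * 2047.0 + 0.5), 2047)  # 11 bits
    gi = min(int(g * 2047.0 + 0.5), 2047)  # 11 bits
    bi = min(int(b * 1023.0 + 0.5), 1023)  # 10 bits
    return ri | (gi << 11) | (bi << 22)

def unpack_r11g11b10(word):
    r = (word & 0x7FF) / 2047.0
    g = ((word >> 11) & 0x7FF) / 2047.0
    b = ((word >> 22) & 0x3FF) / 1023.0
    return r, g, b
```

The round trip loses at most half a quantization step per channel, which is the price of dodging the typed-UAV restriction.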
Favorite talks from Siggraph 2019:
How to implement tons of real-time shadowing lights in your engine (we do some very basic stuff mentioned in this talk — caching and atlasing).
The first Rockstar paper I’ve ever read — an amazing talk on volumetrics in their engine.
A collection of Siggraph links by Stephen Hill.
A reminder for myself — write descriptive and clear commit messages. Just today I had to fix a bug related to a year-old changelist whose comment said, basically, “fixed the bug”. 5 files and, I don’t know, 100 lines of not-so-obvious changes. What kind of bug? How was it fixed? Anything? The author doesn’t remember this code.
Will be saving my favorite presentations from GDC ’19:
Spider Man postmortem
Scene description in God of War
This GBuffer normals encoding deserves more attention than it gets:
I did some quick tests and it looks compact, relatively cheap in terms of encoding and decoding, and (tada!) it’s blendable! 10:10 is not much precision, of course, but I didn’t notice severe artefacts. Not everything is great — hardware blending support is awesome, for sure, but apparently decals will have to use alpha blending, simply because other blend modes will break the unpacking (we can’t normalize XY correctly). Plus, we need to read the basis bits when rendering decals, and I don’t know another way of doing that except copying that basis info into a separate texture.
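A heavily simplified sketch of the idea, in Python: store only normal.xy quantized to 10 bits each and reconstruct z as sqrt(1 - x² - y²) on decode. The real encoding also stores basis-selection bits so z is always non-negative in the chosen basis; that part is omitted here, so this version only handles z ≥ 0.

```python
import math

# Quantize normal.xy into two 10-bit values ([-1, 1] -> [0, 1023]).
# Reconstructing z from x and y is what makes the storage so compact,
# and storing plain xy (rather than, say, octahedral coords) is what
# keeps the format hardware-blendable.

def encode_xy(n):
    x, y, _ = n
    qx = round((x * 0.5 + 0.5) * 1023.0)
    qy = round((y * 0.5 + 0.5) * 1023.0)
    return qx, qy

def decode_xy(qx, qy):
    x = qx / 1023.0 * 2.0 - 1.0
    y = qy / 1023.0 * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))  # assumes z >= 0
    return x, y, z
```

This also shows why only alpha blending survives: blending operates on the quantized xy values, and any blend mode that pushes them outside the unit disk breaks the z reconstruction.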
Today I learned.
I kind of knew that usually (always?) there’s no actual pow instruction, at least in HLSL assembly, and that pow is expanded as:
pow(a, b) = exp(b * log(a))
However, this shouldn’t work when a is 0, should it? Because log(0) is NaN? Or something else? Only then did I realize that I didn’t know what log(0) evaluates to. So, the answer is — log(0) is -inf. -inf multiplied by any positive number is still -inf. And, last but not least, exp(-inf) gives us 0! Kind of amazing if you ask me.
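The chain above is easy to check. A small Python sketch mimicking the shader expansion (Python’s math.log(0.0) raises instead of returning -inf, so that case is handled explicitly to reproduce the IEEE-style behavior described above):

```python
import math

def shader_log(a):
    # IEEE-style log: log(0) flushes to -inf instead of raising.
    return float('-inf') if a == 0.0 else math.log(a)

def shader_pow(a, b):
    # The expansion GPUs use: pow(a, b) = exp(b * log(a)).
    return math.exp(b * shader_log(a))

print(shader_pow(2.0, 10.0))  # ~1024.0
print(shader_log(0.0))        # -inf
print(shader_pow(0.0, 3.0))   # exp(3 * -inf) = exp(-inf) = 0.0
```

Note that pow(0, 0) is still a problem under this expansion: 0 * -inf is NaN, which is why shader compilers warn about pow with a possibly-zero base.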
Finally cleared a window to an eye-bleeding green color and submitted the first version of my new pet project, “hikari” or 光, to GitHub. The next steps (a lot of them): pipeline state objects, which assume some basic resource loading and caching (because shaders), mesh loading, then some basic render graph implementation. At the pace I’m currently working, just those simple tasks will take forever =(
PhysX is becoming open-source, and this is a big deal. The physics subsystem is maybe the most complicated part of an engine (in terms of math, at least), and it’s really nice that Bullet is getting a competitor.