It seems the question “do I need to multiply or divide by pi here?” is one of the most difficult in computer graphics. And it’s wonderful that some people have taken the time to write posts and source code on this topic.
I have to add that MJP’s whole series on baking lighting is wonderful.
I’m grasping at every grain of information about the new Wolfenstein and can’t wait to see/read next year’s Siggraph presentation on its tech. Doom was one of the first titles to implement a Vulkan rendering path, and it seems Wolfenstein is one of the first built on this API. 30K draw calls, hail Mary full of grace! Volumetrics, async compute and, I’m pretty sure, tons of other great stuff!
And while we’re waiting, it’s a great moment to refresh our memory and read about what Doom’s rendering looked like:
An elegant, relatively simple, cheap and useful approach to rendering transparent objects
I’m playing The Order: 1886, and its graphics are something that couldn’t have been created without witchery and occult rituals =) A good reason to read some tech papers from Ready At Dawn:
And I must add one more link. This is the blog of Matt Pettineo, lead graphics programmer at RaD, and I’d say it’s an extremely useful and inspiring source of information if you want to learn about modern graphics techniques (and the source code is very clean and easy to read):
I’ve been working with this API for quite some time, and while it seemed ugly and unfamiliar at first, I eventually got used to it. Nevertheless, several things still cause pain:
- Half-pixel (half-texel) offset. This is a well-known topic, but every time I write a post-effect shader I end up spending time fixing yet another bug related to this D3D9 quirk (I’m just stupid, probably);
- You have to use FOURCC formats to read the depth buffer. This is a problem because even in 2016 you will find a user with a GPU/driver combination that doesn’t support this extension;
- All simultaneously bound render targets must have the same bit depth;
- We don’t have texture arrays, and it’s impossible to index a set of textures with a runtime value in a shader.
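The half-texel issue from the first bullet boils down to simple coordinate math. A small Python sketch (function names are mine, not D3D9 API): in D3D9 a naive fullscreen quad maps output pixel i to UV i/size, which lands on a texel edge instead of its center, so samples get averaged between neighboring texels.

```python
def d3d9_fullscreen_uv(pixel, size):
    """Naive UV a fullscreen quad produces for output pixel 'pixel'
    under D3D9's mapping rules: pixel i lands on UV i/size, which is
    a texel edge, not a texel center."""
    return pixel / size

def corrected_uv(pixel, size):
    """The usual fix: shift by half a texel so we sample texel
    centers, (i + 0.5)/size -- the mapping D3D10+ gives for free."""
    return (pixel + 0.5) / size

# Pixel 0 of a 256-wide target: the naive UV sits between texels,
# which is why uncorrected post-effect chains come out blurry.
print(d3d9_fullscreen_uv(0, 256))  # texel edge
print(corrected_uv(0, 256))        # texel center
```

Equivalently, the correction can be applied by offsetting the quad’s vertex positions by half a pixel instead of touching the UVs.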
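As for the depth-readback bullet: the vendor formats in question (such as INTZ) are identified by a FOURCC code, which is just four characters packed into a 32-bit value. A Python sketch of the packing, mirroring the MAKEFOURCC macro from the Windows headers (in real code the support check goes through IDirect3D9::CheckDeviceFormat):

```python
def make_fourcc(a, b, c, d):
    """Pack four characters into a 32-bit code, little-endian --
    the same layout as the MAKEFOURCC macro."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

# 'INTZ' is the vendor-specific FOURCC format NVIDIA/AMD drivers
# expose for sampling the depth buffer as a texture in D3D9.
INTZ = make_fourcc('I', 'N', 'T', 'Z')
print(hex(INTZ))
```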
Sharing is caring, and I really appreciate the Stingray team’s decision to explain how things work in their engine.
Rendering architecture: http://bitsquid.blogspot.ru/2017/02/stingray-renderer-walkthrough.html
And video explanations about various subsystems: https://www.youtube.com/playlist?list=PLUxuJBZBzEdxzVpoBQY9agA8JUgNkeYSV