For a detailed and excellent description of what deferred shading is, check out this paper.
(Note that I’ve used the terms “deferred rendering” and “deferred shading” interchangeably in this article. That’s because I think of it as deferred rendering, but Wikipedia and some articles call it deferred shading; the parts written while I was thinking about what I’d learnt say “deferred rendering”, while the parts where I was actually focusing on the writing itself say “deferred shading”.)
In brief, game engines today have two types of renderers: the classic forward renderer, and the deferred renderer (note that the latter is rarely used exclusively, but more on that later). The forward renderer renders one pass per light for every object lit by that light, which translates to a worst-case scenario of m*n
passes for a scene composed of m lit objects and n lights. Deferred rendering, on the other hand, renders the lighting for all objects affected by one light in a single pass. However, one additional pass is required for each object in order to store its depth and normals, so this translates to a worst-case scenario of m+n passes. The trade-off is that this information has to be stored (as far as I know) in the GPU’s video memory. Also, transparency is impossible with purely deferred shading (although I’ve read that a technique called depth peeling can be used to achieve it; I still have to read about that), as the information used for this single pass is the depth information of the screen (i.e. how far every pixel onscreen actually is from the viewport), so if there is a glass sheet in front of the viewport and a mesh behind the sheet, the depth information for the parts of the mesh covered by the sheet will be lost. Because of this, most engines that I’ve heard of tend to use a hybrid sort of renderer, with most of the rendering taken care of by the deferred renderer, and transparency handled in a separate pass by the forward renderer.
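Under those worst-case assumptions (every light touches every object), the difference is easy to see with some toy arithmetic. This is just a sketch to illustrate the pass counts; the numbers are made up:

```python
def forward_passes(objects, lights):
    # forward rendering: one lighting pass per (object, light) pair
    return objects * lights

def deferred_passes(objects, lights):
    # deferred rendering: one geometry pass per object to store
    # depth/normals, then one screen-space lighting pass per light
    return objects + lights

m, n = 100, 50  # hypothetical scene: 100 lit objects, 50 lights
print(forward_passes(m, n))   # 5000 passes
print(deferred_passes(m, n))  # 150 passes
```

The gap obviously widens as you add lights, which is exactly why an engine that wants “lots and lots of lights” would care.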
(I’ve also read that anti-aliasing is hard to do with deferred shading on DirectX 9 hardware; I don’t know why that is yet.)
So, I’ve been interested in deferred shading ever since Raven Software mentioned in an article that they decided to rewrite the id Tech 4 engine with a deferred renderer solely because they wanted lots and lots of lights in their scenes. They seemed pretty enthusiastic about the technique back then, which piqued my interest, so I read up on everything I could understand about deferred rendering, tried out this excellent tutorial on deferred rendering in XNA, and that was it.
So, once I started flirting with graphics programming again, it seemed logical that deferred rendering be the first thing I turn to. I’ve also been dying to see if I can implement Crytek’s Light Propagation Volumes technique for approximating global illumination. It occurred to me that instead of writing throwaway demos for each graphical technique I experiment with, it would be better if I had some sort of boilerplate code that I could use over and over again to try out new techniques. Then it occurred to me that I would also need some sort of scene manager and material manager to make my work easier.
And then I remembered OGRE, which I’d previously abandoned because most of the demos ran too slowly on my laptop (not an issue now, since I’ve upgraded), and which will be the topic of the next post.
So, to sum it all up, forward rendering - bad, deferred rendering - good, next time - OGRE.