In XNA (and Direct3D in general, AFAIK), rather than creating individual vertex and fragment shaders, you bundle potentially many related shaders into 'effects'. When you come to use an effect, you select a 'technique' (or iterate through all of them), and each technique has a number of 'passes'. You loop through the passes; each one selects the relevant vertex and fragment shaders, and you draw the geometry.
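For concreteness, a minimal sketch of that loop in XNA 4.0 style (the effect, technique name, and vertex/primitive counts are placeholders; XNA 3.1 used pass.Begin()/End() instead of Apply()):

```csharp
// Assumed: 'effect' is a loaded Effect, geometry is already bound to the device.
effect.CurrentTechnique = effect.Techniques["MyTechnique"]; // pick a technique by name

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply(); // binds this pass's vertex and pixel shaders
    GraphicsDevice.DrawIndexedPrimitives(
        PrimitiveType.TriangleList, 0, 0, vertexCount, 0, primitiveCount);
}
```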
What I'm curious about, though, is what you would need multiple passes for. I understand that in the early days of 3D you only had one texture unit, then two, and that often still wasn't enough if you also wanted to do environment mapping. But on modern devices, even mobile-class ones, you can sample eight or more textures and do as many lighting calculations as you'd want in a single shader.
Can the more experienced here offer practical examples of where multi-pass rendering is still needed, or is this just over-engineering on the part of XNA/Direct3D?
Answer
Any time you're doing post-process effects on a set of captured buffers is a reasonable time to use a multi-pass effect, e.g. toon outlining, motion blur, SSAO, or bloom. The idea is that the first pass renders everything out to the appropriate buffers, and each subsequent pass handles one post-process effect.
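A sketch of that capture-then-process structure in XNA 4.0 terms (the render target, the bloom effect, and DrawScene are placeholders, not a specific API):

```csharp
// Assumed fields, set up elsewhere: sceneTarget is a RenderTarget2D matching the
// backbuffer, bloomEffect is a loaded post-process Effect, spriteBatch a SpriteBatch.

// Pass 1: render the whole scene into an offscreen buffer.
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Black);
DrawScene(); // placeholder for your normal scene rendering

// Pass 2: draw that buffer to the backbuffer through the post-process shader.
GraphicsDevice.SetRenderTarget(null); // null = back to the backbuffer
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
    SamplerState.LinearClamp, DepthStencilState.None,
    RasterizerState.CullNone, bloomEffect);
spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
spriteBatch.End();
```

Each further post-process effect would repeat pass 2 with a different effect, ping-ponging between render targets.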
As Ranieri mentioned regarding the XNACC example, they're using multiple passes to render a huge number of lights, though in that example they're executing the same pass once per light.
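A hedged sketch of that per-light pattern: additive blending accumulates each light's contribution with one draw per light (the parameter name "LightPosition" and the other identifiers are hypothetical):

```csharp
// Assumed: 'lightingEffect' is an Effect with a per-light parameter, 'lights' is a
// list of light positions, and the geometry is already bound to the device.
GraphicsDevice.BlendState = BlendState.Additive; // sum each light's contribution

foreach (Vector3 light in lights)
{
    lightingEffect.Parameters["LightPosition"].SetValue(light);
    foreach (EffectPass pass in lightingEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawIndexedPrimitives(
            PrimitiveType.TriangleList, 0, 0, vertexCount, 0, primitiveCount);
    }
}

GraphicsDevice.BlendState = BlendState.Opaque; // restore the default
```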
Keep in mind that Shader Model 2 and earlier allow far fewer instructions per shader than Shader Model 3 or 4, which is yet another reason to split an effect into multiple passes: to support older hardware.
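For instance, an effect file might split lighting across two passes so each pixel shader stays within ps_2_0's instruction budget; a sketch (the shader function names are hypothetical):

```hlsl
technique TwoPassLighting
{
    // First pass: ambient term plus one light, small enough to compile for ps_2_0.
    pass Base
    {
        VertexShader = compile vs_2_0 CommonVS();
        PixelShader  = compile ps_2_0 BasePS();
    }
    // Second pass: additively blend the remaining lighting on top.
    pass Extra
    {
        AlphaBlendEnable = true;
        SrcBlend         = One;
        DestBlend        = One;
        VertexShader = compile vs_2_0 CommonVS();
        PixelShader  = compile ps_2_0 ExtraPS();
    }
}
```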