When I first searched for the discard instruction, I found experts saying that using discard results in a performance hit. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to run the fragment shader for both objects first to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth-test and depth-write. I draw all objects sorted by their depth, and that's all; there's no need for the GPU to do anything fancy. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?
Answer
Graphics hardware can perform early depth-based culling of fragments before computing their color value (in other words, before running your fragment shader). Consequently, if you use any feature that would affect that, such as discard, alpha-testing, or manipulating gl_FragDepth, the hardware's ability to do that optimization will be compromised, since the true depth of the fragment cannot be assumed and the full shader must be run.
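For concreteness, the kind of shader the question is about might look like this alpha-cutout sketch (GLSL; the sampler, varying, and threshold names are illustrative, not from the original post):

```glsl
#version 330 core

in vec2 vUV;                 // texture coordinates from the vertex shader
out vec4 fragColor;

uniform sampler2D uSprite;   // hypothetical sprite texture
uniform float uCutoff;       // hypothetical alpha-test threshold, e.g. 0.5

void main()
{
    vec4 texel = texture(uSprite, vUV);

    // Discarding here means the hardware can no longer assume this fragment
    // will actually be written, so early depth/stencil rejection may be
    // disabled for draws using this shader.
    if (texel.a < uCutoff)
        discard;

    fragColor = texel;
}
```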
Whether or not the use of any of those compromising features has a net observable performance impact depends on the situation, though. The early-z optimization can improve performance if you have very expensive fragment shaders, for example, but if the cost of your pipeline is in the vertex shader (or elsewhere) it won't benefit you as much, and consequently you may see little or no performance degradation by using discard.
Disabling the depth test entirely via the API should prevent the optimization from running as well, since culling fragments early against a test you've turned off could result in incorrectly rendered scenes. In your case, then, it shouldn't matter that you use discard.
Recent hardware can force the tests (including early stencil tests) using layout(early_fragment_tests) -- there is more information (and caveats) on this on the page I linked at the beginning of the answer.
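As a minimal sketch of what that qualifier looks like (assuming a GL 4.2-class context; the sampler and threshold are again illustrative), note the caveat that once the early test is forced, discard can no longer prevent the depth write, because it has already happened by the time the shader runs:

```glsl
#version 420 core

// Force the depth/stencil tests (and the depth write) to run before this
// shader executes.
layout(early_fragment_tests) in;

in vec2 vUV;
out vec4 fragColor;

uniform sampler2D uSprite;   // hypothetical texture, as in the sketch above

void main()
{
    vec4 texel = texture(uSprite, vUV);
    if (texel.a < 0.5)
        discard;             // suppresses the color write, but the depth
                             // value was already written by the early test
    fragColor = texel;
}
```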