I know this question might sound a bit easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing.
These are the methods I've thought about so far:
First off, I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they may have curves and may loop around each other, so I can't always tell which one is closer to the camera.
Then I thought about sorting triangles, but this might fail too. Apart from not being sure how to implement it, there is a rare case that again causes a problem: two triangles passing through each other. Again, no one can tell which one is nearer.
The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned. But now we get another problem: since objects might be transparent, more than one object might be visible in a single pixel. For which object should I store the pixel depth?
I then thought maybe I could store only the depth of the frontmost object and use it to determine how to blend subsequent draw calls at that pixel. But again there was a problem: think of two semi-transparent planes with a solid plane between them. If I rendered the solid plane last, one could still see the most distant plane through it. Note that I was going to merge every two planes until there was only one color left for that pixel. And obviously I can't use sorting methods here either, for the same reasons I explained above.
Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how I can implement this algorithm.
Answer
Short Answer
Look into depth peeling. From my research it seems to be the best alternative, although computationally expensive because it requires multiple rendering passes. Here's another more recent and faster implementation, also by NVIDIA.
Long Answer
That is a tough question. Most books I've read skim over the subject and leave it at:
Start by rendering all of the opaque objects and then blend the transparent objects on top of them in back-to-front order.
Easier said than done, though, because the obvious approach of sorting objects by their centroids does not guarantee the correct sort order; a large object whose centroid is far away can still have parts that are in front of a smaller, nearer object.
It's exactly the same reason why the painter's algorithm does not work in the general case and a depth buffer is needed.
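To make that recipe concrete, here is a minimal C++/OpenGL sketch of the textbook approach. The Object type and its members are hypothetical placeholders; only the GL state calls are real:

```cpp
#include <algorithm>
#include <vector>
#include <GL/gl.h>

// Hypothetical object type, for illustration only.
struct Object {
    bool  transparent;
    float distanceToCamera;   // assumed updated once per frame
    void  draw() const;
};

void renderScene(std::vector<Object>& objects) {
    // 1. Opaque objects first, with normal depth testing and writing.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    for (const Object& o : objects)
        if (!o.transparent) o.draw();

    // 2. Sort transparent objects back-to-front. This is the fragile step:
    //    a single distance per object cannot order interpenetrating or
    //    mutually overlapping geometry correctly.
    std::vector<const Object*> transparent;
    for (const Object& o : objects)
        if (o.transparent) transparent.push_back(&o);
    std::sort(transparent.begin(), transparent.end(),
              [](const Object* a, const Object* b) {
                  return a->distanceToCamera > b->distanceToCamera;
              });

    // 3. Blend them over the opaque scene.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (const Object* o : transparent)
        o->draw();
}
```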
With that said, one of the books I have mentions a few solutions:
Depth Peeling - a multi-pass solution that overcomes the depth buffer's limitation by giving us the nth nearest fragments, not just the closest one. The biggest advantage is that you can render the transparent objects in any order, and there's no need to sort. It can be expensive because of the multiple passes, but the link I gave at the top seems to improve performance. (A condensed sketch of the technique follows this list.)
Stencil Routed K-Buffer - use stencil routing to capture multiple layers of fragments per pixel per geometry pass. The main disadvantage is that the fragments need to be sorted in a post-processing pass.
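As promised, here is a heavily condensed sketch of the depth peeling loop, assuming a modern OpenGL context with a loader. bindLayerFramebuffer, bindPreviousDepthTexture, drawTransparentGeometry, and drawFullScreenLayer are hypothetical helpers; the GL calls and the shader logic are the real core of the technique:

```cpp
#include <GL/gl.h>

// Fragment-shader core of one peel pass (GLSL, shown as a string for context).
// Fragments at or in front of the previously peeled layer are discarded, so
// the ordinary depth test then selects the *next* nearest layer.
const char* kPeelFragmentLogic = R"(
    uniform sampler2D prevDepth;   // depth of the layer peeled in the last pass
    uniform bool firstPass;
    void main() {
        float prev = texelFetch(prevDepth, ivec2(gl_FragCoord.xy), 0).r;
        if (!firstPass && gl_FragCoord.z <= prev) discard;
        // ... shade and output the fragment color as usual ...
    }
)";

void depthPeel(int numLayers) {
    for (int pass = 0; pass < numLayers; ++pass) {
        bindLayerFramebuffer(pass);          // hypothetical: FBO with color + depth
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_BLEND);                 // each layer is captured unblended
        bindPreviousDepthTexture(pass);      // hypothetical: depth from pass - 1
        drawTransparentGeometry();           // hypothetical: any submission order works
    }
    // Composite the captured layers back-to-front over the opaque scene.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (int pass = numLayers - 1; pass >= 0; --pass)
        drawFullScreenLayer(pass);           // hypothetical: textured quad per layer
}
```

Note how the sort order problem disappears: each geometry pass extracts exactly one depth layer per pixel, so only the fixed, known ordering of the layers matters at composite time.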
It also mentions a hardware solution to the problem, but I don't think it's actually available:
- F-Buffer - A Rasterization-Order FIFO Buffer for Multi-Pass Rendering. Nonetheless it's a good read, and the introduction also talks a bit about the transparency sort order problem and the current solutions.
Other workarounds that don't provide perfect results but are better than nothing:
- After rendering all opaque objects, keep using Z-buffer testing for the transparent objects but disable Z-buffer writing (see the snippet below). You might get some artifacts from incorrect sorting, but at least all transparent objects will be visible.
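In OpenGL this workaround boils down to a couple of state changes before submitting the transparent geometry. A minimal sketch; drawTransparentGeometry is a hypothetical placeholder:

```cpp
// After all opaque geometry has been drawn and the depth buffer is filled:
glEnable(GL_DEPTH_TEST);    // keep testing, so opaque surfaces still occlude
glDepthMask(GL_FALSE);      // stop writing depth for transparent fragments
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparentGeometry();  // hypothetical: transparent objects, any order
glDepthMask(GL_TRUE);       // restore depth writes for the next frame
```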
And quoting the F-buffer whitepaper above:
The simplest solution is to render each partially transparent polygon completely independently (i.e. render all of its passes before proceeding to the next polygon). This solution is usually prohibitively expensive due to the state-change cost which is incurred. Alternatively, the application or shading library can group polygons to ensure that only non-overlapping polygons are rendered together. In most cases this solution is not attractive either, because it requires the software to perform screen-space analysis of polygons.