Thursday, September 26, 2019

opengl - GL_DEPTH_COMPONENT vs GL_DEPTH_COMPONENT32


I googled like crazy and checked the OpenGL documentation, but I couldn't find out how the precision of a depth buffer created with GL_DEPTH_COMPONENT is chosen.



As far as I know, the precision is implementation-dependent. But how is it chosen? Will the implementation always use the highest precision available, or rather the fastest one? Or is there no general answer?


Should I always go with GL_DEPTH_COMPONENT32 instead?



Answer



For a basic, unsized internal format like GL_DEPTH_COMPONENT, the implementation chooses the resolution. It may use the format and type parameters you pass as a hint when picking an internal format, but beyond that the actual precision is entirely up to the implementation.
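
If you want to see what you actually got, you can query the texture after creating it. Below is a minimal sketch, assuming a current OpenGL 3.0+ context and a loader header such as glad (the helper name create_depth_texture is just for illustration); it allocates a depth texture with the unsized format and then asks the driver how many bits were really allocated:

    #include <glad/glad.h>
    #include <stdio.h>

    GLuint create_depth_texture(GLsizei width, GLsizei height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Unsized internal format: the implementation picks the precision.
           The format/type pair below may be used as a hint, but nothing
           beyond the spec's minimum requirements is guaranteed. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

        /* Ask how many depth bits were actually allocated. */
        GLint depth_bits = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE,
                                 &depth_bits);
        printf("depth texture precision: %d bits\n", depth_bits);

        return tex;
    }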


GL_DEPTH_COMPONENT32 is probably overkill, and as of OpenGL 4.5 it isn't a format that implementations are required to support (the 16- and 24-bit formats are). Also consider that higher-resolution depth buffers may not be as performant.
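
If you do need a guaranteed precision, request one of the required sized formats explicitly. A rough sketch, again assuming an OpenGL 3.0+ context (the helper name create_depth_renderbuffer is just illustrative; FBO setup and validation are elided):

    /* Request a 24-bit depth renderbuffer explicitly; GL_DEPTH_COMPONENT24
       is a required renderable format, so it will not silently shrink. */
    GLuint create_depth_renderbuffer(GLsizei width, GLsizei height)
    {
        GLuint rbo = 0;
        glGenRenderbuffers(1, &rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                              width, height);
        return rbo;
    }

Attach it to your framebuffer object with glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo) as usual.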


The exact requirements are documented in the OpenGL specification. In the 4.5 Compatibility Profile specification, they are covered in section 8.5, page 230.
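
As a side note, you can also check at runtime what the default framebuffer gives you. A small sketch: the legacy GL_DEPTH_BITS query only exists in a compatibility context, while core profiles query the default framebuffer's depth attachment instead (with the default framebuffer bound, i.e. glBindFramebuffer(GL_FRAMEBUFFER, 0)):

    GLint depth_bits = 0;

    /* Compatibility profile: legacy query. */
    glGetIntegerv(GL_DEPTH_BITS, &depth_bits);

    /* Core profile (GL 3.0+): query the default framebuffer's depth buffer. */
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
                                          GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE,
                                          &depth_bits);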

