It's actually a bunch of inter-related questions:
1. Are compressed textures (DXT5, DXT1, etc.) ever completely decompressed while going through the rendering pipeline?
2. If the answer to the first question is yes, how is memory managed for several large uncompressed textures?
3. Is the framebuffer any different from VRAM in a modern GPU?
Answer
GPU compressed texture formats like DXT / BC / ETC are specifically designed to be read directly from their compressed form. They don't need to be unpacked into a raw RGBA buffer.
The way this works is that each block of texels (often 4x4) takes up a fixed number of bits, so we know exactly how far along in the buffer to look for a particular texel, and each block can be decompressed without reading any of the surrounding or preceding texture data. GPUs contain specialized hardware that decompresses just the requested texel blocks as needed to fulfill texture sampling requests from your shaders.
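To make that concrete, here's a minimal sketch of the address arithmetic this enables, assuming the usual row-major block layout: BC1/DXT1 stores every 4x4 block in exactly 8 bytes, so locating the block that holds a given texel is just a couple of divisions, with no need to touch the rest of the image.

```c
#include <stddef.h>

/* Byte offset of the 4x4 block containing texel (x, y) in a BC1/DXT1
 * texture, assuming blocks are stored in row-major order.
 * BC1 spends exactly 8 bytes (64 bits) on each 4x4 block, which is
 * what makes this constant-time random access possible. */
size_t bc1_block_offset(size_t width, size_t x, size_t y)
{
    size_t blocks_per_row = (width + 3) / 4;   /* round up to whole blocks */
    size_t block_x = x / 4;
    size_t block_y = y / 4;
    return (block_y * blocks_per_row + block_x) * 8;
}
```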
This is in contrast to formats like JPEG and PNG, where the amount of space each texel takes up can vary across the image (detailed areas taking up more data, predictable areas less), so to find a particular texel you have to decompress the whole image, or at least large, scattered chunks of it. But because they can spend fewer bits on the predictable areas of an image, these formats tend to compress to smaller sizes for storage on disk or transmission over a network than GPU-friendly formats do. Different strategies for different uses.
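To tie this back to question 1 at the API level, here's a sketch of handing DXT5/BC3 block data to the GPU exactly as it sits in the asset file, with no CPU-side expansion to RGBA anywhere in the path. It assumes an OpenGL context with the EXT_texture_compression_s3tc extension (and headers/loader exposing GL 1.3+), and that the `.dds` parsing happens elsewhere; `upload_dxt5` and `dds_data` are my own placeholder names.

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* Upload pre-compressed DXT5/BC3 block data straight to the GPU. */
GLuint upload_dxt5(const void *dds_data, GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* DXT5/BC3: each 4x4 block occupies 16 bytes. */
    GLsizei image_size = ((w + 3) / 4) * ((h + 3) / 4) * 16;

    /* Hand the compressed blocks to the driver as-is; the texture stays
     * compressed in video memory, and the sampler hardware decodes
     * individual blocks as shaders read them. */
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           w, h, 0, image_size, dds_data);
    return tex;
}
```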
Generally, asking multiple questions should be done via multiple posts, but since this is a pretty short answer I'll hit point 3 too:
"Framebuffer" is just a particular bit of video memory that we've decided to use to store the composed image we want to present to the screen. Do note the details in JarkkoL's answer, where on some specialized hardware we might choose to locate this buffer in a particular part of our available video memory that's optimized for the bandwidth needs of render targets.