I'm currently trying to generate a really large procedural terrain in WebGL. I use a quadtree for LOD and plan to generate one heightmap for each quadtree node (terrain patch).
The heightmaps are generated on the GPU, and with such a large terrain, 32-bit floats don't have enough precision for the smaller terrain patches, which makes the terrain look "blocky". This article uses bicubic surfaces to smooth things out.
I've thought a lot about it, and the only other solution I have is to generate the smallest terrain patches on the CPU (with double precision) instead of the GPU, while continuing to generate the larger patches on the GPU. But this doesn't seem like the best approach. Does anyone have an idea for a better solution?
Also, I can't use double precision on the GPU because I'm using WebGL, which doesn't support it.
Thanks a lot :)
Answer
At the radius of the Earth, with the center at the origin, 32-bit floating point offers us about half-metre precision laterally, so that's not enough to position centimetre-level detail.
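You can verify that half-metre figure directly: a 32-bit float has a 24-bit significand, so the gap between adjacent representable values near a given magnitude follows from the exponent alone. A quick sketch (the `ulp32` helper is mine, for illustration):

```typescript
// Spacing between adjacent 32-bit floats near magnitude x:
// ulp32(x) = 2^(floor(log2 |x|) - 23), since float32 has a 24-bit significand.
function ulp32(x: number): number {
  const exp = Math.floor(Math.log2(Math.abs(x)));
  return Math.pow(2, exp - 23);
}

const earthRadius = 6_371_000; // metres
console.log(ulp32(earthRadius)); // → 0.5 (half a metre between representable positions)
```

So at ~6,371 km from the origin, positions snap to a half-metre grid — hopeless for centimetre detail.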
So, our first step is not working that far from the origin. I presume you're already breaking your terrain into chunks and positioning them close to the origin where the player/viewer is. The player's grand-scale position can be stored elsewhere with low precision and updated as we move from chunk to chunk, re-centering as we go. That gives us enough precision to place our vertices accurately.
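As a hedged sketch of that re-centering (names and the chunk size are illustrative, not from the question): keep the grand-scale position as exact integer chunk coordinates, and only ever hand the renderer a small float offset from the current chunk's origin.

```typescript
const CHUNK_SIZE = 1024; // metres per chunk — an example value

interface ViewerPos {
  chunkX: number; // integer chunk coordinate: exact at any distance
  localX: number; // float offset within/near the chunk: small, hence precise
}

// Fold whole chunks out of the float offset so it stays near zero.
function recenter(p: ViewerPos): ViewerPos {
  const shift = Math.floor(p.localX / CHUNK_SIZE);
  return { chunkX: p.chunkX + shift, localX: p.localX - shift * CHUNK_SIZE };
}
```

The same idea extends to y/z; the key point is that the large number is an integer (no rounding) and the float stays small.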
Now we just need to generate those positions accurately, and we can do this by applying the same chunking logic internally too — separating where the chunk is in the grand scheme of the whole planet from the working space numbers we use for our local detail.
Let's say you're using something like a Perlin noise function in your generator. This will generally involve doing something like...
Flooring/rounding the input position to an integer lattice
Computing a relative position within that lattice
Using the integer lattice to sample pseudo-random values / gradients at the closest corner points
Using the relative position to blend between these random values
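Those four steps can be sketched in one dimension with value noise (the hash below is an invented example, not from the original answer — any integer hash works):

```typescript
// Cheap integer hash → pseudo-random value in [0, 1). Illustrative only.
function hash(i: number): number {
  let h = Math.imul(i, 0x9e3779b1) >>> 0;
  h = (h ^ (h >>> 15)) >>> 0;
  return h / 4294967296;
}

function valueNoise1D(x: number): number {
  const i = Math.floor(x);            // 1. floor to the integer lattice
  const f = x - i;                    // 2. relative position in the cell
  const a = hash(i), b = hash(i + 1); // 3. pseudo-random values at the corners
  const t = f * f * (3 - 2 * f);      // 4. blend between them (smoothstep)
  return a + (b - a) * t;
}
```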
We can see we have:
a component that needs global information (where we are in the planet) at low precision (integers)
a component that needs only local information (where we are within this lattice cell) at high precision
We can separate these two early in our generation process, rather than at the last moment inside the noise function itself, then pass the pre-separated high- and low-precision values down into our noise functions to get high-precision outputs, even for chunks that are logically much too far from the origin to have such precision.
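Concretely, that means the noise function's signature changes: instead of one huge global coordinate, it receives the lattice cell as an exact integer plus a small local offset. A minimal sketch under those assumptions (`cornerRand` is again an invented hash):

```typescript
// Illustrative integer hash → [0, 1); not from the original answer.
function cornerRand(i: number): number {
  let h = Math.imul(i ^ 0x27d4eb2d, 0x85ebca6b) >>> 0;
  h = (h ^ (h >>> 13)) >>> 0;
  return h / 4294967296;
}

// `cell` is exact however far from the origin it is; `local` is in [0, 1),
// so no large-magnitude subtraction (and no precision loss) happens here.
function noiseAtCell(cell: number, local: number): number {
  const a = cornerRand(cell);
  const b = cornerRand(cell + 1);
  const t = local * local * (3 - 2 * local);
  return a + (b - a) * t;
}
```

On the GPU side the same split applies: upload the cell as an integer (or a pair of floats encoding it) and the offset as a float, rather than their sum.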
When we're ready to generate a new chunk of terrain, we calculate the global lattice coordinates of a master point in our patch (say, the bottom-left corner, or the center), and the relative offsets of each of the four corners from that master point.
Those relative offsets may still suffer some rounding, depending on how you calculate them, but if we structure the calculation right, it rounds the same way when computed from either side of a chunk seam, so adjacent chunks agree and show a perfectly welded join.
Now our local offsets are in a controlled range (say, tenths to hundreds, depending on the granularity of our base-level lattice unit), and we can interpolate between them to get the intermediate points with much higher precision. That then gives us high-precision position inputs to pass to our noise sampling functions.
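A one-dimensional sketch of that per-chunk setup (names and parameters are hypothetical): the master point is derived from the chunk index with exact integer arithmetic, and every vertex is a small float offset from it, so two neighbouring chunks compute identical global positions for their shared seam vertex.

```typescript
interface ChunkInput {
  masterCell: number;     // global lattice coordinate of the chunk origin (exact integer)
  offsets: Float32Array;  // per-vertex offsets in lattice units (small floats)
}

function chunkInputs1D(chunkIndex: number, chunkCells: number, verts: number): ChunkInput {
  const masterCell = chunkIndex * chunkCells; // exact integer math — no rounding
  const offsets = new Float32Array(verts);
  for (let v = 0; v < verts; v++) {
    // Small numbers in [0, chunkCells], so float32 keeps full precision here.
    offsets[v] = (v / (verts - 1)) * chunkCells;
  }
  return { masterCell, offsets };
}
```

The last vertex of chunk k and the first vertex of chunk k+1 both reduce to the same exact global position, which is the welded-seam property described above.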