Monday, September 17, 2018

mathematics - How to linearly transform vertices for large coordinate systems


Alright, so I'm working on a space game and, well, I want it to have a lot of space! So I need a large coordinate system, and I need my rendering pipeline not to suffer from "vertex jittering" caused by floating-point rounding errors introduced in the world-to-view-space matrix transformation.


So far I'm having some luck with a technique discussed in GPU Pro that was used for Just Cause 2. However, it's not quite there. It looks like the "world" (including the player's ship) is orbiting around my camera, instead of the camera adjusting its orientation and position to stay behind the player's ship when it rotates. It doesn't do this with the traditional World x View x Projection transformation.


Anywho, the technique basically merges the translation components of the world and view matrices by translating the object directly by its offset from the camera.


The transformation is


objectScale x objectRotation x objectOffsetFromViewPosition x viewRotation x Projection


Where the offset is a translation matrix built from the object's world position minus the view position.


I've always built my view matrix simply using the DirectXMath library's XMMatrixLookAtLH() function, so I'm thinking I might be missing another vital aspect of the view-matrix transformation. My view rotation matrix at the moment is built only from a quaternion representing the camera's orientation.


What am I missing or what is the correct way to build this transformation?



Answer




Alright, it looks like I solved my problem. The modified transformation presented in the question was correct. However, I was able to remedy the odd rotational movement by inverting the view's rotation matrix. (I realized the camera was rotating in the opposite direction from what I wanted.)


Final transformation order


Scale x Rotation x camOffset x InvertMatrix(viewRotation) x projection


As for jittering due to floating-point errors in dealing with my coordinate system and placing objects (such as the camera), I ended up using 64-bit integers instead of 32-bit floats. Now I can be accurate down to a millimeter across a coordinate system as large as our solar system (and with more precision if needed!).


This idea was provided with the help of some GDSE users in the chat room and this page.


As for the depth buffer, it too experiences floating-point issues, in the form of z-fighting between large, distant astral bodies. To remedy this, I render in two passes based on distance from the camera: astral bodies that won't cause problems for my depth buffer are run through my standard pipeline, while distant bodies are rendered with a painter's algorithm, without writing to the depth buffer.

