I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw this I go through the array using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh being tiny I scale this by a certain amount called 'gridsize'.
This produces a fairly nice and lumpy, angular terrain kind of similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST).
I'm now trying to work out how to do collision with the terrain, testing to see if the player is about to collide with a piece of scenery sticking upwards or fall into a hole.
At the moment I simply divide the player's co-ordinates by the gridSize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangular piece of terrain.
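A minimal sketch of that naive nearest-vertex lookup might look like the following. The names `heightmap`, `mapWidth`, `HeightAtNearestVertex` and the row-major layout are assumptions for illustration, not from the original code:

```cpp
#include <cmath>

// Hypothetical sketch of the nearest-vertex lookup described above.
// Rounds to the closest grid vertex rather than truncating, so the
// player snaps to the nearest corner instead of the lower-left one.
float HeightAtNearestVertex(const float* heightmap, int mapWidth,
                            float gridSize, float px, float pz)
{
    int ix = (int)std::lround(px / gridSize);
    int iz = (int)std::lround(pz / gridSize);
    return heightmap[iz * mapWidth + ix];  // assumed row-major storage
}
```

This only ever returns the height of an existing vertex, which is exactly the limitation described below.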
However...
How can I make it more accurate for the bits between the vertices? I get confused since those points don't exist in my heightmap data; they're a product of the GPU interpolating between three vertices as it rasterizes a triangle. I can calculate the height of the vertex closest to the player, but not the space between vertices.
I.e. if the player is hovering over the centre of one of these 'quads', rather than over a corner vertex, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
Answer
Trying to interpolate across the triangles and working out which triangle to use was working, but it was strangely jittery if I used the data to move a player across the surface of my heightmap.
A bit of Googling turned up this page http://www.gamesandcode.com/blog/xna-project/rolling-the-ball which shows some XNA code for rolling a ball across a heightfield.
In that post, the code uses bilinear interpolation to work out the height from the whole 'quad', which is accurate enough for what I want (and, now I think about it, is probably what OpenGL does when drawing these pieces of geometry anyway).
Here is the code I managed to create
(position.x and position.z are the player's co-ords; gridSize is the width of the 'quads' in GL co-ordinates)
float xpos = position.x / gridSize;
float zpos = position.z / gridSize;

// modf already returns the fractional part, which is the player's
// position across the quad in the 0..1 range.
double intpart;
float modX = (float)modf(xpos, &intpart);
float modZ = (float)modf(zpos, &intpart);

float TopLin = Lerp(GetHeightAt((int)xpos,     (int)zpos),
                    GetHeightAt((int)xpos + 1, (int)zpos),     modX);
float BotLin = Lerp(GetHeightAt((int)xpos,     (int)zpos + 1),
                    GetHeightAt((int)xpos + 1, (int)zpos + 1), modX);

return Lerp(TopLin, BotLin, modZ);
Lerp is a simple function that linearly interpolates from a to b as t goes from 0 to 1:
float Lerp (float a, float b, float t)
{
return a + t * (b - a);
}
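Put together, a self-contained version of the bilinear lookup might look like this. The 3x3 heightmap, its row-major layout, and the function names are assumptions for illustration, not the original code:

```cpp
#include <cmath>

static float Lerp(float a, float b, float t) { return a + t * (b - a); }

// Stand-in 3x3 heightmap; replace with your own data and lookup.
static const float g_heights[3][3] = {
    {0.0f, 1.0f, 2.0f},
    {1.0f, 2.0f, 3.0f},
    {2.0f, 3.0f, 4.0f},
};
static float GetHeightAt(int x, int z) { return g_heights[z][x]; }

// Bilinear height lookup: lerp along the top and bottom edges of the
// quad, then lerp between those two results.
float TerrainHeight(float px, float pz, float gridSize)
{
    double ix, iz;
    float modX = (float)modf(px / gridSize, &ix);  // fraction across the quad
    float modZ = (float)modf(pz / gridSize, &iz);
    int x = (int)ix, z = (int)iz;
    float topLin = Lerp(GetHeightAt(x, z),     GetHeightAt(x + 1, z),     modX);
    float botLin = Lerp(GetHeightAt(x, z + 1), GetHeightAt(x + 1, z + 1), modX);
    return Lerp(topLin, botLin, modZ);
}
```

One caveat: this reads vertex (x + 1, z + 1), so the player's position needs clamping to stay at least one quad inside the map edge, or the lookup will run off the end of the array.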