I'm working in iOS with OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific z depth, the pixels drawn are 1:1 with my source textures, so 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it also means I have to factor in things like the size of the quad and the size of the texture.
For example, if my sprite is a quad 32x32 pixels, the quad is 3.2 units wide and tall, and the texcoords are 32 divided by the size of the texture, wide and tall.
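In code form, that sizing rule looks something like this (a sketch; UNITS_PER_PIXEL, SpriteQuad, and makeSpriteQuad are names made up for illustration, not part of my engine):

#define UNITS_PER_PIXEL 0.1f   /* 32 px sprite -> 3.2 GL units, per the example */

typedef struct { float quadSize, texExtent; } SpriteQuad;

SpriteQuad makeSpriteQuad(float spritePixels, float texturePixels)
{
    SpriteQuad q;
    q.quadSize  = spritePixels * UNITS_PER_PIXEL;  /* quad width/height in units */
    q.texExtent = spritePixels / texturePixels;    /* texcoord span: 32 / texture size */
    return q;
}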
Then the frustum is:
matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
              -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
              40, 1000, mProjection);
Where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel-for-pixel the same as in Photoshop.
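A quick way to sanity-check that 800 (a sketch; visibleHalfWidth is a helper made up for illustration): with matrixFrustum the left/right values apply at the near plane, so the visible half-width grows linearly with depth.

/* Half of the visible width, in GL units, at eye-space distance z. */
float visibleHalfWidth(float backingWidth, float frustumScale, float near, float z)
{
    return (backingWidth / frustumScale) * (z / near);
}

With backingWidth = 640, frustumScale = 800, and near = 40, visibleHalfWidth(640, 800, 40, 800) comes out to 16 units, which is exactly half of the 32-unit screen width described in the update below, hence the 1:1 mapping at that depth.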
For 3D games I still sometimes want to be able to do this, but depending on the scene I sometimes need the FOV to be different values.
I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV.
For this my mProjection is set using:
matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);
With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on?
Here's my matrixPerspective code:
void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
{
    //float size = near * tanf(angle / 360.0 * M_PI);
    float size = near * tanf(degreesToRadians(angle) / 2.0);
    // bottom/top are divided by aspect (width/height), so 'angle' acts as the horizontal FOV.
    float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

    // Unused values in perspective formula.
    m[1] = m[2] = m[3] = m[4] = 0;
    m[6] = m[7] = m[12] = m[13] = m[15] = 0;

    // Perspective formula (column-major, glFrustum-style).
    m[0] = 2 * near / (right - left);
    m[5] = 2 * near / (top - bottom);
    m[8] = (right + left) / (right - left);
    m[9] = (top + bottom) / (top - bottom);
    m[10] = -(far + near) / (far - near);
    m[11] = -1;
    m[14] = -(2 * far * near) / (far - near);
}
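One property of this function worth noting for later (a sketch of the algebra; projectionM0 is a name made up for illustration): since right - left = 2 * near * tan(angle/2), the near plane cancels out of m[0].

/* Closed form of mProjection[0]: the cotangent of half the FOV,
   independent of the near plane. */
float projectionM0(float angleDegrees)
{
    return 1.0f / tanf(degreesToRadians(angleDegrees) / 2.0f);
}

The unityDepth calculation in the update below appears to lean on exactly this value.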
And my mView is set using:
lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);
* UPDATE *
I'm going to leave this here in case anyone has a different solution, can explain how they do it, or why this works.
This is what I figured out. In my system, pixels map to units at a 10th scale on non-retina displays and a 20th scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width. Then I divide that by 2 to get the left and right unit positions: something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet.
It's an arbitrary thing I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the other. So the width of the screen in units is 32.0, which means the leftmost pixel is at -16.0 and the rightmost is at +16.0.
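As a tiny sketch of that convention (names made up for illustration):

#define UNITS_PER_NONRETINA_PX 0.1f   /* = 2 retina pixels */

/* Screen width in GL units: 640 retina px / 20 = 32 units, so x spans -16..+16. */
float screenWidthUnits(float retinaPixelWidth)
{
    return retinaPixelWidth / 20.0f;
}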
After messing around a bit I figured out that if I take the [0] value of the modelViewProjection matrix (with an identity model matrix) and multiply it by 16, I get the depth required for 1:1 pixels. I don't know why. I don't know if the 16 is related to the screen size or just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen at 1:1 pixels. So now I'm just calculating unityDepth that way.
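For what it's worth, here's a derivation that seems to account for both the [0] and the 16 (a sketch, assuming mView carries no scale so that mViewProjection[0] equals mProjection[0]):

#include <math.h>

/* mProjection[0] = 1/tan(FOV/2), per matrixPerspective above. A point at
   eye-space depth d and x = 16 (half the 32-unit screen) projects to
   NDC x = mProjection[0] * 16 / d, so it lands exactly on the screen edge
   (NDC x = 1) when d = mProjection[0] * 16. On that reading, the 16 is half
   the screen width in GL units rather than a lucky guess. */
float unityDepthForFOV(float fovDegrees, float halfScreenUnits)
{
    return halfScreenUnits / tanf(fovDegrees * (float)M_PI / 360.0f);
}

This matches the measured values from the question: unityDepthForFOV(45.0f, 16.0f) is about 38.63 and unityDepthForFOV(30.0f, 16.0f) is about 59.71.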
If someone gives me a better answer I'll checkmark it.
I'm answering this myself since no one else has touched it in a month.
I'm not sure why it works; maybe I'll get a chance to go through all the math someday and figure it out. But this is the code that I worked out, and it works across all FOVs. I tested it by sliding through all FOVs and placing my sprite at unityDepth, which changes for each FOV. The sprite stays pixel-for-pixel the whole time with no jello or distortion.
The way I do it in this app: 'centerField' is the z depth of whatever I'm focusing on. Then instead of having a huge frustum that goes from 0.01 to whatever, I only cover a range around that. The reason is to add resolution steps to my depth buffer, but that's a different discussion. You could replace the near and far (centerField+350) with whatever values you normally use; it doesn't affect this z-depth unity issue.
GLfloat centerField = cameraPos.z - camLookAt.z;  // z depth of the focus point
GLfloat near = centerField - 150;
if (near < 1.0) {
    near = 1.0;
}
matrixPerspective(cameraFOV, near, centerField + 350,
                  (float)backingWidth / (float)backingHeight, mProjection);
matrixCopy(mProjection, pProjection);
matrixMultiply(mProjection, mView, mViewProjection);
unityDepth = mViewProjection[0] * 16;  // depth where sprites render 1:1 with pixels
focusZ = -unityDepth;                  // camera looks down -z
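And here's a standalone check that mirrors the FOV-sweep test described above (a sketch, assuming mViewProjection[0] reduces to 1/tan(FOV/2) as noted earlier):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* unityDepth * tan(FOV/2) should stay pinned at 16 -- half the screen
       width in GL units -- across the whole sweep, which is why the sprite
       holds 1:1 pixels as the FOV changes. */
    for (float fov = 10.0f; fov <= 120.0f; fov += 5.0f) {
        float m0 = 1.0f / tanf(fov * (float)M_PI / 360.0f);
        float unityDepth = m0 * 16.0f;
        printf("FOV %6.1f  unityDepth %8.2f  check %6.2f\n",
               fov, unityDepth,
               unityDepth * tanf(fov * (float)M_PI / 360.0f));  /* always 16 */
    }
    return 0;
}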