I heard that in an OpenGL game, to let the player move, we don't move the camera but instead move the whole world around it.
For example here is an extract of this tutorial: OpenGL View matrix
In real life you're used to moving the camera to alter the view of a certain scene, in OpenGL it's the other way around. The camera in OpenGL cannot move and is defined to be located at (0,0,0) facing the negative Z direction. That means that instead of moving and rotating the camera, the world is moved and rotated around the camera to construct the appropriate view.
Why do we do that?
Answer
Why?
Because a camera represents a projection view, which in OpenGL is fixed at the origin.
In the case of a 3D camera (a virtual camera), however, the camera appears to move instead of the world. I give a detailed explanation of this later in the answer.
Understanding Mathematically
A projection view can move around space and change its orientation. The first thing to notice is that the desired projection on the screen does not change with the view direction.
For this reason, we transform everything else to get the desired projection.
Understanding From http://opengl.org
To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation. Where OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye-space coordinate (0, 0, 0).
Understanding From http://open.gl
I also want to share the following lines from the View matrix portion of http://open.gl/transformations:
To simulate a camera transformation, you actually have to transform the world with the inverse of that transformation. Example: if you want to move the camera up, you have to move the world down instead.
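This "inverse transform" idea can be sketched in a few lines. The following is a minimal illustration (the camera position and helper name are hypothetical, and the camera transform is reduced to a pure translation for simplicity):

```python
# Hypothetical camera position in world space: moved up 2 units along +y.
camera_pos = (0.0, 2.0, 0.0)

def world_to_view(point, cam):
    """Apply the inverse of the camera's translation:
    instead of moving the camera up, move every point down."""
    return tuple(p - c for p, c in zip(point, cam))

# A point at the world origin ends up 2 units BELOW the camera in view space.
print(world_to_view((0.0, 0.0, 0.0), camera_pos))  # (0.0, -2.0, 0.0)
```

The camera itself never moved; subtracting its position from every point produces exactly the view you would get if it had.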
Understanding by perspective
In the real world, we see things in a way that is called "perspective".
Perspective refers to the concept that objects that are farther away appear to be smaller than those that are closer to you. Perspective also means that if you are sitting in the middle of a straight road, you actually see the borders of the road as two converging lines.
That’s perspective. Perspective is critical in 3D projects. Without perspective, the 3D world doesn't look real.
While this may seem natural and obvious, it's important to consider that when you create a 3D rendering on a computer you are attempting to simulate a 3D world on the computer screen, which is a 2D surface.
Imagine that behind the computer screen there is a real 3D scene of sorts, and you are watching it through the "glass" of your computer screen. Using perspective, your goal is to create code that renders what gets "projected" on this "glass" of your screen as if there were a real 3D world behind the screen. The only caveat is that this 3D world is not real; it's just a mathematical simulation of a 3D world.
So, when using 3D rendering to simulate a scene in 3D and then projecting the 3D scene onto the 2D surface of your screen, the process is called perspective projection.
Begin by envisioning intuitively what you want to achieve. If an object is closer to the viewer, the object must appear to be bigger. If the object is farther away, it must appear to be smaller. Also, if an object is traveling away from the viewer, in a straight line, you want it to converge towards the center of the screen, as it moves farther off into the distance.
Translating perspective into math
As you view the illustration in the following figure, imagine that an object is positioned in your 3D scene. In the 3D world, the position of the object can be described as (xW, yW, zW), referring to a 3D coordinate system with its origin at the eye point. That's where the object is actually positioned in the 3D scene beyond the screen.
As the viewer watches this object on the screen, the 3D object is "projected" to a 2D position described as xP and yP, which references the 2D coordinate system of the screen (projection plane).
To put these values into a mathematical formula, I'll use a 3D coordinate system for world coordinates, where the x axis points to the right, y points up, and positive z points inside the screen. The 3D origin refers to the location of the viewer's eye. So, the glass of the screen is on a plane orthogonal (at right angles) to the z-axis, at some z that I’ll call zProj.
You can calculate the projected positions xP and yP by dividing the world positions xW and yW by zW, like this:
xP = K1 * xW / zW
yP = K2 * yW / zW
K1 and K2 are constants that are derived from geometrical factors such as the aspect ratio of your projection plane (your viewport) and the "field of view" of your eye, which takes into account the degree of wide-angle vision.
You can see how this transform simulates perspective. Points near the sides of the screen get pushed toward the center as the distance from the eye (zW) increases. At the same time, points closer to the center (0,0) are much less affected by the distance from the eye and remain close to the center.
This division by z is the famous "perspective divide."
Now, consider that an object in the 3D scene is defined as a series of vertices. So, by applying this kind of transform to all vertices of geometry, you effectively ensure that the object will shrink when it's farther away from the eye point.
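The formulas above can be sketched directly in code. This is a minimal illustration of the perspective divide; K1 and K2 are assumed to be 1 here, whereas real values would come from the field of view and the aspect ratio of the viewport:

```python
# Assumed constants; in a real renderer these are derived from the
# field of view and the viewport's aspect ratio.
K1 = K2 = 1.0

def project(xw, yw, zw):
    """Project a world-space point onto the 2D screen plane
    using the perspective divide: xP = K1*xW/zW, yP = K2*yW/zW."""
    return (K1 * xw / zw, K2 * yw / zw)

# The same off-center point, moved farther from the eye,
# converges toward the center (0, 0) of the screen.
near = project(2.0, 1.0, 5.0)
far = project(2.0, 1.0, 20.0)
print(near, far)
```

Running this shows the near point projecting well away from the center and the far point projecting much closer to it, which is exactly the converging-road effect described above.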
Other Important Cases
- In case of 3D Camera (Virtual Camera), camera moves instead of the world.
To get a better understanding of 3D cameras, imagine you are shooting a movie. You have to set up a scene that you want to shoot and you need a camera. To get the footage, you'll roam through the scene with your camera, shooting the objects in the scene from different angles and points of view.
The same filming process occurs with a 3D camera. You need a "virtual" camera, which can roam around the "virtual" scene that you have created.
Two popular shooting styles involve watching the world through a character's eyes (also known as a first person camera) or pointing the camera at a character and keeping them in view (known as a third person camera).
This is the basic premise of a 3D camera: a virtual camera that you can use to roam around a 3D scene, and render the footage from a specific point of view.
Understanding world space and view space
To code this kind of behavior, you'll render the contents of the 3D world from the camera's point of view, not just from the world coordinate system point of view, or from some other fixed point of view.
Generally speaking, a 3D scene contains a set of 3D models. The models are defined as a set of vertices and triangles, referenced to their own coordinate system. The space in which the models are defined is called the model (or local) space.
After placing the model objects into a 3D scene, you'll transform these models' vertices using a "world transform" matrix. Each object has its own world matrix that defines where the object is in the world and how it is oriented.
This new reference system is called "world space" (or global space). A simple way to manage it is by associating a world transform matrix to each object.
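The per-object world transform can be sketched as follows. This is a minimal illustration with hypothetical names, and the world transform is reduced to a translation for simplicity (a full implementation would use a 4x4 matrix carrying rotation and scale as well):

```python
class SceneObject:
    """Each object keeps its own vertices in model (local) space
    plus its own world transform (here just a position)."""
    def __init__(self, local_vertices, world_position):
        self.local_vertices = local_vertices
        self.world_position = world_position

    def world_vertices(self):
        """Apply the world transform: model space -> world space."""
        ox, oy, oz = self.world_position
        return [(x + ox, y + oy, z + oz) for x, y, z in self.local_vertices]

# The same unit triangle, placed at two different spots in the world.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
a = SceneObject(tri, (10.0, 0.0, 0.0))
b = SceneObject(tri, (0.0, 0.0, -5.0))
print(a.world_vertices()[0])  # (10.0, 0.0, 0.0)
```

Note that both objects share the same model-space geometry; only their world transforms differ, which is the whole point of separating model space from world space.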
In order to implement the behavior of a 3D camera, you'll need to perform additional steps: you'll reference the world not to the world origin, but to the reference system of the 3D camera itself.
A good strategy involves treating the camera as an actual 3D object in the 3D world. Like any other 3D object, the camera gets a "world transform" matrix that places it at the desired position and orientation. This camera world transform matrix takes the camera from its original, forward-looking orientation (along the z-axis) to its actual world position (xc, yc, zc) and world rotation. The view matrix used for rendering is then the inverse of this camera world transform.
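The step above can be sketched in code. This is an illustrative sketch, not production renderer code: for a rigid transform (rotation R plus translation t), the inverse needed for the view transform is R-transposed combined with -R-transposed times t, so no general matrix inversion is required. All names and values here are hypothetical:

```python
import math

def rotation_y(angle):
    """3x3 rotation about the y axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def world_to_view(cam_rotation, cam_position, point):
    """Invert the camera's world transform (rotate by R^T,
    translate by -R^T * position) and apply it to a point."""
    r_t = transpose(cam_rotation)
    p = mat_vec(r_t, point)
    t = mat_vec(r_t, cam_position)
    return [p[i] - t[i] for i in range(3)]

# Camera placed at (xc, yc, zc) = (0, 0, 5), rotated 90 degrees about y.
cam_rot = rotation_y(math.pi / 2)
cam_pos = [0.0, 0.0, 5.0]

# The camera's own position maps to the view-space origin, as expected.
print(world_to_view(cam_rot, cam_pos, cam_pos))
```

This is the same "move the world with the inverse of the camera transform" idea from earlier, now with rotation included.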
The following figure shows the relationship between the world (x, y, z) coordinate system and the view (camera) (x', y', z') coordinate system.