I'm currently beginning to learn OpenGL at school, and the other day I started making a simple game (on my own, not for school). I'm using freeglut and building it in C, so for my game loop I had simply been passing a function of my own to glutIdleFunc to update all the drawing and physics in one pass. That was fine for simple animations where I didn't care much about the frame rate, but since the game is mostly physics based, I really want to (need to) pin down how fast it updates.
So my first attempt was to have the function I pass to glutIdleFunc (myIdle()) keep track of how much time has passed since the previous call to it, and update the physics (and currently the graphics) every so many milliseconds. I used timeGetTime() to do this. That got me thinking: is the idle function really a good way to drive the game loop?
My question is, what is a better way to implement the game loop in OpenGL? Should I avoid using the idle function?
Answer
The simple answer is no, you do not want to use the glutIdleFunc callback in a game that runs any sort of simulation. The problem is that this callback separates animation and draw code from window event handling, but not asynchronously: receiving and handling window events stalls the draw code (or whatever you put in this callback). That is perfectly fine for an interactive application (interact, then respond), but not for a game, where the physics and game state must progress independently of interaction and render time.
You want to completely decouple input handling, game state, and draw code. There is an easy and clean solution to this that does not involve the graphics library directly (i.e. it is portable and easy to visualize): have the game loop produce time and have the simulation consume that time in fixed-size chunks. The key, however, is to feed the amount of time your simulation actually consumed back into your animation.
The best explanation and tutorial I have found on this is Glenn Fiedler's Fix Your Timestep.
That tutorial gives the full treatment; if you do not have an actual physics simulation you can skip the true integration, but the basic loop still boils down to this (in verbose pseudo-code):
// The amount of time we want to simulate each step, in milliseconds
// (written as an implicit frame rate)
timeDelta = 1000 / 30
timeAccumulator = 0

while ( game should run )
{
    timeSimulatedThisIteration = 0
    startTime = currentTime()

    while ( timeAccumulator >= timeDelta )
    {
        stepGameState( timeDelta )
        timeAccumulator -= timeDelta
        timeSimulatedThisIteration += timeDelta
    }

    stepAnimation( timeSimulatedThisIteration )

    renderFrame()    // OpenGL frame drawing code goes here
    handleUserInput()

    timeAccumulator += currentTime() - startTime
}
By doing it this way, stalls in your render code, input handling, or operating system do not cause your game state to fall behind. This method is also portable and graphics library independent.
GLUT is a fine library, but it is strictly event-driven: you register callbacks and hand control to glutMainLoop(), which never returns. There are ways around this; freeglut in particular (which you are already using) provides glutMainLoopEvent(), which processes pending events once and then returns, so you can call it from a loop you own, and you can also fake an outer loop with timers. Still, another library is probably an easier way to go. There are many alternatives; here are a few with good documentation and quick tutorials:
- GLFW which gives you the ability to get input events inline (in your own main loop).
- SDL, however its emphasis is not specifically OpenGL.
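For example, a minimal fixed-timestep loop with GLFW 3 might look like the sketch below. Here stepGameState and renderFrame stand in for your own code, and error handling is trimmed for brevity; this is an illustration of the loop shape, not a complete program:

```c
#include <GLFW/glfw3.h>

extern void stepGameState(double dt);   /* your physics step */
extern void renderFrame(void);          /* your OpenGL drawing */

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *window = glfwCreateWindow(640, 480, "Game", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    const double timeDelta = 1.0 / 30.0;   /* seconds per physics step */
    double timeAccumulator = 0.0;
    double previous = glfwGetTime();       /* seconds since glfwInit() */

    while (!glfwWindowShouldClose(window)) {
        double now = glfwGetTime();
        timeAccumulator += now - previous;
        previous = now;

        while (timeAccumulator >= timeDelta) {
            stepGameState(timeDelta);
            timeAccumulator -= timeDelta;
        }

        renderFrame();
        glfwSwapBuffers(window);
        glfwPollEvents();                  /* input handled inline here */
    }

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```

Because you own the while loop, nothing about event dispatch is hidden from you: glfwPollEvents() runs exactly once per frame, right where you put it.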