After looking into optimization a bit, I have discovered (seemingly everywhere) that it's universally considered a sin to optimize a game too early.
I really don't understand this: wouldn't it be incredibly difficult to change some of the game's core structures at the end, rather than developing them with performance in mind the first time?
I get that waiting until the game is finished will tell you whether you even need optimizations, but shouldn't you do it anyway? After all, it could widen the variety of devices the game could run on, which would increase the number of potential players.
Could someone explain to me why it's such a bad idea to optimize too early?
Answer
Preamble:
A few objections have been raised in the comments, and I think they largely stem from a misunderstanding of what we mean when we say "premature optimization" - so I wanted to add a little clarification on that.
"Don't optimize prematurely" does not mean "write code you know is bad, because Knuth says you're not allowed to clean it up until the end."
It means "don't sacrifice time & legibility for optimization until you know what parts of your program actually need help being faster." Since a typical program spends most of its time in a few bottlenecks, investing in optimizing "everything" might not get you the same speed boost as focusing that same investment on just the bottlenecked code.
This means, when in doubt, we should:
Prefer code that's simple to write, clear to understand, and easy to modify for starters
Check whether further optimization is needed (usually by profiling the running program, though one comment below notes doing mathematical analysis - the only risk there is you also need to check that your math is right)
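To make that "check by profiling" step concrete, here's a minimal sketch using Python's built-in cProfile. The function names and the workload are made up for illustration - the point is just that the profiler tells you where the frame time actually goes, rather than where you guessed it would:

```python
import cProfile
import io
import pstats

def update_ai(entities):
    # Hypothetical hot path: a naive quadratic proximity check.
    return sum(
        1
        for a in entities
        for b in entities
        if a != b and abs(a - b) < 5
    )

def game_frame():
    # Stand-in for one frame of game logic.
    entities = list(range(200))
    update_ai(entities)

# Profile a handful of frames, then print the top functions by
# cumulative time - these are the candidates worth optimizing.
profiler = cProfile.Profile()
profiler.enable()
for _ in range(10):
    game_frame()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In a real engine you'd use its built-in profiler instead, but the workflow is the same: measure first, then optimize only what the measurement points at.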
A premature optimization is not:
Architectural decisions to structure your code in a way that will scale to your needs - choosing appropriate modules / responsibilities / interfaces / communication systems in a considered way.
Simple efficiencies that don't take extra time or make your code harder to read. Things like using strong typing can be both efficient and make your intent more clear. Caching a reference instead of searching for it repeatedly is another example (as long as your case doesn't demand complex cache-invalidation logic - maybe hold off on writing that until you've profiled the simple way first).
Using the right algorithm for the job. A* is more optimal and more complex than exhaustively searching a pathfinding graph. It's also an industry standard. Repeating the theme, sticking to tried-and-true methods like this can actually make your code easier to understand than if you do something simple but counter to known best practices. If you have experience running into bottlenecks implementing game feature X one way on a previous project, you don't need to hit the same bottleneck again on this project to know it's real - you can and should re-use solutions that have worked for past games.
All those types of optimizations are well-justified and would generally not be labelled "premature" (unless you're going down a rabbit hole implementing cutting-edge pathfinding for your 8x8 chessboard map...)
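As a tiny illustration of the "cache a reference instead of searching for it repeatedly" point above, here's a sketch in Python. `Registry` and `FlickerEffect` are hypothetical stand-ins for an engine's component-lookup API and a gameplay script; the names are mine, not from any real engine:

```python
class Registry:
    """Hypothetical stand-in for an engine's component lookup."""
    def __init__(self):
        self._components = {"renderer": object()}
        self.lookups = 0  # counts how often the expensive search runs

    def find(self, name):
        self.lookups += 1  # each call models an expensive search
        return self._components[name]

class FlickerEffect:
    def __init__(self, registry):
        # Cache the reference once at setup time...
        self.renderer = registry.find("renderer")

    def update(self):
        # ...instead of calling registry.find("renderer") every frame.
        _ = self.renderer

registry = Registry()
effect = FlickerEffect(registry)
for _ in range(1000):
    effect.update()
print(registry.lookups)  # 1 - the search ran once, not once per frame
```

This costs nothing in readability, which is exactly why it doesn't count as premature.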
So now with that cleared up, on to why we might find this policy useful in games specifically:
In gamedev especially, iteration speed is the most precious thing. We'll often implement and re-implement far more ideas than will ultimately ship with the finished game, trying to "find the fun."
If you can prototype a mechanic in a straightforward & maybe a bit naive way and be playtesting it the next day, you're in a much better position than if you spent a week making the most optimal version of it first. Especially if it turns out to suck and you end up throwing out that feature. Doing it the simple way so you can test early can save a ton of wasted work optimizing code you don't keep.
Non-optimized code is also generally easier to modify, and to try different variants on, than code that's finely tuned to do one precise thing optimally - tuned code tends to be brittle, and hard to change without breaking it, introducing bugs, or slowing it way down. So keeping the code simple and easy to change is often worth a little runtime inefficiency for most of development (we're usually developing on machines above the target spec, so we can absorb the overhead and focus on nailing the target experience first), until we've locked down what we need from the feature and can optimize the parts we now know are slow.
Yes, refactoring parts of the project late in development to optimize the slow spots can be hard. But so is refactoring repeatedly throughout development because the optimizations you made last month aren't compatible with the direction the game has evolved since then, or were fixing something that turned out not to be the real bottleneck once you got more of the features & content in.
Games are weird and experimental — it's hard to predict how a game project and its tech needs will evolve and where the performance will be tightest. In practice, we often end up worrying about the wrong things — search through the performance questions on here and you'll see a common theme of devs getting distracted by things that look scary on paper but likely aren't a problem at all.
To take a dramatic example: if your game is GPU-bound (not uncommon) then all that time spent hyper-optimizing and threading the CPU work might yield no tangible benefit at all. All those dev hours could have been spent implementing & polishing gameplay features instead, for a better player experience.
Overall, most of the time you spend working on a game will not be spent on the code that ends up being the bottleneck. Especially when you're working in an existing engine, the super expensive inner loop stuff in the rendering and physics systems is largely out of your hands. At that point, your job in the gameplay scripts is basically to stay out of the engine's way - as long as you don't throw a wrench in there, you'll probably come out pretty OK for a first build.
So, apart from a bit of code hygiene and budgeting (e.g. don't repeatedly search for/construct stuff if you can easily reuse it, keep your pathfinding/physics queries or GPU readbacks modest, etc.), making a habit of not over-optimizing before we know where the real problems are turns out to be good for productivity - saving us from wasting time optimizing the wrong things, and keeping our code simpler and easier to tweak overall.
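For instance, "keep your pathfinding queries modest" can be as simple as re-pathing every few frames instead of every frame. Here's a rough sketch of that budgeting idea - the interval, the `find_path` hook, and the `Chaser` class are all made-up placeholders, not any particular engine's API:

```python
class Chaser:
    """Re-runs pathfinding only every few frames (a common budgeting trick)."""
    REPATH_INTERVAL = 10  # frames between expensive path queries (arbitrary)

    def __init__(self, find_path):
        self.find_path = find_path  # injected pathfinding function
        self.path = []
        # Start "due", so the first update issues a query immediately.
        self.frames_since_repath = self.REPATH_INTERVAL

    def update(self, start, goal):
        self.frames_since_repath += 1
        if self.frames_since_repath >= self.REPATH_INTERVAL:
            self.path = self.find_path(start, goal)
            self.frames_since_repath = 0

# Fake pathfinder that just counts how often it gets called.
calls = 0
def fake_find_path(start, goal):
    global calls
    calls += 1
    return [start, goal]

chaser = Chaser(fake_find_path)
for _ in range(100):  # simulate 100 frames
    chaser.update((0, 0), (5, 5))
print(calls)  # 10 expensive queries instead of 100
```

Nothing here is clever or hard to read - it's just refusing to pay an expensive cost more often than the gameplay actually needs.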