PC GPU driver updates often cite improvements made to performance of specific, recently released games. Why is this game-specific updating needed? How do the game-specific changes interact with the game code?
Answer
As someone with a few years of driver development experience, I see this as two separate issues.
A graphics driver is a very complicated beast. Implementing everything in an optimal way would simply be impossible; it's a big enough hurdle just to make a driver that actually follows the specs - and the specs keep getting more complex. So you develop your driver based on the spec and a handful of test applications (since, in many cases, there's no real software yet).
Along comes a real game (or benchmark, or some other use case for the driver, like video decoding) that exposes a bottleneck in the driver. In you go, figuring out how to smooth things out and make that use case faster. You can easily report that game XYZ is 27.3% faster, but in reality every application that hits the same use case (say, dynamic texture updates) gets faster.
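To make that concrete, here is a minimal sketch of what such a use-case fix might look like in a simplified texture-update path. Everything in it - the types, the helper functions, the update-count threshold - is invented for illustration and not taken from any real driver; the point is only that the fast path keys off a usage pattern, not off a particular game.

```cpp
// Hypothetical sketch: a texture-update path that notices textures rewritten
// every frame and routes them to a persistent staging buffer instead of a
// fresh allocation per call. All names and thresholds are invented.
#include <cstdint>
#include <cstdlib>
#include <cstring>

struct Texture {
    std::size_t   sizeBytes   = 0;
    std::uint32_t updateCount = 0;       // how often the app has rewritten this texture
    void*         gpuMemory   = nullptr; // stand-in for the real GPU allocation
    void*         stagingRing = nullptr; // persistent staging memory, created lazily
};

// Stand-ins for the driver's real allocation and DMA entry points.
static void* allocStaging(std::size_t bytes) { return std::malloc(bytes); }
static void  freeStaging(void* p)            { std::free(p); }
static void  enqueueDmaCopy(void* dst, const void* src, std::size_t bytes)
{
    std::memcpy(dst, src, bytes); // a real driver would queue a GPU copy here
}

void updateTexture(Texture& tex, const void* cpuData)
{
    ++tex.updateCount;

    // Fast path added after profiling a real game: a texture that keeps being
    // rewritten gets a persistent staging buffer. Every application that
    // updates textures dynamically benefits, not just the game that was profiled.
    if (tex.updateCount > 8) {
        if (!tex.stagingRing)
            tex.stagingRing = allocStaging(tex.sizeBytes);
        std::memcpy(tex.stagingRing, cpuData, tex.sizeBytes);
        enqueueDmaCopy(tex.gpuMemory, tex.stagingRing, tex.sizeBytes);
        return;
    }

    // Original generic path: allocate and free a temporary staging buffer per call.
    void* staging = allocStaging(tex.sizeBytes);
    std::memcpy(staging, cpuData, tex.sizeBytes);
    enqueueDmaCopy(tex.gpuMemory, staging, tex.sizeBytes);
    freeStaging(staging);
}
```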
Then there's the ugly side: real per-application optimizations, where the driver detects which application is being run (or which shader is being compiled) and does something non-generic. There have been plenty of publicized cases of this, where, for example, renaming the 3DMark executable suddenly changes the results.
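In rough terms, the mechanism can look something like the sketch below; the profile table, the flags, and the executable names are all made up for illustration and don't describe any particular vendor's driver.

```cpp
// Hypothetical sketch of per-application detection: the driver keys behaviour
// off the running executable's name via an invented profile table.
#include <string>
#include <unordered_map>

struct AppProfile {
    bool replaceKnownShaders   = false; // swap in hand-tuned shader variants
    bool relaxTextureFiltering = false; // trade image quality for benchmark score
};

static const std::unordered_map<std::string, AppProfile> kProfiles = {
    { "some_benchmark.exe", { true, true  } },
    { "some_game.exe",      { true, false } },
};

AppProfile lookupProfile(const std::string& exeName)
{
    // An unknown name falls back to the fully generic profile, which is why
    // renaming a benchmark executable can suddenly change its results.
    auto it = kProfiles.find(exeName);
    return it != kProfiles.end() ? it->second : AppProfile{};
}
```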
I feel these kinds of optimizations are a waste of everybody's time - you lie to your customers in the case of benchmarks, and you may change the way a shader behaves from what the developer actually wants. I recall a case where a shader was changed from a texture lookup to an in-shader calculation (which only worked for that manufacturer's hardware); the result was close, but not exactly the same, and the developer objected that this wasn't a legitimate optimization.
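For illustration only, here is a sketch of how that kind of shader substitution could work, assuming the driver fingerprints incoming shader source and swaps a recognized shader for a hand-written variant. The hash value, the shader strings, and the function names are all hypothetical; only the mechanism matches the anecdote above.

```cpp
// Hypothetical sketch of shader substitution by fingerprint. Nothing here is
// taken from a real driver; the values and strings are invented.
#include <cstdint>
#include <string>
#include <string_view>

// Toy FNV-1a hash standing in for whatever fingerprinting a driver really uses.
static std::uint64_t fingerprint(std::string_view src)
{
    std::uint64_t h = 1469598103934665603ull;
    for (unsigned char c : src) { h ^= c; h *= 1099511628211ull; }
    return h;
}

// Returns the source that actually gets handed to the compiler back end.
std::string selectShaderSource(std::string_view appSource)
{
    // Made-up fingerprint of the game's original shader, which sampled a
    // falloff value from a lookup texture.
    constexpr std::uint64_t kKnownGameShader = 0x1234abcd5678ef90ull;

    if (fingerprint(appSource) == kKnownGameShader) {
        // Substituted variant: replace the texture lookup with ALU math that is
        // cheap on this vendor's hardware but only approximately equal to the
        // original lookup - exactly the discrepancy the developer objected to.
        return "color = exp(-dist * dist * falloffScale);";
    }
    return std::string(appSource); // generic path: compile what the app provided
}
```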