Game optimization series
Answering all of your questions about when to optimize, what to optimize, how to optimize and which tools to use.
- Game optimization: Introduction
- When to optimize code?
When to optimize is a topic I wanted to cover pretty late in the series, but the discussion in the comments on the previous article convinced me to do it first. I've also had a chance to see some new devs who were horribly worried about the speed of their game before they even had a playable alpha or proof of concept.
What they missed is that optimal code is not a goal. Your goal is to make a game that works towards your goals; responsiveness is only one of those goals, and optimization is a tool that helps you achieve it. Just don't forget that excessive optimization won't lead to better sales or reviews, and it will make the codebase harder to support: adding features, fixing bugs, and not introducing new ones in the process.
Just to make sure we're on the same page and I don't have to constantly clarify things over the course of this article let's get some assumptions straight:
- Clean code is easier to maintain, which includes expanding it with new features and fixing bugs. This reduces the amount of time and effort required for development in the long run, even if it might increase it initially.
- Well-written code is easier to optimize on any level.
- It is possible to write code that is both clean and performant. It will likely be slower than fully optimized code, but the difference will very rarely be significant.
- Heavily optimized code is not clean.
Some of the argumentation in this article breaks down if you remove one or more of those assumptions. I think, however, that anyone who has spent enough time programming will agree with them (barring some nitpicks and edge cases).
When is the time to optimize
In my article about priorities in game development, I talked about how any development project is limited by something. Very often it is limited by the amount of time or money you can spend. As established in the introduction to optimization, it's very easy to mindlessly work on code that was not slow in the first place. I'll discuss the specifics of locating the slow code in a future article, so let's concentrate on when.
Something is visibly slow
The most obvious situation is when you can see the slowdown with your own eyes. It is almost always a good time to investigate and fix the problem. Not only does this prevent bad code from propagating and other code from depending on it, it also makes further testing easier and faster. And slow code is unlikely to fix itself: if it's slow now, imagine how slow it will be once you add more things to the project. So when something is slow, investigate immediately.
But, like with everything, there are exceptions even to a rule as obvious as that. Slowdowns caused by extremely unlikely scenarios, by something that can only occur in testing, or by something that will never affect 99% of your player base - these are usually a waste of effort to fix.
If the slowdown is caused by prototypical "thrown-together-just-to-test-something" code, chances are you are already planning to rewrite it into something better or remove it altogether. There is no point in fixing code that you know won't be there for long. But beware of the situation where your temporary code becomes permanent, because software development has a long history of that.
By the way, don't test speed in debug builds. If your game is slow only in debug mode, it's possibly not a problem at all. Running DROD in debug mode slows the pathfinding down by somewhere between one and two orders of magnitude. It is still fast enough to develop with but slow enough to be an annoyance, especially in more complex rooms. And still, that's over 10 times slower than the release build. If you do your timing tests or profiling, or even just consider performance in a debug build, you're more than likely wasting your time.
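One reason debug builds are so much slower is that they are full of sanity checks a release build skips entirely. A minimal sketch of the same idea at the application level (the flag and function names are made up; in C++ this would typically be an `NDEBUG`-style compile-time switch rather than a runtime boolean):

```python
# Hypothetical debug flag: expensive validation runs only while developing,
# so release-mode timings measure the algorithm, not the checks.
DEBUG_CHECKS = False

def manhattan_distance(start, goal):
    """Toy stand-in for a pathfinding call."""
    if DEBUG_CHECKS:
        # Slow sanity checks a release build must never pay for.
        assert len(start) == 2 and len(goal) == 2
        assert all(isinstance(c, int) for c in (*start, *goal))
    return abs(start[0] - goal[0]) + abs(start[1] - goal[1])
```

Time this kind of function only with the checks off; with them on, you're benchmarking the checks, not the code.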
I predict something will be slow
There are two aspects to this which I'll cover separately:
- what are the chances you are actually correct that something will be slow;
- slow is not an absolute value and is not necessarily a bottleneck.
Predicting how fast something will be is insanely difficult. In some cases you can use tools like Big O notation to figure out how an algorithm will behave depending on the data you give it, but that tells you nothing about the data you'll actually use or how much time a single iteration really takes. A single pass of a really complex shader is O(1). A function which loads textures from disk is O(1). Heck, a function which is O(2^N) can run in a fraction of the time of another which is O(N). By itself, Big O notation is worth nothing; it needs context.
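To make that last claim concrete, here is a toy sketch where an O(2^N) algorithm with a tiny per-step cost beats an O(N) one whose steps are expensive. The per-step costs are invented purely for illustration:

```python
# Invented per-step costs in microseconds: the exponential algorithm does many
# cheap steps; the linear one does few steps, but each is expensive
# (think disk reads or network calls).
CHEAP_STEP = 0.01
HEAVY_STEP = 500.0

def cost_exponential(n):
    """Total cost of an O(2^N) algorithm with a tiny constant factor."""
    return (2 ** n) * CHEAP_STEP

def cost_linear(n):
    """Total cost of an O(N) algorithm with a huge constant factor."""
    return n * HEAVY_STEP

# For small N the "worse" complexity is hundreds of times faster:
# cost_exponential(10) is about 10 µs, cost_linear(10) is 5000 µs.
# But the asymptote always wins eventually:
# cost_exponential(30) is about 10.7 s, cost_linear(30) is 0.015 s.
```

Which one is "slow" depends entirely on the N you actually feed it - exactly the context Big O alone doesn't give you.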
Which brings us to the second point. You don't care about slow code in the abstract; you care about code that is, or will be, an actual bottleneck. What does it matter that your collision system is slow if your rendering is 10 times slower, some other system is 5 times slower than rendering... and the game still easily hits 60 FPS on your target PC?
It doesn't matter. You only care about the things that take the most time. Everything else is either unimportant or not yet important: it might be that after fixing the biggest offenders, something that was in 5th or 10th place starts becoming a bottleneck. But that's when you tackle it.
But! This is an article about when to optimize, so when do you optimize code that you predict will be slow? When you prove that it's going to affect performance. Ideally, write a simulation of the worst-case scenario and time it. If it takes more time than is acceptable, that's a very strong argument for optimizing it. An even better idea than immediately jumping in to change it, though, can be to make sure it's a separate module you can easily rewrite and plug back into the system. That way you can keep hacking at the rest of the code, and if it turns out you were right, fix it quickly.
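A minimal sketch of such a worst-case simulation, assuming a 60 FPS frame budget and a hypothetical pairwise collision check (the entity count and overlap threshold are invented for the example):

```python
import time

FRAME_BUDGET_MS = 16.6  # one frame at 60 FPS

def naive_collisions(positions):
    # O(N^2) pairwise overlap test on 1D positions - the kind of code
    # you suspect might become a bottleneck.
    hits = 0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(positions[i] - positions[j]) < 1.0:
                hits += 1
    return hits

# Worst case you realistically expect: e.g. 500 entities alive at once.
worst_case = [float(i % 50) for i in range(500)]

start = time.perf_counter()
naive_collisions(worst_case)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"worst case: {elapsed_ms:.2f} ms of a {FRAME_BUDGET_MS} ms budget")
```

If the worst case eats a large slice of the frame budget, you have your proof; if it's a rounding error, move on.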
When you're writing libraries for public consumption, things are slightly different. Because slowness is so context-specific, a good approach here is to write fast code from the get-go, since you don't know what machines the consumers of your library will target. In this case it's fine to spend more time than otherwise necessary on making things run smoothly, but if doing so requires sacrificing code clarity or making the public API worse, it's usually not a sacrifice you want to make.
Targeting low-end platforms
The only reason I can say things like the above and be right is that we're blessed with ridiculously fast machines where a lot of things just don't matter, and even if you optimize them you're more likely to waste your own time than save any for the people playing your game. But it's different when targeting a specific platform, for example making a game for an old computer or console. I have no actual experience with that, so I'll tell you what I've heard:
- If you can write in a higher-level language, like C, write in it.
- Write optimally, taking into consideration your memory and CPU speed limitations, but don't write code that you can't follow. There are many ways to make architecture faster and still quite usable.
- When something is slow, rewrite that part in assembly.
If you have better resources I'll be happy to include them here.
Optimizing just in case
There is no good reason to optimize code unless it fits one of the reasons above. If you want to waste time tweaking things that won't ever matter, or want to spend weeks figuring out whether pre-increment is faster than post-increment, be my guest.
But what if, in the end, your game still doesn't perform as well as you hoped? That's because you wasted time optimizing the wrong places.
Words for inexperienced people
Experienced programmers with years spent in the field still have trouble knowing precisely which piece of code will be a bottleneck in the future. If you're just starting your adventure, it can feel impossible and overwhelming (or, the worse alternative, you experience the full Dunning-Kruger effect and think you know it all). It's okay not to be confident. Accept that your code will be far from optimal and that you won't know in advance what will be a problem.
Instead, concentrate on writing code to the best of your abilities. In the long run this is a much more valuable skill, one that will also teach you how to write fast code from the get-go and help you better identify bottlenecks ahead of time.
All in all, it's your choice when and what to optimize. I advocate "as late as possible" as the most reasonable default; others might go for "as early as possible", which might work better for you. The thing is: under the assumptions I established at the beginning of this article, those two approaches turn out to be almost identical and say the same thing - don't optimize things that don't matter.
- Code optimization is a tool, not a goal, and since optimizations often reduce code clarity, you want to do it as late as reasonable to keep the rest of development easy.
- It's fine to optimize if something is already slow.
- It's fine to optimize if you can prove beyond reasonable doubt that a piece of code can or will be a bottleneck.
- It's fine to optimize if you are writing a library for others to consume, but don't overdo it and, as in the previous point, concentrate on things that are provably slow.
- It's fine to optimize if you're writing for really limited hardware.
- It's not fine to optimize for the fun of it, at least not if your goal is to have a manageable and finished product.
Agree? Disagree? Want me to elaborate on something or defend a position? Just leave a comment!
- Image taken from Teepublic.com. It's a pretty cool design, don't you think?