Dimensions of Performance

Performance is sexy.

I’ve yet to hear a marketing team tout a product’s low performance numbers. We want more of it. Faster and more powerful is the mantra, often escorted by familiar company: agile, versatile, responsive, and scalable. Technology marches forward. Either you learned to run yesterday, or you are already behind the competition (you are).

At least, that’s what the folks in marketing would have you believe.

Performance is also a touchy subject for so many other reasons. When the numbers need to go up, sometimes it’s at the expense of the user experience. For example, consider how improvements in smartphone processors created new ways to rapidly drain a limited amount of battery capacity. When you’re out of power, that phone becomes a much less useful technology called a brick. Performance misgivings can also happen when you buy into the wrong kind of performance, like writing throwaway scripts in a language that prioritizes runtime performance above developer performance. Imagine a tool that takes one second before it starts running useful code, and then calling it thousands of times from a build system. I’ve done this. Not a good idea!

Here, I present a list and description of the performance dimensions that have actually mattered so far in my software engineering career. No flex, no marketing, no metrology. My hope is to separate the different kinds of performance so you can identify the right tradeoffs for your application.

Asymptotic Performance

This is the type of performance software engineers typically study in school, and it’s probably the right starting category: the problems we face are always growing, and the larger they grow, the better they are described by their asymptotic performance. This category asks how the application behaves when faced with large inputs. Does the solution scale to large problems? That doesn’t necessarily mean a large number of users; rather, it asks how the cost of the solution grows with the size of the problem.
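One quick way to get a feel for this is to time a routine at doubling input sizes and watch how the cost grows. Here’s a rough Python sketch (the selection sort is just a stand-in for a solution that doesn’t scale; none of this comes from a real project):

```python
import random
import time

def naive_sort(items):
    """Selection sort: cost grows roughly with the square of the input size."""
    items = list(items)
    for i in range(len(items)):
        j = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[j] = items[j], items[i]
    return items

# Time both approaches at doubling input sizes to see how the cost scales.
for n in (500, 1_000, 2_000, 4_000):
    data = [random.random() for _ in range(n)]

    start = time.perf_counter()
    naive_sort(data)
    quadratic = time.perf_counter() - start

    start = time.perf_counter()
    sorted(data)  # built-in O(n log n) sort
    loglinear = time.perf_counter() - start

    print(f"n={n:>5}  selection sort: {quadratic:.3f}s  built-in sort: {loglinear:.4f}s")
```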

Another way to describe asymptotic performance is the eventual performance of the system. For example, perhaps your runtime has an excellent just-in-time compiler that produces extremely optimized code once the application has run for long enough. Eventually, your application consists of only optimized code, and at that point you have the asymptotic performance of the system.
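A rough way to see this effect is to run the same workload repeatedly and watch the early iterations pay the warm-up cost. The sketch below assumes a JIT-compiling runtime such as PyPy; on plain CPython any differences would mostly come from caches instead:

```python
import time

def hot_loop(n):
    # Simple numeric loop that a JIT can optimize heavily once it's warm.
    total = 0
    for i in range(n):
        total += (i * i) % 7
    return total

# Run the same workload repeatedly; under a JIT runtime the early iterations
# pay for compilation and the later ones show the eventual, fully optimized
# (asymptotic) performance of the system.
for run in range(10):
    start = time.perf_counter()
    hot_loop(2_000_000)
    print(f"run {run}: {time.perf_counter() - start:.4f}s")
```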

Initialization Performance

What if you don’t anticipate seeing 1 billion integers for your sorting algorithm? What if your solution is a tool that quickly validates a configuration file? How about handling a web request that you expect will always be small? In these cases, asymptotic performance won’t make or break your solution. We need another tool to understand performance.

Initialization performance describes how long it takes to get started solving the problem. This matters a great deal for web requests, where search engines will judge your site based on time to first byte, and users will simply click away if the site doesn’t appear to be loading within a second or two. It also matters greatly when you expect your solution to initialize many times. If you’re paying the initialization cost repeatedly, that cost needs to be consistently small.
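To make the repeated-initialization case concrete, here’s a small Python sketch of the build-system scenario from earlier: spawning a fresh interpreter per file pays the startup cost over and over, while a single batched run pays it once. (The file names are hypothetical, and the child process does no useful work at all, so the timings show pure initialization cost.)

```python
import subprocess
import sys
import time

# Hypothetical scenario: a build system shells out to a tool once per file.
# Even if the tool does almost no work, each invocation pays interpreter startup.
files = [f"file_{i}.cfg" for i in range(50)]

start = time.perf_counter()
for _ in files:
    subprocess.run([sys.executable, "-c", "pass"], check=True)  # startup only
per_invocation = time.perf_counter() - start

start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)  # one shared startup
batched = time.perf_counter() - start

print(f"one process per file: {per_invocation:.2f}s")
print(f"single batched run:   {batched:.2f}s")
```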

Consistent Performance

When rendering a frame in a video game, whether a graphical frame or a physics calculation frame, it’s important to deliver on time, every time. Finishing faster doesn’t have benefits the user will notice, but the user will definitely notice a spike where we take too long and fail to have the frame ready in time. This causes stuttering, which is a significant issue for real-time interactive applications. Here, it’s not so much the absolute performance of your algorithm that matters as the consistency of its performance, so that you meet the deadline every time. The wrong choice, like garbage collection or a high-performance algorithm with rare but poor worst-case behavior, might satisfy other angles on performance while causing missed deadlines in a consistency-sensitive application.
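To put numbers on consistency, it helps to look at percentiles and the worst case rather than the average. Here’s an illustrative Python sketch (the pause probabilities and durations are made up) that simulates frames with an occasional garbage-collection-like stall and checks them against a 60 Hz budget:

```python
import random
import statistics
import time

FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms per frame to hold 60 frames per second

def simulate_frame():
    """Most frames are cheap, but a rare 'GC-like' pause blows the budget."""
    time.sleep(0.030 if random.random() < 0.02 else 0.005)

frame_times = []
for _ in range(300):
    start = time.perf_counter()
    simulate_frame()
    frame_times.append((time.perf_counter() - start) * 1000)

frame_times.sort()
p50 = statistics.median(frame_times)
p99 = frame_times[int(len(frame_times) * 0.99)]
missed = sum(t > FRAME_BUDGET_MS for t in frame_times)

print(f"p50: {p50:.1f} ms  p99: {p99:.1f} ms  max: {max(frame_times):.1f} ms")
print(f"missed the {FRAME_BUDGET_MS:.1f} ms budget {missed} times out of {len(frame_times)}")
```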

Note: Please excuse me as I struggle to find a better name for this distinct kind of performance.

Perceptive Performance

Up to this point I’ve mentioned the user experience several times. If there’s a way to improve the user experience without actually increasing the real performance of the system, the performance is nevertheless increased as perceived by the most important part of the system!

For example, we can lazy load resources that the user doesn’t need to initially render a web page. This doesn’t change the total amount of data loaded, and it may even increase the data transferred in some cases, but it improves how the application feels. In first-person shooters, the server grants a large tolerance in hit calculations based on the idea that users are more frustrated by hits that don’t register than by hits that register but should have missed. In this case, you might not be able to improve the real accuracy of hit registration, but you can improve user satisfaction, which is effectively the same thing from the user’s point of view.
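As a small illustration of the lazy-loading idea, here’s a Python sketch where the cheap part of a page is available immediately and the expensive part is only computed if it’s ever needed. (The Page class, its methods, and the two-second delay are all hypothetical.)

```python
import time
from functools import cached_property

class Page:
    """Hypothetical page object: show the cheap part now, defer the rest."""

    def render_above_the_fold(self):
        return "<h1>Headline</h1>"  # cheap, available right away

    @cached_property
    def recommendations(self):
        time.sleep(2)  # stand-in for an expensive fetch the user may never scroll to
        return ["related article A", "related article B"]

page = Page()
start = time.perf_counter()
print(page.render_above_the_fold())  # the user sees content almost instantly
print(f"first paint after {time.perf_counter() - start:.3f}s")

# The expensive work only happens if and when it is actually needed.
print(page.recommendations)
print(f"full content after {time.perf_counter() - start:.3f}s")
```

The total work is the same or greater, but the part the user is waiting on arrives sooner, which is the point of perceptive performance.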

Developer Performance

With infinite time and resources, anyone could design and implement the best possible solution every time. In the real world, tradeoffs are necessary. In my experience, hardware is cheap but people are expensive. Learn and encourage a scripting language, or whichever language works best in the problem area. Invest in your tools. Make choices that account for the real costs of producing the solution. As an extreme example, it’s better to have a solution that’s at least an improvement over the status quo than no solution at all. The best possible program is irrelevant if you can’t deliver it in time, and so is the program you can’t feasibly maintain when the customer needs to sustain the solution over a long period.

