More Performance Anxiety: More About Performance

The previous article on Improving Product Performance must have hit home, because I received more responses to it than any other issue. This reinforces my belief that product performance is an area where there is very little guidance out there.

Not only do developers struggle with performance and scalability, often flying blind, but the problem spills over into product management, where it's hard to figure out how performance issues fit into the product roadmap and requirements.

Readers made some valuable points and asked some useful questions, so today's article offers more guidance on improving product performance so that your software remains competitive.

Many thanks to those of you who wrote in!


Good Performance Must Be Baked In

If you want your software product to perform and scale well, both qualities have to be designed in from the start. Performance is not something you can leave out of the design and make up for later. Both have to be part of your product's basic architecture so that they can grow as use of your product increases.

However, excellent performance and scalability in an initial design will not change the fact that your product and customer databases will grow over time in unpredictable ways, causing previously unseen performance issues to surface. This is all the more true if your product architecture accommodates multiple customers and databases that share system resources, code, and processes. A product designed under the assumption that the employee table would never go above 15,000 rows will probably run into trouble when you sell to a customer with 40,000 employees.
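To make that concrete, here is a minimal sketch in Python that times the same unindexed lookup at several table sizes, using an in-memory SQLite table as a stand-in for a real product database. The table, query, and row counts are all invented for illustration.

    import sqlite3
    import time

    def time_lookup(row_count: int) -> float:
        """Build an employee table of row_count rows, then time one lookup."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE employee (id INTEGER, name TEXT)")
        conn.executemany(
            "INSERT INTO employee VALUES (?, ?)",
            ((i, f"employee-{i}") for i in range(row_count)),
        )
        start = time.perf_counter()
        # No index on name, so this is a full table scan whose cost
        # grows with the size of the table.
        conn.execute(
            "SELECT id FROM employee WHERE name = ?",
            (f"employee-{row_count - 1}",),
        ).fetchall()
        return time.perf_counter() - start

    for rows in (15_000, 40_000, 100_000):
        print(f"{rows:>7} rows: {time_lookup(rows) * 1000:.2f} ms")

In a real product the fix might be an index or a rewritten query; the point is that a cost invisible at the original data volume only surfaces once the tables outgrow the design assumption.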

The fact that new performance issues will inevitably surface in a product that was well designed from the start often comes as a surprise to your organization, and the accusations fly. When performance problems crop up, it doesn't necessarily indicate a flaw in the overall product design and architecture. Rather, it's a natural part of your product's success.

Helping everyone in your organization and in your customer base see performance issues in perspective is an important part of the PR campaign that a Product Manager is uniquely qualified to conduct.

What Exactly Constitutes Good Performance?

One of the questions that came up from readers is: "What exactly is good performance versus unacceptable response time? How do I know what to aim for?" That's a really tough question to answer.

In my experience, everyone's first expectation is subsecond response time, ideally so short that no wait or delay is noticeable. By designing your product for such performance from the start, you have a chance of achieving it. But such fast response may not actually be possible for your product. You will need to run performance tests early and often to see what response times you can actually achieve. Then work to reduce the times that are longer than a second until you reach minimum times that don't respond easily to further attempts at improvement. Those minimums are your benchmarks.
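As a rough illustration, here is a minimal sketch in Python of that kind of test. The operation names and sleep calls are placeholders standing in for real product actions; the point is the shape of the check, not the numbers.

    import time

    SUBSECOND = 1.0  # the response time everyone expects by default, in seconds

    def open_order_screen() -> None:
        """Hypothetical product operation; replace with a real screen change."""
        time.sleep(0.4)

    def run_payroll_summary() -> None:
        """Hypothetical heavier operation, such as a large report."""
        time.sleep(1.8)

    operations = {
        "open order screen": open_order_screen,
        "payroll summary": run_payroll_summary,
    }

    # Time each operation and flag anything over the subsecond expectation.
    for name, operation in operations.items():
        start = time.perf_counter()
        operation()
        elapsed = time.perf_counter() - start
        verdict = "candidate for tuning" if elapsed > SUBSECOND else "ok"
        print(f"{name}: {elapsed:.2f}s ({verdict})")

Operations that stay over a second after repeated rounds of tuning are the ones whose floor times become your published benchmarks.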

Once you have the actual benchmarks for your product, it's time to conduct a good PR campaign to set expectations for performance and response time with the customer base and your own organization. Unfortunately, your choice is to set expectations or to have them set for you. Without guidance from you, everyone will naturally expect performance so fast that they never notice a delay.

However, increased use of your product, especially in a hosted environment with shared resources, will lead to situations where you don't meet the expectations you have worked to set. That's where your organization needs to be set up to respond quickly to performance issues, defining them and then fixing them to the best of your ability.

How Do You Measure Performance Improvement?

Testing and improving performance boils down to measuring the time it takes for your software to process data or change screens. When performance is at its worst, you measure it with a watch with a second hand (or a minute hand, when it's that bad) or a stopwatch. First you run through an operation in your product multiple times, on various hardware configurations and with various volumes of data, to obtain average, minimum, and maximum times. Then you measure the reduction in seconds after improvements are applied.
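Here is a minimal harness for that in Python, assuming a hypothetical run_operation() standing in for the product action being measured and an invented baseline figure; in practice you would rerun it on each hardware configuration and data volume you care about.

    import statistics
    import time

    def run_operation() -> None:
        """Hypothetical stand-in for the operation being measured."""
        time.sleep(0.2)

    def measure(runs: int = 10) -> tuple[float, float, float]:
        """Return (min, average, max) wall-clock seconds over several runs."""
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            run_operation()
            times.append(time.perf_counter() - start)
        return min(times), statistics.mean(times), max(times)

    # Record these numbers before an improvement, apply the change, then rerun.
    baseline_avg = 2.4  # invented example: the average recorded before tuning
    lo, avg, hi = measure()
    print(f"min {lo:.2f}s  avg {avg:.2f}s  max {hi:.2f}s")
    print(f"reduction vs. baseline: {baseline_avg - avg:.2f}s")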

Those times, seen from the end user's perspective, need to be broken down further into components. Separate the time it takes to process a transaction on the server from the time it takes to send information over the network. You may make some surprising discoveries, such as finding that your database processes a request almost instantly, but it takes 20 seconds for the result to cross a customer's network to their web browser. Such a discovery points you away from performance tuning and toward tweaking settings on that customer's network.
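One way to get that breakdown, sketched here in Python: time the full round trip from a client and have the server report its own processing time. The Server-Timing header used below is a standard mechanism, but instrumenting your server to emit it is an assumption here, and the URL is a placeholder.

    import time
    import urllib.request

    URL = "http://example.com/report"  # placeholder; point at a real product page

    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
        # Assumes the server reports its own processing time in entries
        # like "db;dur=12.3, app;dur=45.6" (durations in milliseconds).
        server_timing = response.headers.get("Server-Timing", "")
    total = time.perf_counter() - start

    # Sum the dur= fields to get total server-side processing time.
    server_ms = sum(
        float(field.strip()[4:])
        for entry in server_timing.split(",")
        for field in entry.split(";")
        if field.strip().startswith("dur=")
    )

    print(f"total round trip:    {total * 1000:.0f} ms")
    print(f"server processing:   {server_ms:.0f} ms")
    print(f"network and browser: {total * 1000 - server_ms:.0f} ms")

Whatever is left after subtracting server time is network transfer plus client-side rendering, which tells you whether to call your developers or the customer's network admin.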

You may also find yourself measuring system resources to determine how a given action in your software consumes CPU, memory, and disk. While these measurements are invisible to your users, they can help developers pinpoint which aspect of the product is slowing performance.
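For example, here is a sketch using the third-party psutil library to snapshot CPU time and resident memory around a hypothetical operation. The workload is invented; to watch a separate server process instead of your own, pass its PID to psutil.Process.

    import psutil  # third-party: pip install psutil

    def run_operation() -> None:
        """Hypothetical stand-in for the product action being profiled."""
        data = [list(range(1_000)) for _ in range(5_000)]  # simulated work
        sum(len(row) for row in data)

    proc = psutil.Process()  # the current process

    cpu_before = proc.cpu_times()
    rss_before = proc.memory_info().rss

    run_operation()

    cpu_after = proc.cpu_times()
    rss_after = proc.memory_info().rss

    print(f"user CPU:   {cpu_after.user - cpu_before.user:.2f}s")
    print(f"system CPU: {cpu_after.system - cpu_before.system:.2f}s")
    print(f"RSS growth: {(rss_after - rss_before) / 1024 / 1024:.1f} MiB")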

What About Requirements and Specs?

Finally, there's the issue of how to define requirements and specs for performance improvements. The challenge here is that the details of performance tuning are likely to be beyond a Product Manager's skill set. They are best left to developers, system admins, database admins, and network admins. The most effective assistance a Product Manager can provide is to help Development treat performance improvement as ongoing work, allocate time for it in the project plan, and identify the team of people who should be involved, so that all the factors in performance and scalability are considered. A performance tuning effort runs more like a bug-fixing or maintenance team than like a feature team.

In addition to the specialists mentioned above, the team must include QA testers, and everyone should expect to work together iteratively, testing what may be a number of small improvements that add up to a lot.

One further area where a Product Manager can be of help is locating customers who would be willing to test performance enhancements for those situations where you do not have access to actual customer databases with representative volumes of records or transactions. Only when customers test out performance in their own environment can you be sure that you have achieved acceptable response times. Often, too, when you get system and network admins from the customer working together with your developers and admins, their combined expertise achieves great results.

Does It Ever Stop?

I don't think performance tuning ever stops, and in many cases the results you achieve in a single release are not as good as you would like, leading you to keep trying to improve specific operations with each release. Product Management plays a vital role in all this by setting expectations and helping to organize the effort to improve your product performance.

— Jacques Murphy, Product Management Challenges

ProductManagementChallenges.com
