Performance Anxiety: Improving Product Performance

Product performance is one aspect of a software product that every company developing software struggles with, and with very little outside guidance or standards to support them. A user's impression of how fast your product moves from screen to screen, how quickly it calls up lists of records, and how seamlessly it performs tasks when they click a button forms one of the cornerstones of their opinion about the quality of your product.

Perhaps even more than other aspects of their job, Product Managers must wrestle with how to approach product performance on their own, with few standards in a world of quickly changing technology. As they work with Customer Service to determine the severity of the impact on customer satisfaction, and with Development to determine what can realistically be done, they have scant objective advice to go on.

Read on for some advice, based on 17 years of experience working with software, on how to approach the performance tuning of your software product so that you build a reputation for quality and satisfied customers.



Right to the Heart of the Matter

From the inside of your product looking out, it's easy to be philosophical about slow performance. You work with the whole product, you've seen it at its finer moments, and you realize that there are bound to be areas where performance is slower than others. You take the larger view and tell yourself it's okay.

But that's a dangerous perspective if you want to keep customers happy. Many users of your product spend much of their time working in only one small area. Or, as part of making use of your product on a grand scale, a user may spend hours or days mostly entering data on one screen, perhaps creating long lists of maintenance data that they will later use for transactions. And if the screen a customer is using for hours or days is slow, it quickly becomes exasperating, and it's all they can see about the product.

So product performance cuts right to the heart of the matter when it comes to customer satisfaction and judging overall quality. It's a common pitfall for a software organization to believe that performance is not a serious consideration. Development focuses on incredible new features, only to be caught off guard by a groundswell of negative reaction from the grassroots users. Performance is a key factor, and frequently enough the main factor, in how your customers judge the quality of your product.

A Morale Issue – Customer and Internal

You can see from the description above how easily performance can become a customer morale issue that tips the balance in the perception of the product and its quality. But system performance can also become a severe morale issue inside your own organization. Nonspecific but very negative complaints, and sweeping criticism of performance made without investigating whether the issue was due to hardware, your software, or the customer's environment, can leave developers feeling targeted and demoralized.

What this means is that your company needs to conduct a PR campaign to address product performance, and the responsibility for this falls squarely on the shoulders of the Product Manager. You play a vital role by working diligently and tirelessly to counter unfair perceptions and put performance issues into perspective, both with customers and with your teammates.

Your public relations campaign will include such activities as setting the expectation that performance issues will occur, and when they do it doesn't mean that your product is of poor quality. It will also aim to help people focus on quickly identifying, diagnosing, and responding to performance issues.

The Tipping Point: Customer Mass

In my experience, performance issues only begin to be a factor in product quality at a certain point in the product lifecycle. That point is when you reach a certain mass of product usage. That means both a certain number of customers (especially for a consumer product) and a certain volume of activity and data (particularly for a business product). It's when many people are using the product every day, or when there are thousands or tens of thousands of records such as customers, parts, or transactions, that small differences in the efficiency of the code add up to noticeable problems with slowness.

If you provide your product on a hosted system, where many customers share resources such as databases and servers, performance problems can be compounded. A hosted system is basically a gigantic customer system, with more data, transactions, and users than any system for a single customer. With a hosted system, your product can quickly reach a scale that magnifies small performance inefficiencies in the code.

When you have a product that is selling well, with a growing customer base, be on the lookout for the arrival of this tipping point. And as Product Manager, be ready to help your organization engrain performance tuning into its ongoing activities and skill sets.

You Will Always Be Tuning Performance

It is essential that your organization reach the understanding that you will always be tuning performance. It is an ongoing effort, because as the product, user base, and data in the product grow, new performance bottlenecks will rise to the surface.

If you do not set the expectation that performance tuning will be ongoing – and should be so – your organization will continually treat performance problems as an exception, only reacting to them when customer complaints reach a certain noise level, at which point customers are usually quite displeased. Because you waited so long, you'll be playing catch-up, rather than staying alert for new issues as they arise, and addressing them quickly.

The other downside of not setting expectations properly is that you magnify the negative effect (in terms of morale and customer satisfaction) of performance issues. When a customer reports a performance problem, the hotline reps treat it as an unexpected failing of the software. Rather than anticipating the call and being prepared with a response that will facilitate a fix, your organization reacts poorly in front of the customer and departments point fingers at each other.

If, on the other hand, your organization expects that performance tuning will be ongoing, it reacts very differently. The support line reps are well trained in how to take the calls (and how to gather actionable information), quickly classifying the problem and explaining how it will be handled. Development has time set aside in the schedule for fixing performance issues as they arise. And QA is prepared with ways to test and measure performance improvements.

Is It Customer Expectations or Quality?

Dissatisfaction with performance can just as easily be due to unrealistic customer expectations as to actual problems with quality. User expectations are set by office productivity software that sells billions of licenses, where the software maker can afford to invest millions in improving performance. Or else users compare your software to simpler, single-function tools they find on the web to shop online or calculate mortgage payments. It simply may not be realistic for your software to perform at that same level, and the advantages your software brings to your customer's organization may far outweigh the performance concerns.

That is why I spoke about the idea of a PR campaign. PR campaigns usually involve not only the communication of improvements and good news, but also resetting expectations so that your audience is more accepting of the current situation. If you do not help guide your customers towards more realistic expectations, you will find yourself fighting a losing battle where anything less than split-second response is not good enough.

Improving customer satisfaction by creating more realistic expectations in no way relieves you of the need to improve your product's performance. If you do not make an effort to improve your product, its performance will inevitably become worse as data and number of users grow. But you are unlikely to satisfy customers if you do not work on the expectations side of the equation.

New Technology and Open Technology

There was a time when software companies controlled and mastered the hardware, software, and all the technology used to build their product. But the move to web-based software has brought in a whole set of newer technologies offering less control over performance, or simply poorer performance, in exchange for easy integration of components and accessibility over the Internet.

Open technology has created a whole set of software components that mean your Development team can quickly incorporate new capabilities without building the functionality from the ground up. But with that speed to market comes a certain loss of control. These components are not developed in-house and may not be something your developers can fine-tune to any significant degree.

I would argue that these developments have resulted in software that probably has a slower response time, if measured in seconds to process transactions, than the software products we had years ago. This is balanced by the fact that these new products are easily accessible 24/7 from any web browser, and the convenience to users and centralized maintenance carries the day. But this new technology does mean that development teams struggle harder with performance issues.

If It's Not Your Product, Is It Your Fault?

The makers of products created years ago could control not just the software, but the whole system, including the hardware and network it ran on. They could work with all the variables to make sure that performance was optimized.

But the performance of a web-based product today depends upon many variables, some of which are beyond your control as the maker of the software. Among these are the speed (or lack of it) of internet traffic, customer environment variables that slow down traffic, the path that pages take across the internet and through a customer's network, and router and browser settings.

We have often had customers report slow response time on our product, only to try it out ourselves, with a copy of the exact database, and find that a response time of 120 seconds at the customer's location was a mere two seconds on our own network. The problem often turned out to be network settings causing the slowdown. Fixing such problems requires technical personnel on your side who are well versed in network settings and can walk through them with one of the customer's IT staff. In some cases the customer's IT department fought any suggestion that their own network was a factor in performance, refusing to review settings, and, when those settings were finally proven to be the cause of the slow performance, refusing to tweak them.

That is an extreme example, but there is an abundance of similar though milder examples of slow performance due to factors under a customer's control, and not yours. Which raises the question: "If it's not your product which is the cause, is it your fault?" To which I would respond that while it may not be your fault, it's still your customer satisfaction issue to deal with. It takes patience, calmness, and perseverance to uncover performance problems stemming from customer environments and to walk the customer through improving or removing them.

How Do You Assess Performance?

Another important piece of the puzzle is determining how to objectively assess and measure performance. Unless you are successful in establishing objective measurements, everything remains a matter of hearsay and opinion, from customers and internal sources alike.

The first pass at measuring this will involve having your customer time the response, in seconds, after they press a button or perform an action. Then you need to do the same in your environment, ideally with a copy of their data.

Once you have established a consistent time in seconds to complete a certain action, you can apply other tools to measure the components of this time, looking at database and network response, something you may be measuring in milliseconds or microseconds.

The initial measurements are your before numbers. These numbers are something you can take to Development and to your system and database admins to begin working on improvements. As each tweak or change is implemented, measure the same components to obtain your after numbers and to assess whether a change has made a positive difference.
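
To make the before-and-after idea concrete, here is a minimal sketch in Python. The run_action and run_db_query functions are hypothetical placeholders standing in for whatever your product actually does (say, calling up a customer list screen and the main database query behind it); the timing and comparison logic is the part that matters.

    import statistics
    import time

    def time_call(fn, runs=5):
        # Run fn several times and return the median elapsed seconds;
        # the median smooths out one-off spikes from caching or network noise.
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    def run_action():
        # Placeholder for the user-visible action being measured,
        # e.g. calling up the customer list screen against a copy of the data.
        time.sleep(0.2)

    def run_db_query():
        # Placeholder for one component of that action,
        # e.g. the main database query behind the screen.
        time.sleep(0.05)

    # "Before" numbers, captured prior to any tuning.
    before = {name: time_call(fn) for name, fn in [("action", run_action), ("db_query", run_db_query)]}

    # ... Development applies a change, then the same measurements become the "after" numbers ...
    after = {name: time_call(fn) for name, fn in [("action", run_action), ("db_query", run_db_query)]}

    for name in before:
        print(f"{name}: {before[name]:.3f}s before -> {after[name]:.3f}s after")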

There are many more details involved in measuring the components of performance, but the key insight here is that you must drill down into these details if you truly want to understand and resolve performance issues.

One Small Step For a Program …

… But a Giant Leap in Performance For Your Product.

While there are many factors that might contribute to a performance problem, and all of them could be tweaked and tuned, there may very well be a single change or two, either to the code or a network setting, that leads to most of the improvement. Your developers are better off making one or two changes at a time and measuring the results, stopping when improvement is significant, in order to save time.

On the other hand, I have often seen performance situations where significant improvements required many changes. Each change leads to an incremental improvement, and together the changes add up to a significant improvement.

Because of this, performance improvements require a close collaboration between developers and testers, often testing and measuring progress after each change, in order to effectively improve your product in the least amount of time possible.
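
A minimal sketch of that change-at-a-time workflow, again in Python with hypothetical measure_action and apply_change stand-ins, might look like the following: each candidate change is applied and measured on its own, and the effort stops once the improvement over the baseline is significant enough.

    import time

    def measure_action():
        # Placeholder for timing the slow action; in practice this is the
        # same before/after measurement described above.
        start = time.perf_counter()
        time.sleep(0.05)  # stand-in for the action being measured
        return time.perf_counter() - start

    def apply_change(change):
        # Placeholder; a real change would alter code, a query, or a setting.
        print(f"applying: {change}")

    candidate_changes = [
        "add index on customer name",   # hypothetical examples, ordered by expected payoff
        "cache lookup lists",
        "rewrite the search query",
    ]

    baseline = measure_action()
    target = baseline * 0.5  # e.g. aim to cut the response time in half

    for change in candidate_changes:
        apply_change(change)
        current = measure_action()
        print(f"after '{change}': {baseline:.2f}s -> {current:.2f}s")
        if current <= target:
            break  # improvement is significant; stop and bank the time saved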

Just the Facts, Please

The final challenge you face with performance tuning is really what starts it all, namely how to handle and process performance issues as they are reported in order to define them and feed them to Development. It is critical to collect input on performance from customers that is specific enough that Development can take action.

Coach and train all customer-facing team members in what information to collect when a customer reports a performance problem, or when internal staff encounter one. Performance-related input is often frustratingly vague. "The product is too slow" just doesn't give you information you can do something with.

When a customer speaks about the product being slow, the key information to gather is what, specifically, they were doing that was slow. Were they adding a new employee? Were they using the Copy button on the Reports list? Were they using a specific search field? That's the first piece of information.

The next piece of information is how consistently the problem occurs. Is it slow every time? Is it fast some times and slow others? This can give developers some clue about what factors might be involved.

A third piece of information is a rough measure of the problem. Is someone complaining about a two-second response, or a four-minute one? If the problem is a two-second response, some expectation setting is in order, but it also can't hurt to have a developer check the code to see if any obvious improvements can be made.
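
One way to make that intake concrete is a structured record that support reps fill in for every report. Here is a minimal sketch in Python; the field names are hypothetical, but they capture the three pieces of information above plus basic environment notes.

    from dataclasses import dataclass

    @dataclass
    class PerformanceReport:
        # Hypothetical intake record for a customer performance complaint.
        customer: str
        action: str             # what, specifically, was slow
        frequency: str          # "every time", "intermittent", "only at month end", ...
        rough_seconds: float    # rough measure of the response time reported
        environment_notes: str  # browser, network, hosted vs. on-premise, etc.

    report = PerformanceReport(
        customer="Example Co.",
        action="Using the Copy button on the Reports list",
        frequency="every time",
        rough_seconds=45.0,
        environment_notes="Hosted system, accessed over the customer's VPN",
    )

    # A report this specific is something Development can act on;
    # "the product is too slow" is not.
    print(report)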

Finally, training your team members to collect performance information includes an important element: coaching them in how to help set customer expectations about what performance improvements are realistic, and about factors outside of your company's control which may be involved. This is part of the PR campaign that aims to let your customers – and your teammates – know that performance issues are to be expected as the product grows, that your organization is ready and willing to address them, and that they can expect to see ongoing tuning and improvements to make your product better as it reaches new milestones in its development.

— Jacques Murphy, Product Management Challenges

ProductManagementChallenges.com

