Avoiding the Kill-Me-Now User Experience


The Path to Flexible IT – Part 2 of 5:  In part 1 of this series, I discussed the importance of knowing who your users are.  In this article, I look at what it takes to make and keep them happy.

Quality of Experience

It’s a simple truth that businesses depend on their users.  These users, in turn, rely on applications and services to be productive and happy.

  • If the users are your employees or contractors, they want systems that give them access to the information they need, when they need it and, increasingly, on a variety of endpoint devices.
  • Your partners and suppliers want to integrate seamlessly into your operations.
  • And, most importantly, your customers want a great user experience.

Get all of these right and you are well on your way to success.  Get them wrong and problems are sure to follow.  Quality of experience (QoE) is a description of how well you’re able to meet the needs and expectations of your users.

The Old Way of Delivering IT

Back when dinosaurs roamed the earth and the mainframe was the only type of computer, information was centralized.  If you needed access, you prostrated yourself before the MIS priesthood who dwelt in the ivory tower and hoped your supplications would be answered – eventually.

Fast-forward fifty years, and you’ll find that many companies still take this approach to information technology.  They centralize all information processing in a single location – perhaps a headquarters data center – from which they serve the world.

Users located at or near headquarters generally experience high QoE.  Let’s call this the “Headquarters Experience.”  Because everything works well, HQ employees tend to be the most productive.  This fact can lead to the perception that HQ employees work harder and that users who complain about application performance are just a bunch of lazy whiners.

If we move some distance away from HQ – perhaps 600 miles (or 1,000 kilometers for those who prefer logical units of measurement) – we may find branch offices.  Users in these locations usually get good QoE, but certain latency-sensitive applications, such as video conferencing or chatty client-server applications, may have issues.  If branch office users happen to be visiting headquarters and report their experience, the typical response is for an HQ user to show them that the application works just fine and to question the complainer’s technical competence.

Going even further afield – let’s say 3,000 miles (~5,000 km) – we get the remote office experience.  These users get the inverse of the branch office experience: only a few applications work well and the rest are a pain to use.  These users may complain regularly about their poor QoE, often to little avail.

Finally, users who are the farthest away from headquarters – perhaps 5,000 miles/8,000 km or more – get the “kill-me-now experience.”  For these users, working for or doing business with your company is a constant struggle.  Nothing works and there is no love.

[Figure: The four user-experience zones by distance from headquarters]

  1. Headquarters Experience – Users located in close proximity to where applications are hosted experience good application performance.
  2. Branch Office Experience – Still positive end-user experience, but certain applications may be impacted.
  3. Remote Office Experience – Negative end-user experience because of distance from application delivery.
  4. Kill-Me-Now Experience – Users find it very difficult to do anything.

How Good is Good?

How then do you define a good user experience?  Well, as the maxim often attributed to management guru Peter Drucker goes, “If you can’t measure it, you can’t manage it.”  So a key step in ensuring a good user experience is establishing objective targets for application performance.  For example, you might set a goal for web pages to load within 3 seconds.  For business applications, perhaps you want a transaction to complete within 5 seconds.  For video conferencing, you might set a goal of <1% packet loss.  The key point is that any discussion of user QoE needs objective targets.
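To make this concrete, here is a minimal sketch in Python of how such targets might be captured and checked.  The metric names and structure are purely illustrative; only the target values come from the examples above.

```python
# A minimal sketch of recording objective QoE targets and checking a
# measurement against them. The names and structure are illustrative,
# not any particular tool's format; the values match the examples above.

QOE_TARGETS = {
    "web_page_load_seconds": 3.0,   # pages should render within 3 seconds
    "transaction_seconds": 5.0,     # business transactions within 5 seconds
    "video_packet_loss_pct": 1.0,   # video conferencing under 1% packet loss
}

def meets_target(metric: str, measured: float) -> bool:
    """Return True if a measured value is within its agreed target."""
    return measured <= QOE_TARGETS[metric]

# Example: a 4.2-second page load misses the 3-second goal.
print(meets_target("web_page_load_seconds", 4.2))  # False
```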

The good news is there are tools available for measuring and monitoring application performance.  These generally fall within a category called Application Performance Management (APM).  The degree to which these tools can actually manage performance, however, will depend a lot on the underlying application architecture.  This is because no amount of code or systems tuning will completely make up for a slow network.  Darn those pesky laws of physics!
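To give a flavor of what such tools measure, here is a rough sketch of the simplest possible active probe: timing how long a page takes to load so the result can be compared against the target.  The URL is a placeholder, and real APM products capture far richer server-side and network-level detail than this.

```python
# A rough sketch of the simplest kind of active probe: time how long a
# URL takes to respond. The URL below is a placeholder; real APM suites
# also collect server-side and network-level timings this cannot see.

import time
import urllib.request

def measure_load_seconds(url: str) -> float:
    """Fetch a URL and return the elapsed wall-clock time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.monotonic() - start

elapsed = measure_load_seconds("https://example.com/")
print(f"Page loaded in {elapsed:.2f}s (target: 3.00s)")
```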

Different applications and service types will have different performance targets.  As illustrated in the following figure, some applications will work reasonably well even over very slow connections (think email delivery, for example), while others – such as streaming video – will be unusable if network capabilities fall below their performance threshold.

[Figure: Performance thresholds by application type – email delivery tolerates very slow connections, while streaming video becomes unusable below its threshold]

The important thing is to classify your applications and to set objective performance targets for each.
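As a simple illustration of that classification, the sketch below maps application classes to assumed minimum-bandwidth thresholds (the numbers are invented for the example) and flags which ones a given link cannot support.

```python
# An illustrative classification, assuming a minimum-bandwidth threshold
# per application class. The thresholds are made up for this example;
# in practice you would derive them from your own measurements.

APP_THRESHOLDS_KBPS = {
    "email delivery": 64,        # tolerant of very slow links
    "web browsing": 1_000,
    "voice calls": 100,
    "video conferencing": 2_000,
    "streaming video": 5_000,    # unusable below its threshold
}

def unusable_apps(available_kbps: int) -> list[str]:
    """List applications whose minimum requirement exceeds the link speed."""
    return [app for app, needed in APP_THRESHOLDS_KBPS.items()
            if needed > available_kbps]

# Example: on a 1.5 Mbps link, video is the first casualty.
print(unusable_apps(1_500))  # ['video conferencing', 'streaming video']
```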

Time is Money

Getting users to agree to application performance targets isn’t easy – especially when the costs of improvements are factored in.  But the alternative can be far worse.  Technology giants including Google, Amazon and Microsoft all cite statistics on how much business they lose when application performance degrades.  That’s why establishing and maintaining quality-of-experience targets is so important.  Remember, it’s far more difficult and expensive to win back unhappy users than it is to keep them happy in the first place.
