
The Lazy Person's Guide to IT Performance Measurement
By Air Force Maj. Dale Long - January-March 2001
Welcome to 2001 and the true start of the New Millennium. This is the year a psychopathic computer is supposed to kill off virtually the entire crew of our first manned mission to Jupiter. If we actually had a manned mission to Jupiter, that is. At present, we're still talking about maybe getting to Mars by 2015.

Given the advances we made in space exploration in such a short time in the 1960s, was it unreasonable for Arthur C. Clarke to envision us reaching out to Jupiter 32 years later? But our state of the art, at present, is still looking at pictures of Jupiter sent back from unmanned probes like Galileo and Cassini. Have we failed, or are we simply succeeding in a way we didn't expect?

And that is the Question of the Day: How do we know if we're succeeding? In recent years Congress has passed the Government Performance and Results Act and the Information Technology Management Reform Act, both of which mandate the use of outcome-based performance measurements for measuring our return on investment for information technology (IT).

This article will focus on outcome-based performance measurements and how to show the value of a system in terms of mission outcome instead of system performance. Ciphering that puzzle requires a combination of disciplines and management practices, including strategic planning, cost accounting, performance measurement and statistical analysis.

But it will ultimately depend on us being honest with ourselves.

A New View of IT

Let's start with our biggest problem:

In classical economic terms, information systems are capital, producer or investment goods. They do not have any intrinsic utility or value in their own right. We acquire them because they can be used to produce or procure other goods and services, which in turn may offer us utility and value.

In short, IT investment provides no direct value. Information systems are essentially tools, much like bulldozers. A bulldozer has no intrinsic value on its own. In fact, a bulldozer could be seen as a huge liability. It takes up a lot of space, is costly to move about and requires expensive maintenance and highly skilled, costly operators.

(Kind of like old mainframes.)

You can derive value from a bulldozer by digging a hole, leveling an old building or preparing ground for a new road. The potential economic value of a bulldozer is inextricably linked to the result obtained by its appropriate use.

The same principle applies to information systems. The value of an IT investment depends entirely on whether or not it makes us, the people doing the work, more efficient and effective. Spending money simply to improve a system's performance is meaningless; spending money on a system to improve the organization's performance is useful. Sometimes doing the former facilitates the latter, but it's not guaranteed.

Therefore, the value of an information system should be measured and managed by people focusing on business processes and practices, not the system itself.

The National Academy of Public Administration (NAPA) defines performance measurement as a group endeavor that seeks to improve the performance and accountability of an organization, process, program, product, or service through the use of a performance measurement process. The key steps they recommend for a performance measurement process are:

• Agree on basic principles for mission, goals and objectives
• Brainstorm many ideas for measurements
• Select the best measures
• Take action, i.e., develop a plan and monitor progress
• Evaluate and calibrate the measures

The foundation of good performance measurement, therefore, is a good strategic plan.

Strategic Planning Made Simple

Most people dread being roped into a strategic planning exercise. You spend hours, days, weeks, and months cranking out a huge binder that then sits on a shelf somewhere until it’s time to write the next one.

It doesn’t have to be that way.

Strategic planning should be a pretty straightforward process. For some reason, however, people keep trying to reinvent it and make it “better.” I’ve seen many variations, both in theory and in practice, over the nine years I’ve spent dabbling with strategic plans.

The most important thing is to use consistent terms and definitions. The eight key terms below are my recommended set for strategic planning.

Current (As Is)
- Mission – Why we exist.
- Responsibilities – How we add value.
- Programs – How we fulfill responsibilities.
- Activities – Tasks performed to execute programs.

Future (To Be)
- Vision – How we see the future.
- Goals – How we will improve value.
- Objectives – Define specific degrees of change to the status quo.
- Initiatives – Modify activities to achieve objectives.

The current list represents our “as is” environment. Without a good baseline of current activities and value, it’s impossible to tell whether or not a change in the status quo really increases the value of what we’re doing.

The future list shows what we need to address to produce any plan, strategic or otherwise. A vision, for example, should probably consist of one sentence of no more than 17 words that defines our ideal future state.

Goals are broad statements of what we want to accomplish and objectives define specific, quantifiable degrees of change from the status quo. Once we have decided what to change and by how much, we implement initiatives that change our environment to get us there.

Herein is the linchpin of any measurement system: if an organization cannot objectively define and measure its current performance, it has absolutely no chance of measuring the impact of any investment, let alone IT investments.

Good measurements also depend on first defining what your organization as a whole considers success. We typically describe success as the perceived value associated with the execution of our mission and realization of our vision. Here is a short exercise that illustrates this point.

Let’s say some of our goals are to improve readiness, increase retention, and improve quality of life. We take our baseline measurements and discover that only 50 percent of our system administrators are fully qualified for deployment.

One common misconception we often see in planning is that an objective can only map to one goal. This needlessly limits how we think about implementing strategy. We decide that our objective is to increase the number of deployable system administrators to 80 percent because we want to help achieve all three of our goals simultaneously.

Over the next year we complete training initiatives and achieve that 80 percent objective. The first and most obvious benefit is that we have more troops available to support deployed operations. In addition, we’ll improve the quality of life for our best-trained troops by reducing the number of times they have to deploy away from home. Finally, that quality of life improvement could prompt some of those system administrators to stay in the service, thus increasing retention.

We can accurately measure our readiness posture, the amount of time our troops spend deployed and retention rates. If they all go up, our training initiative has apparently succeeded.

There are two big mistakes people make when planning: (1) confusing responsibilities with goals; and (2) measuring activities instead of outcomes.

The first one is very common, but it is fortunately very easy to correct. Look carefully at the verbs in goal statements. If a goal starts with provide, ensure, maintain, protect, or any other term that we commonly associate with the status quo, it’s probably stating a responsibility.

Goal statements should always use verbs of change: improve, increase, build, reduce, eliminate, all of which imply movement from the current state. There is, for example, a difference between the responsibility of providing a rapid deployment force and the goal of building a rapid deployment force.
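Just to make the verb check concrete, here is a minimal sketch in Python of the sort of screening it implies. The verb lists and sample statements are my own illustrative assumptions, not an official taxonomy, and real goal statements still deserve a human read.

    # Illustrative sketch only: flag goal statements that read like responsibilities.
    # The verb lists are assumptions for this example, not an official taxonomy.
    STATUS_QUO_VERBS = {"provide", "ensure", "maintain", "protect", "sustain"}
    CHANGE_VERBS = {"improve", "increase", "build", "reduce", "eliminate"}

    def classify_statement(statement: str) -> str:
        """Return a rough label based on the statement's leading verb."""
        first_word = statement.strip().split()[0].lower()
        if first_word in STATUS_QUO_VERBS:
            return "probably a responsibility"
        if first_word in CHANGE_VERBS:
            return "reads like a goal"
        return "check the verb by hand"

    print(classify_statement("Provide a rapid deployment force"))  # probably a responsibility
    print(classify_statement("Build a rapid deployment force"))    # reads like a goal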

Now that we’ve worked through how to define our current environment, and describe our desired end state, it’s time to deal with the problem of what to measure and how.

Activities versus Outcomes

When measuring IT performance we absolutely must distinguish between activities and outcomes. Here are two simple definitions: (1) an activity is something we do; and (2) an outcome is something we expect to see.

Let’s set another goal: improve customer service. Our objectives should describe our expectations in terms of speed of response, time to fix problems, accuracy of information, lack of errors, customer satisfaction feedback, and so on. We assume that if we respond to and fix problems faster, provide accurate information, don’t make mistakes, and have happier customers, we have succeeded.

Do not, on the other hand, simply measure what people do. We can organize customer service representatives into teams, provide training on dealing with unhappy customers, and build an automated help call tracking system—without ever making any real improvement in customer service. They might be important activities that contribute to actual outcomes, but activities, regardless of their significance, are not outcomes that are meaningful to our customers or their missions.

Here’s another goal I see a lot in plans: reduce costs. On the surface, this looks like a worthy goal. However, it must be placed in context. There are three useful ways to reduce costs and one really bad one. If reducing costs is your only goal, then you can achieve it by firing everyone. Unfortunately, this leaves you with no ability to produce value. Cost cutting should never really be a goal in its own right without some reference to service levels.

Three good applications of this goal are:
- Provide more service for the same amount of money.
- Provide the same level of service for less money.
- Provide slightly lower levels of service for a lot less money.

That last application is a tough one, but we’re facing it every day. How much can we cut to redirect money into other non-IT areas and still provide minimally acceptable service? It might be useful at this point in the discussion to look at a more detailed example of an IT-specific performance measure—and a bad one at that.

Zipped Measurements

Zippy got caught up in the performance measurement fervor over the summer. Never one to sit idly by while the rest of the world is exploring a new management practice, he plunged into outcome-based measurement with his usual combination of zeal and blind faith.

As usual, Zippy started with the grain of a good idea: measure the value of “connectivity.” He determined that there were three basic variables: number of people served (N), bandwidth (B) and cost (C).

His basic formula was N times B divided by C. In theory, this seems appropriate. If you can increase the number of people you serve, or the amount of bandwidth, without increasing cost—you’ll get a larger relative value. You also get a higher value rating if you can provide the same level of service at a lower cost.

Unfortunately, there are some huge disparities between the high and low ranges for variables. Some portable systems serve only a small handful of people, theater deployable communications packages serve thousands, and the Navy Marine Corps Intranet (yes, he tried to include that one too) will serve hundreds of thousands.

Zippy had similar problems with bandwidth and cost. In the systems he included in his study, bandwidth ranged from 75 bits per second to 100 million bits per second. Costs varied widely from system to system, too.

Then Zippy had an epiphany. In a rare moment of insight that transcended his usual cluelessness, he drove systematic misunderstanding to new heights by digging deep into his limited command of mathematics and adding logarithmic functions to his formula.

Without going into the gory mathematical details, logarithms are a way of compressing numbers from a huge range into a very small one while preserving their relative relationships. Using logs, Zippy basically found a way to compress values ranging from 1 to 1.6 billion into a range no larger than 1 to 10 and still maintain their original value relationships.
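For the mathematically curious, here is a rough Python sketch of the core of Zippy’s formula as described above: N times B divided by C, squeezed through a base-10 logarithm. The sample numbers are made up for illustration, and the bonus and penalty modifiers are left out, since only their general intent is described.

    import math

    # Rough sketch of Zippy's core "connectivity value" ratio, log-compressed.
    # Assumes all inputs are positive; his later bonus/penalty modifiers are omitted.
    def connectivity_value(users, bandwidth_bps, cost_dollars):
        raw = users * bandwidth_bps / cost_dollars  # N x B / C
        return math.log10(raw)  # squeezes a huge range into single-digit scores

    # Hypothetical systems, chosen only to show the spread of the raw ratio.
    print(connectivity_value(6, 75, 50_000))                        # small, slow, cheap portable
    print(connectivity_value(300_000, 100_000_000, 1_000_000_000))  # very large enterprise network

The raw ratios here span about six orders of magnitude; after the logarithm they land between roughly -2 and 4.5, which is how a few more tweaks could yield a tidy 0-to-100 scale.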

Now he was in his element—numbers that didn’t exceed his collection of fingers. A few more tweaks and his masterpiece was complete. Modifiers gave bonuses to systems that could transmit classified information and were portable. Penalties were assessed if system users wanted content, like photographic images, that the system couldn’t provide due to bandwidth constraints.

In fairness, it was actually a pretty good formula. For any combination of technology we could reasonably expect in the next 10 years, Zippy’s formula would return a relative value rating somewhere between 0 and 100. It looked good on paper.

In retrospect, however, the only equivalent mathematical feat I can recall is when Jethro Bodine, from the 1960s television show The Beverly Hillbillies, apparently proved that 2 + 2 = 5 for very large values of 2.

Zippy’s formula suffered from two fatal flaws.

First, the formula produced exactly what he expected. It did that because he built it that way. It’s not the only time I’ve seen someone design a measurement that produced exactly what he or she wanted, instead of the actual truth, but it was the first time I had seen it happen accidentally. Zippy’s formula awarded values to systems based on his personal perception of where they should rank. It looked quantitative and objective, but was really just another exercise in subjective numerology.

Second, it failed the most crucial test of all: relevance. OK, so we’re paying a certain dollar amount to provide some people with connectivity. So what? It doesn’t necessarily mean they are actually doing anything with it. Zippy violated the “three Rs” of measuring outcomes.

First, be relevant. Measure what’s important based on your organization’s mission and responsibilities, not its programs or activities. Address customer needs, not organizational processes. While bandwidth, system availability, and response time may be important indicators of system performance, they are mission outcomes only for telecommunications companies, not the Defense Department.

Second, be realistic. Don’t just generate numbers that compare relative values. Measure real things like tons of cargo delivered, cost reductions, units produced or lives saved. Your basic four yardsticks should be speed or time; cost; quality or customer expectations; and the impact of your deliverables on customers or the organization.

Third, be reasonable. Never make the measurement more complex than what you are trying to measure. If you can make a good gut decision knowing the cost, bandwidth, and scope of competing connectivity projects, don’t complicate things by trying to compare them using math best left to physicists.

New Math

Now that we’ve spent some time on what not to measure, here are some things we should measure.

- Percent change in product or service quality. Measure the outcome of the mission activity, not the IT that supports it. Does the technology we’re investing in help us project power around the globe? Does it, for example, help improve the accuracy of our shooters, help us defend ourselves from our opponents or enhance direct mission support?

- Percent change in work process cycle time. Automating a work process should radically reduce the amount of time involved with producing a unit of work. Flight planning, for example, has benefited greatly from automated processing. However, do not speed up a process simply to make it faster. If your current process can churn out 20 units an hour and your maximum need is only 2 per hour, you probably don’t need to spend a lot of money on it no matter how ugly it looks or how old it is.

- Percent change in time to deliver a product or service. This is slightly different from reducing processing cycle time. Think of this one in terms of delivering information from sensor to shooter. Every second we can shave from the time a sensor discovers a threat or target to the time we deliver that information to the gunner or pilot has value. It can also make a difference in mobility through “in-transit visibility”—the ability to tell what is on every ship, truck, or plane at any given time. Knowing what’s arriving in your port and when can help significantly reduce the time it takes to deliver the most important cargo first.

- Percent change in lifecycle costs. If providing a system or service cost you $1 billion last year and only $500 million this year, you’ve enabled the reallocation of $500 million to other mission activities (see the sketch after this list). Remember that the goal is still to provide good service while reducing costs, not just to cut personnel or budgets.

- Percent change in satisfied customers. This one is pretty easy. People either like the system or they don’t. If you make a change and they like it, you have probably helped them in one or more of the first four areas. Please note, however, that just because people use a system a lot doesn’t mean they are actually happy with it. One early attempt at automation was originally judged successful when the program manager announced that people were using the new system four times more than the old one. Unfortunately, this was because the system was so difficult to use that people made eight times as many errors. Listen when people complain; they probably have a good reason.
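Since each of these measures is a percent change against a baseline, the arithmetic is the same in every case. Here is a minimal Python sketch; the numbers are hypothetical except for the lifecycle-cost figures quoted in the list above.

    def percent_change(baseline, current):
        """Percent change from a baseline measurement to the current one."""
        return (current - baseline) / baseline * 100.0

    print(percent_change(1_000_000_000, 500_000_000))  # lifecycle cost: -50.0 percent
    print(percent_change(20, 26))                      # units produced per hour: +30.0 percent
    print(percent_change(0.62, 0.81))                  # share of satisfied customers: about +30.6 percent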

Where to Start

You can’t measure everything, so start with the important stuff. A major information system requires special management attention because of its importance to your mission; its high development, operating, or maintenance costs; or its significant role in the administration of your organization’s programs, finances, property, or other resources.

Specifically, a major information system will reflect one or more of the following characteristics:

  • Crosscutting. Those projects with shared benefit to and impact for more than one organizational component.
  • High Risk. Projects that by virtue of their size, complexity, or use of innovative technology have a high risk of failure.
  • High Return. Projects whose total potential benefits, in proportion to their overall cost, provide a significant mission value.

Start by selecting a few vital systems that are closely related to your core missions. It will be much easier to start a performance measurement program if what you are measuring has clear, direct links to your mission. Choose measures that are outcome-oriented, quantifiable, and demonstrate value.

Involve key stakeholders in the measurement process and educate them about performance measurements. Ask both system providers and users to tell you the value their system provides. If they can’t, or their answers don’t match, you have a problem…

Once you get good measurements for the first few systems, focus next on the most costly or troubled programs. Develop measures for them in the context of goal setting, management controls, plans, and budgets. This is where you will improve the value of performance measurement—not with systems where everyone understands their importance or systems with little mission impact.

After you have incorporated the mission systems and cleaned up the problem systems, then address what’s left. You will have had enough practice by this time to develop simple, effective measures that define success for a system based on one or two factors.

Most importantly, try to develop automatic measures that don’t require a lot of manual labor. Most organizations have some type of metrics or other periodic reporting system that reports key indicators of organization performance. Make your changes and watch these indicators. Don’t build a new measurement system if the existing one serves your needs.

Nothing will kill a measurement system faster than making large numbers of people spend a lot of time manually entering data they neither understand nor care about. Be prepared to accept the fact that a pet system just isn’t worth what you are spending and either eliminate, modify or replace it. That’s the whole point of measuring performance, after all.

Conclusion

In The Squandered Computer (1997), Paul A. Strassmann, former principal deputy assistant secretary of defense for command, control, communications and intelligence, wrote:

“The lack of correlation of information technology spending with financial results has led me to conclude that it is not computers that make the difference, but what people do with them. Elevating computerization to the level of a magic bullet of this civilization is a mistake that will find correction in due course. It leads to the diminishing of what matters the most in any enterprise: educated, committed, and imaginative individuals working for organizations that place greater emphasis on people than on technologies.”

Business computing is nearly 50 years old. For most of that time, information system performance measurement has been perceived as essentially a technical problem. While technical capabilities are easier to measure, what we must develop is greater understanding of the business and management value of information systems. These systems are the virtual conduits that carry information from person to person. How they operate has a tremendous effect on how effectively we carry out our missions.

The value is there. We must simply become better at describing it. The best example I can offer is that of President John F. Kennedy, who set this goal on May 25, 1961: “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.”

And we did. All the activities that contributed to the Apollo missions were validated by the outcome. The trick now is to make what IT does for us just as clear.

Major Long is the chief, Strategic Plans & Requirements Directorate of Communications and Information, HQ Mobility Command. He holds a Master of Science degree in Information Resources Management from the Air Force Institute of Technology.

The views expressed here are solely those of the author, and do not necessarily reflect those of the Department of the Navy, Department of Defense or the United States government.

