Monday, September 5, 2011

Measuring Performance in Government Research Agencies

(A note retrieved from my fav portal) - for future reference:

Private companies exist to make money for their shareholders, owners, and employees. They may have other purposes, but profitability is always a primary goal and an indicator of success. Profitability is closely related to Return on Investment (ROI) and to productivity: revenues minus total cost of ownership (TCO). These figures are tracked carefully, and because they are all measured in the same currency units, managers can determine the net worth of the company at any time.
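As a rough sketch of the private-sector bookkeeping described above (all dollar figures here are hypothetical, chosen only for illustration), the metrics reduce to simple arithmetic in a common currency unit:

```python
# Illustrative sketch of the private-sector metrics described above.
# All figures are hypothetical.

revenues = 1_000_000.0               # annual revenues ($)
total_cost_of_ownership = 800_000.0  # TCO: all costs of doing business ($)

profit = revenues - total_cost_of_ownership
roi = profit / total_cost_of_ownership  # return on the money spent

print(f"profit: ${profit:,.0f}")
print(f"ROI: {roi:.1%}")
```

Because every quantity is in the same units, profit and ROI are directly comparable across projects; this is exactly the common yardstick that public organizations lack.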

Public and not-for-profit organizations exist to support a particular mission. In the case of the Government, there are some 'inherently governmental functions' that are defined or implied by statute and may not be performed by the private sector. Although they may be 'protected' and even subsidized, in this era of deregulation and downsizing of Government, many agencies have a growing concern about their 'competition'. Public organizations often face opportunities and competitive threats just as private companies do (except for police, armed forces, and a few other organizations that are still protected from competition by law).

So most Government agencies must create a credible impression that they are the best at what they do, and that they do it efficiently. For if that is not the perception, then their sponsors or customers may go elsewhere for their services, leaving the organization to wither and die.

The Government Performance and Results Act requires all Government agencies to measure their performance using defined metrics. Systems such as the Balanced Scorecard attempt to define metrics that provide leading indicators of the health of any organization, public or private. But because public organizations do not have financial results as the primary indicator of success, some other metric must take its place.

I suggest that these metrics are mission effectiveness and efficiency: accomplishing the mission in the best way, with minimum time and cost. Success is indicated by the extent of their effectiveness and efficiency in the eyes of their customers and sponsors.

For example, in the case of a public electrical utility, effectiveness would be the fraction of time that the electric grid is powered. In the case of police, effectiveness would be measured by the crime rate. In the case of the Social Security Administration, it would be the reliability of fund distributions. These are cases where the mission is clear, and its effectiveness is relatively easy to define and measure.
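Taking the utility example, the metric is just an availability fraction. A minimal sketch (the outage figure is hypothetical):

```python
# Hypothetical example: effectiveness of a public electrical utility,
# measured as the fraction of time the grid is powered.

hours_in_year = 8760        # 365 days * 24 hours
outage_hours = 6.5          # hypothetical total outage time for the year

availability = (hours_in_year - outage_hours) / hours_in_year
print(f"grid availability: {availability:.4%}")
```

The same pattern (favorable outcomes divided by total opportunities) covers the other clear-mission cases, such as the fraction of Social Security distributions delivered correctly and on time.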

Performance Measurement Challenges in Defense R&D Labs

The task is somewhat harder for military agencies, where effectiveness can only be truly evaluated in times of war. Other substitute metrics, such as effectiveness in mock battles or effectiveness of weapon systems in tests, may give some indication of probable effectiveness, in terms of 'readiness'.

Scientific research is an area for which effectiveness is extremely difficult to assess. Products of basic research often emerge only after many years of development. Most of the time, such research leads to nothing of significance, but occasionally there is a fundamental breakthrough in knowledge that opens up new vistas of technology, such as the transistor or the laser. The only way that has been found to evaluate the quality and 'performance' of such research is by peer review. This is labor-intensive and still cannot identify long-term potential effectiveness in most cases.

Even in the case of applied research focused on specific products, there is little ability to guarantee future success. In the case of ships, for instance, the development cycle is 10 to 15 years. There is an inherently long cycle time for evaluation of their effectiveness as weapon systems, even if only in terms of readiness.

Military R&D labs are in the worst of both worlds now: they are doing many missions that are not inherently governmental functions, so these missions are not sheltered from competition by law. On the other hand, they are doing missions for which it is extremely difficult to measure effectiveness. And if effectiveness is difficult to measure, then how do we know whether they are being done well? And how do we know what should be improved? This is a profound challenge facing the military research labs at this juncture.

Despite this inability to measure effectiveness, it is evident 'by inspection' that current US military superiority is no accident; the military research labs are historically the source of most of the innovations that lead the world in military technology. So although the effectiveness of these labs is difficult to measure, it is probably satisfactory. The 'worst of both worlds' predicament may not be as problematic as it seems. On the other hand, we don't want to rest on our laurels. Global competition is rapidly growing -- witness the recent nuclear tests by two additional nations.

But there is still another way to view the whole matter of Government R&D lab performance.

Efficiency as Effectiveness

Is effectiveness of R&D work also dependent on the efficiency of that work? This seems to be the case. For instance, if a single process cycle time (an efficiency metric) is shortened, there will be a shorter feedback loop around the entire development process. This can increase effectiveness because outcomes and feedback are obtained more quickly, so more decisions and revisions can be made to improve the product without adding time or cost. In fact, if we consider the lab's effectiveness by itself (without combining it with product effectiveness), then increasing efficiency, i.e. productivity, probably directly increases the effectiveness of every aspect of the product design. In other words, in the government lab context, efficiency is effectiveness.

If this is the case (or even partly so), then we have arrived at an easy way to predict long-term effectiveness: measures of productivity and efficiency, such as process cycle time, process cost, fraction of value-added time, rates of information transfer, etc. Most of these metrics are easy to measure; the most basic one is process speed, or process cycle time. This will drive process cost and affect other aspects of the mission.
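The metrics named above can be sketched with a toy model (all figures hypothetical): within a fixed schedule, a shorter cycle time directly buys more revision iterations, which is the mechanism by which efficiency becomes effectiveness.

```python
# Sketch of the efficiency metrics named above, with hypothetical figures.
# A shorter process cycle time permits more design-revision iterations
# within the same fixed schedule.

cycle_time_days = 10.0    # one review/revise cycle (hypothetical)
value_added_days = 3.0    # time per cycle spent on actual mission work
schedule_days = 120.0     # fixed overall project schedule

value_added_fraction = value_added_days / cycle_time_days
iterations = schedule_days // cycle_time_days

print(f"value-added fraction: {value_added_fraction:.0%}")
print(f"revision iterations within schedule: {int(iterations)}")

# Halving the cycle time doubles the iterations at no added schedule cost:
print(f"with 5-day cycles: {int(schedule_days // 5.0)} iterations")
```

Cycle time is the easiest of these to measure directly, which is why the text treats process speed as the most basic metric.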

Hence we arrive at the hypothesis that mission effectiveness is directly correlated to process speed (all other things being equal).

Measuring and Improving Efficiency

Process speed in the laboratory environment includes not only mission processes, but also the ordinary, "routine" support processes such as meetings, memo writing, report distribution, mail handling, etc. And there is a lot of potential 'fat' to cut here: one measure is the level of overhead, i.e. the ratio of mission (direct) funds to support funds. Currently that ratio is about 130%. This is a key efficiency metric that cannot be reduced significantly by across-the-board RIFs (reductions in force) or budget cuts. It will yield to business process improvements, though.
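The overhead metric works out as follows; the ~130% ratio is the figure from the text, while the funding amounts are hypothetical units chosen to match it:

```python
# The overhead metric described above: mission (direct) funds relative
# to support funds. The ~130% ratio is from the text; the amounts are
# hypothetical units chosen to match it.

direct_funds = 130.0   # mission (direct) funding
support_funds = 100.0  # support (overhead) funding

direct_to_support = direct_funds / support_funds
support_share = support_funds / (direct_funds + support_funds)

print(f"direct-to-support ratio: {direct_to_support:.0%}")     # ~130%
print(f"support share of total budget: {support_share:.1%}")
```

Tracked over time, a rising direct-to-support ratio (or falling support share) would indicate that process improvements are actually shrinking overhead relative to mission work.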

Many 'best practices' of the private sector for such processes are available for emulation by the Government. Candidates for process improvement may be identified in several ways:

  1. By inspection - simple observation and comparison of practices may be sufficient to uncover significant shortcuts in antiquated processes that have never been scrutinized before. This method is quick and simple, and can be fully sufficient for picking the 'low-hanging fruit', i.e. the old processes that have never been reengineered.
  2. By measurement - baseline data can be collected and 'benchmarked' against private-sector work of the same generic type, such as engineering review meetings. This method has the advantage of providing a 'business case' or ROI calculation, which may aid in providing an incentive to managers to pursue improvements.
  3. By BPR analysis - flowchart the AS-IS process, and examine the chart for opportunities for simplification before adding automation and technology. This is the 'traditional' approach, but it can consume substantial resources before anything has been improved, so it should be used sparingly.

A Caveat

We have argued that cost reduction is a key goal of government research labs, as well as many other nonprofit organizations. However, this does not automatically imply that ROI calculations will be accepted and used as the basis for prioritizing improvements, as commonly occurs in the private sector. In the private sector, financial benefits are all equivalent at the 'bottom line'; once they can be estimated, it is a straightforward task to identify where to work on efficiency improvements. In the government economy, however, there are at least two kinds of money: actual expenditures for support (overhead) and potential savings based on improvements. Even though the potential savings may show a large ROI, implementing the changes still requires spending actual overhead dollars. And since productivity or total cost of ownership (TCO) for direct-funded or mission-related work is difficult to determine, the cost reductions remain only potential, or 'invisible', within the agency. Once again, we may be stymied by the nonprofit economy. Hence it may take courage and leadership to spend a little real overhead in order to save a lot for the taxpayers.
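The 'two kinds of money' problem can be made concrete with hypothetical numbers: an improvement can show a compelling ROI on paper while the agency's actual overhead budget cannot absorb the implementation cost, because the projected savings are not spendable cash.

```python
# Hypothetical illustration of the 'two kinds of money' problem above.
# All figures are invented for illustration.

implementation_cost = 50_000.0        # real overhead dollars required
projected_savings = 200_000.0         # 'invisible' savings in mission work
overhead_budget_available = 30_000.0  # actual discretionary overhead

roi = (projected_savings - implementation_cost) / implementation_cost
affordable = implementation_cost <= overhead_budget_available

print(f"projected ROI: {roi:.0%}")              # looks compelling on paper
print(f"fundable from real overhead? {affordable}")
```

A private firm would fund this improvement without hesitation; the government lab may not be able to, which is precisely the courage-and-leadership gap the paragraph describes.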

Conclusion

Despite the fact that it is difficult or impossible to measure long-term effectiveness, it is hypothesized that improvements in these "mundane" processes will accelerate decision-making speed and productivity, and hence mission effectiveness. Granted, I have no proof of this hypothesis. But let's not let the trend toward performance metrics preclude us from using our basic engineering judgment. After all, the ultimate issue is not how to measure performance, but how to improve it. The Balanced Scorecard can help raise visibility within an agency so that its leaders can make decisions about improvements with increased confidence that they will affect the total cost of ownership.
