Published medical practice performance benchmarks are appealing for several reasons. They:
• Quantify performance;
• Support an objective measurement of a performance gap between a given medical practice and the best performers; and
• Demonstrate achievable outcomes.
What’s not to like? The problem is not with the published performance benchmarks themselves, but with the presumption that they can stand alone. Performance measurements without context are useless, and relying solely on the reported data is ill-advised.
Benchmarks vs. Benchmarking
In an ideal world, performance benchmarks are used in conjunction with the process of benchmarking, which Wikipedia defines as "the process of comparing one's business processes and performance metrics to industry bests and/or best practices from other industries."
The principal activities in benchmarking are:
1. Identify problem areas or unsatisfactory results in a subject practice.
2. Document the current related environment and processes in the subject practice.
3. Identify other industries or medical specialties that have similar processes or issues.
4. Identify the leaders, i.e., those practices with the most favorable benchmarks with respect to the areas or results of interest.
5. Study the leaders to learn what they are doing.
6. Compare the leaders' environments and processes to those of the subject practice.
7. Decide what changes the subject practice is willing and able to make.
8. Implement the acceptable changes.
Notice that the benchmarks themselves serve only one purpose in this process: identifying medical practices from which something might be learned.
Characteristics That Limit the Utility of Benchmarks
Self-selection — When performance measure reporting is voluntary, the practices that choose to report are those that are proud of their numbers. Such practices are typically atypical in one or more significant respects.
The illusion of simultaneous excellence — Sets of benchmarks give the impression that it is possible to excel in every measure at once. In reality, exceptional performance in one measure often comes at the expense of less favorable results in others, and benchmarks, by themselves, do not illuminate the requisite tradeoffs.
Data vs. information — The numbers give no clue as to HOW a practice is achieving its results. The metrics by themselves do not supply actionable information, and the specific elements of superior performance are often counterintuitive. There is evidence, for instance, that the most productive practices have more staff per physician than average.
Improving Performance Numbers Requires More Than A Declaration
Declaring that a medical practice will "achieve performance measures at no less than 80 percent of the top scores reported" assumes that the practice's performance metrics can be improved by fiat; staff just needs to make it happen. Significant improvement, however, also requires sustainable, thoughtful change to environment and process. If the only thing that changes is the expectation, failure is inevitable.
Similarly, arbitrarily staffing to the "industry standard" without understanding how the situation and priorities of a particular medical practice may be unique is too simple an answer to a complicated question.
Benchmarks are useful for identifying what is possible. If the practices with the best scores can be identified, studying them can be enlightening. Reported benchmarks are not actionable in and of themselves, however. Improving any performance measure requires thoughtful analysis and sustainable changes in operations. Simply imposing published performance measures upon a particular practice is not effective; it produces blame and frustration instead of improvement.
Find out more about Carol Stryker and our other Practice Notes bloggers.