EFFICIENCY: Know Yourself

February 1, 2006

Anyone can step on a scale, but knowing how to interpret the number staring back at you is a little tougher. Understanding how your practice is faring requires knowing a little about how the other guys are doing. That's where benchmarking comes in.


Anybody can step on a scale. The hard part is knowing whether your weight is the best you can hope for and what to do about it.

The same is true in business. In the grind of day-to-day operations, you might know whether you are making or losing money but have difficulty carving out time for the self-examination that informed practice management requires.

Savvy practices use performance data, often called benchmarks, to identify where they are and where they’ve been on key measures like number of patient visits, gross charges by group and by physician, relative value units (RVUs), collection ratios and even clinical outcomes. It’s all part of understanding themselves and figuring out strengths and opportunities for improvement. But a number alone isn’t enough.

“Benchmarking identifies changes in an organization,” says David N. Gans, director of practice management resources with the Medical Group Management Association (MGMA), Englewood, Colo.

Gans stresses that for benchmarking to be useful, the practice has to clearly identify what it wants to measure. For example, an analysis of collections might target the age of accounts receivable, the percentage of accounts collected by payer, and the total percentage uncollected.
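As a hedged illustration of the kind of collections analysis Gans describes, the sketch below computes two common measures, days in accounts receivable and an aging breakdown, from a hypothetical practice snapshot (all figures and bucket names are invented for the example):

```python
# Hypothetical collections snapshot for a practice; all figures invented.
total_ar = 180_000.00              # outstanding accounts receivable, dollars
annual_gross_charges = 1_460_000.00

# Days in A/R: how many days of average daily charges sit in receivables.
days_in_ar = total_ar / (annual_gross_charges / 365)

# Aging buckets: dollars outstanding by age of the account, in days.
aging = {"0-30": 95_000, "31-60": 40_000, "61-90": 25_000, "90+": 20_000}
aging_pct = {bucket: amt / total_ar * 100 for bucket, amt in aging.items()}

print(f"Days in A/R: {days_in_ar:.0f}")
for bucket, pct in aging_pct.items():
    print(f"  {bucket} days: {pct:.1f}% of A/R")
```

Tracked month over month, a rising days-in-A/R figure or a growing 90+ bucket signals exactly the kind of collections trend Gans says a practice should watch.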

Elizabeth Woodcock, industry speaker and author, agrees, and counsels practices to think carefully about their benchmarking expectations. “Decide on a dozen performance indicators,” she advises. “These will depend on the particulars of your practice. For example, a practice with multiple services, such as lab ancillaries, nursing homes or home visits needs to have a service line analysis as part of their indicators, measuring both volume and profit.”

Compare inside and out

A number, no matter how carefully derived, however, is not a benchmark until the practice has something to compare it to. At Southern Indiana Pediatrics (SIP), a 12-physician, three-location practice in Bloomington and Bedford, Ind., chief operating officer Sandra DeWeese each month tracks total charges, receipts, overhead, hours worked, overtime and patient volume on a per-physician and per-practice basis.

“We watch trends,” DeWeese remarks. “A one-time measurement misses subtle changes.” One not-so-subtle recent change, she adds, is Indiana’s shift of its entire Medicaid recipient population into managed care. Because 43 percent of SIP’s patients receive Medicaid, the practice will conduct multipoint reviews of charting to ensure proper reimbursement.

A practice that wants to grow can use internal benchmarking measures to identify opportunities. “Good business measures to identify growth, or the lack thereof,” Woodcock notes, “include traditional indicators such as overhead, collections and receivables, in addition to other key indicators such as appointment access, new patient appointments as a percentage of total appointments and surgery collections as a percentage of total collections.”

A practice that understands its internal workings may also wonder how it measures up against the rest of the world. Although MGMA’s Cost Survey and Compensation and Production Survey are considered industry standards and provide a useful set of national benchmarks, they cannot, by definition, account for all individual practice variations and differences within markets.

“To get comparable practice benchmarks on a geographic basis, you’re going to have to pay a consulting firm,” adds physician practice management specialist Joan M. Roediger, JD, LLM, a partner with Philadelphia-based Obermayer Rebmann Maxwell and Hippel. “It’s difficult from the management side to get physician buy-in on that, particularly in small practices.”

In Roediger’s view, benchmarking should not be looked at as a “precise tool. It’s not going to be so specific as to measure XYZ practice down the hall.” Instead, she advocates using the MGMA reports to begin, then checking with local specialty societies, calling local real estate agents if measures such as rent per square foot are a concern, and even monitoring job posting Web sites such as Monster to get a sense of the salary and benefit market for various staff positions.

Suzanne Houck, president of Houck & Associates, Inc., a health care consulting firm in Boulder, Colo., says she asks practices, “Where do you hurt?” The answer may be obvious - appointment and office wait times may be unacceptably lengthy - but having a benchmark could help solve the problem.

“The average number of patient visits per year is 2.3 to 2.5,” Houck says. “Higher numbers are a red flag for inefficiencies.” Practices looking at wait times should also be aware of the per-provider clinical and support staff benchmark of 1.6 to 1.7, she adds.
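Houck’s two thresholds can be turned into simple red-flag checks. The sketch below does so; the practice figures passed in are invented for illustration, and the cutoffs are the ones quoted above:

```python
# Red-flag checks against the benchmarks Houck cites; practice figures
# below are hypothetical.
VISITS_PER_PATIENT_HIGH = 2.5        # above this, a flag for inefficiency
STAFF_PER_PROVIDER_RANGE = (1.6, 1.7)

def flag_visits(total_visits: int, unique_patients: int) -> bool:
    """True if annual visits per patient exceed the benchmark ceiling."""
    return total_visits / unique_patients > VISITS_PER_PATIENT_HIGH

def flag_staffing(staff: float, providers: float) -> bool:
    """True if clinical/support staff per provider falls outside the range."""
    low, high = STAFF_PER_PROVIDER_RANGE
    return not (low <= staff / providers <= high)

print(flag_visits(14_000, 5_000))    # 2.8 visits per patient: flagged
print(flag_staffing(20, 12))         # about 1.67 staff per provider: in range
```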

Productivity problems

Achieving an “apples-to-apples” comparison is especially critical when practices attempt to establish benchmarks for physician productivity.

For example, physicians at the University of Colorado School of Medicine get about half of their salaries from patient care activities and must meet productivity standards, according to John A. Sbarbaro, MD, MPH, medical director for University Physicians, Inc. (UPI), in Aurora, Colo., which represents the physicians. They must also be promoted within seven years or leave.

“We can ask physicians, ‘Are you meeting your productivity standards?’” Sbarbaro says, but adds that the question is meaningful only if the productivity standards have been established so that physicians are measured “in a fair and equitable way.”

Ariel B. Alfonso, manager of managed care analysis at UPI, explains that the group quantifies each physician’s work RVUs, which measure the complexity of procedures specific to a CPT code, and take into account the physician’s skill, effort, and education.

Data from the Faculty Practice Solution Center (FPSC) in Chicago, a consortium that collects RVU data by specialty from about 100 U.S. medical schools, provide the benchmarks UPI uses to assess its physicians’ productivity, Alfonso says. Using FPSC data, he adds, also ensures that UPI physicians are measured against peers in academic, not private, institutions.

To account for regional variance in the operating costs of medical practices, UPI applies the Geographic Practice Cost Index (GPCI) to procedures for Medicare patients. Each service’s RVU is divided into work, practice expense, and malpractice components, and each component is adjusted by the GPCI for the locality. The adjusted total is multiplied by a uniform Medicare conversion factor to establish a reimbursement benchmark for each market.
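The Medicare fee schedule computation UPI draws on can be sketched as follows. The structure (component RVUs scaled by locality GPCIs, then multiplied by the conversion factor) matches the standard Medicare physician fee schedule; the specific RVU, GPCI, and conversion-factor values below are invented for illustration, not actual CMS figures:

```python
# Medicare physician fee schedule payment: each RVU component (work,
# practice expense, malpractice) is scaled by its locality's GPCI, and
# the adjusted sum is multiplied by the conversion factor.
def medicare_allowed(work_rvu, pe_rvu, mp_rvu,
                     work_gpci, pe_gpci, mp_gpci,
                     conversion_factor):
    adjusted = (work_rvu * work_gpci
                + pe_rvu * pe_gpci
                + mp_rvu * mp_gpci)
    return adjusted * conversion_factor

# Hypothetical office visit in a locality with slightly above-average
# practice costs (all values illustrative only).
payment = medicare_allowed(work_rvu=0.97, pe_rvu=0.72, mp_rvu=0.05,
                           work_gpci=1.00, pe_gpci=1.05, mp_gpci=0.95,
                           conversion_factor=37.90)
print(f"${payment:.2f}")
```

Running the same service code through each locality’s GPCIs is what lets UPI build a per-market reimbursement benchmark.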

Alfonso and Sbarbaro are quick to point out that although they consider RVUs the gold standard for measuring productivity, they also consider the practice patterns of each physician.

For example, a UPI physician’s patient mix will go a long way toward determining his or her revenue, because physicians receive varying levels of reimbursement. Payer mix can account for large variations in the revenue produced by UPI physicians, but it is not necessarily indicative of productivity, which is measured by RVUs and the evaluation and management (E&M) services generated by the physician, regardless of reimbursement source.

If information is power, DeWeese recommends seeking it and sharing it wherever and whenever possible. “In benchmarking for chronic disease, insurers and employers often have the best information,” she says. “We have a close relationship with the University of Indiana, and can call them and ask for data to help us understand how we’re doing.”

Gans concludes that “benchmarking doesn’t stop with an analysis of the practice or the market. The next step is to ask, ‘Do we need to take action? How?’” Or, as Houck puts it, “What gets measured gets done.”

Tyler Smith is a freelance writer in Denver, Colo. He can be reached at editor@physicianspractice.com.

This article originally appeared in the February 2006 issue of Physicians Practice.