Trendspotter: Consumer Reports Report Card Opening the Floodgates?

September 15, 2010

Physicians have long opposed the publication of "report cards" on their clinical performance - especially when they're based on claims data, which are often outdated and erroneous. But Consumer Reports' publication of performance data on groups of cardiothoracic surgeons promises to open a new era of accountability for physicians.

The report card on coronary artery bypass graft procedures uses risk-adjusted data from a database compiled by the Society of Thoracic Surgeons (STS) over the past 21 years. None of the data pertain to individual surgeons, and participation in the report card is voluntary. Even so, the willingness of 20 percent of the surgical groups and programs that contribute to the STS registry to be publicly profiled is regarded as something of a breakthrough.

In an article in The New England Journal of Medicine, Timothy G. Ferris and David F. Torchiana applaud the decision by 221 U.S. cardiac surgery programs to make their data public, calling it "a watershed event in health care accountability." There are two reasons why the cardiac surgery groups were willing to release the information, the coauthors note: First, "policymakers, health care purchasers, and patient-advocacy groups" have been putting pressure on physicians for years to submit to public profiling. Second, STS and Consumer Reports present the data in a scientifically valid manner.

Using 11 performance measures endorsed by the National Quality Forum, the scorecard gives the cardiac surgery programs one, two, or three stars, depending on whether they’re below average, average, or above average. The performance thresholds are designed to ensure there is a 99 percent probability that the programs actually fall into one of these categories. In addition, there are star ratings and actual performance scores (on a scale from 0 to 100) in four subcategories. These include 30-day survival (e.g., patients have a 98 percent chance of surviving at least 30 days and being discharged from the hospital), complications, use of appropriate medications, and surgical technique.

The publication of these data raises the question of how consumers will use them. Only a small percentage of consumers look at existing report cards, and even fewer base their choices of providers on them. While the reasons are complex, they include lack of awareness, low health literacy, distrust of the data, patients' reliance on physicians to make referral decisions for them, and limited choices, such as the availability of only one cardiac surgery program in a particular area. In addition, a patient who needs an emergent procedure may not have the time or the ability to consult a report on local physicians or groups.

In cases where patients do have the time and the inclination, they might prefer information on individual surgeons to data on a group or program. But the publication of report cards on cardiothoracic surgeons in New York State does not support that thesis. A 2006 study showed that while CABG mortality rates were lower in top-rated cardiac surgery programs, they did not gain market share as a result of the report cards. Other studies suggest that some surgeons have avoided sicker patients or members of racial minorities to raise their ratings.

Report cards on primary-care physicians can also be misleading because of poor data and inadequate sample sizes. And, like the New York report cards on cardiothoracic surgeons, they might have unanticipated adverse consequences. For example, when one Michigan plan profiled primary-care doctors, some excellent doctors received low overall ratings because they might not have provided (or documented that they'd provided) smoking cessation counseling. A couple of physicians also told me that they feared some of their colleagues would avoid noncompliant patients to raise their scores.

If public scorecards are to be fair and to have the desired impact on consumers, they should be applied only to groups of physicians - a strategy that will become more effective as more and more doctors join larger groups. This will reduce the fear factor among doctors, and consumers can also be converted if they’re given financial incentives to choose doctors in higher-quality groups.

In addition, report cards should be based only on risk-adjusted clinical data culled from electronic health records (EHRs). Again, this will be more realistic when a majority of physicians have EHRs.

Groups should be required to submit the performance data. Eighty percent of the STS registry contributors did not participate in the CABG report card, and it’s likely that many of them are lower-performing groups. The public would be better served - and quality would improve - if consumers knew which groups those are.

Finally, the report cards must contain information that is simple enough for most consumers to understand, yet specific enough to be meaningful to them. This is an area where much more research is needed - as a glance at CMS’ Hospital Compare and other public report cards clearly indicates. Here Consumer Reports is the expert, and the kind of performance data contained in its report card on cardiothoracic surgeons looks like a winner.