EHR Superstars and the Rest of Us: Beware the Sampling Error

May 30, 2011


Barring a disability, everyone can walk. But climbing to the summit of Mt. Everest involves walking, and then some.

Not everyone can do it, and of those who could, not everyone is willing to make the commitment and exert the effort. Of those who try, some die in the attempt, often as a result of faulty decision making. Those who reach the summit and return to tell about it are genuine superstars.

Imagine that you are a space alien and you want to understand the limitations of human walking. If you capture a group of those who have scaled Mt. Everest, any study results or conclusions that you generate might be very interesting, but they would not tell you much that is relevant to understanding the limitations that the average human encounters when walking. Your sampling error would skew your results, and you would conclude that there were few limitations.

Any physician who has done research or read research papers is aware of sampling error. It is a criticism often raised against scientific studies, and it is often justified. Even with good planning and statistical consultants, it is easy to design a study that has residual sampling and technique errors. This is compounded when there are factors the researchers were not aware of, factors that would have influenced the study design had they only known.
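The bias is easy to demonstrate with a few lines of code. The sketch below uses a hypothetical "ability" score and an arbitrary cutoff (not real data): it draws a large simulated population, then estimates average ability from only those who cleared the bar, the equivalent of sampling only the Everest summiters.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical "ability" scores for a large population of attempters.
population = [random.gauss(30, 10) for _ in range(100_000)]

# A biased sample: only those who cleared a very high bar (the "summiters").
superstars = [score for score in population if score > 55]

pop_mean = sum(population) / len(population)
star_mean = sum(superstars) / len(superstars)

print(f"population mean: {pop_mean:.1f}")   # near 30
print(f"superstar mean:  {star_mean:.1f}")  # far above the population average
```

Conclusions drawn from the biased sample overstate what the average attempter can do, which is exactly the error the space aliens, and the policymakers below, would make.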

A great example is that of a biologist studying limb regeneration in amphibians who produced some truly extraordinary results. The problem was that no one could reproduce the results, no matter how closely they followed the published techniques and methods. One biologist was so perplexed that he went to observe the procedure in person. The bottom line: the original researcher was a chain smoker and had been blowing tobacco smoke on the preparations throughout the study, totally oblivious to the fact that it might affect the outcome or even that it should be mentioned.

Let's recall that many physicians have spent the better part of the past 20 to 30 years attempting to implement and adopt EHRs. Many have made multiple attempts, spending vast amounts of time and money. Of this group, a few have had enough success to be noteworthy. These are the superstars.

When "slow adoption" is questioned, what is being asked is why ordinary doctors haven’t done what the superstars have done. The answer is, of course, that they are not superstars. They have watched the superstars and noted the number of failed attempts and the expense of the "successes" and correctly concluded that the technology was not ready for prime time.

The current program to incentivize the adoption of EHRs is based on the kind of data that the space aliens would have gathered from the Mt. Everest climbers: data suggesting that success may take a bit of work but is basically no problem. Will a bribe make someone successful when there is no data that would predict success? Would $40,000 induce an average person to climb Mt. Everest? Would they succeed if they tried? It is more likely that they would die in the attempt or be sent back from the first base camp.

The expectations of the legislators and the politically connected IT types fanning the EHR flame are based on data flawed by sampling error. They have concocted a program on the assumption that only $40,000 or so stands between the average doctor and superstar success. It would be more productive to direct the money toward studying the failed attempts to identify their root causes: the intrinsic flaws in the programming languages, databases, and application-development tools that have led so many bright developers to produce software that has failed so badly in spite of their best efforts.

Learn more about Daniel Essin and our other contributing bloggers here.