Today's computers, including EHR systems, often can't tell "no" from "not applicable" when selecting data for quality reports. Is this really helpful to physicians?
Last week I discussed the fact that today's computers can only handle very simplistic logic. This is unfortunate because the questions that arise in medicine are rarely so simple that they can be correctly represented by a single true or false. It is, of course, possible to "work around" this limitation with a whole series of true/false questions. Does the patient smoke? Did you ask whether the patient smokes? Did the patient answer something other than yes or no? If so, what was it? Did the patient refuse to answer? Was the patient capable of answering?
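The difference between a single yes/no flag and the series of follow-up questions above can be sketched in a few lines of code. This is purely illustrative; the category names are hypothetical, not taken from any real EHR:

```python
from enum import Enum

# A single yes/no flag collapses every ambiguous situation into two values.
smokes: bool = False  # does False mean "non-smoker", or "never asked"?

# One illustrative alternative: an enumeration that preserves the
# distinctions the follow-up questions are trying to recover.
class SmokingStatus(Enum):
    YES = "smoker"
    NO = "non-smoker"
    REFUSED = "patient refused to answer"
    UNABLE = "patient unable to answer"
    NOT_ASKED = "question never asked"

record = {"smoking": SmokingStatus.NOT_ASKED}

# The boolean and the richer value disagree about what "false" means:
assert smokes is False
assert record["smoking"] is not SmokingStatus.NO
```

The point is not that this particular set of categories is correct, but that a lone true/false cell has nowhere to put any of them.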
Most of these nit-picking questions are rarely important to a physician reading a chart, since the answers can usually be inferred from other portions of the note or from other chart entries. Today's computers, however, are not capable of this sort of inference, and because the ambiguity that may attend a simple question about smoking cannot be captured in a single datum, it is, for the most part, simply discarded.
For analyses and reports to be meaningful, the denominators must be not only accurate but understood. With a simple yes/no smoking question, determining the denominator can be a challenge. If any of the supplemental questions had yielded a remarkable answer, that patient might or might not have belonged in the denominator. Since those extra questions are rarely asked, ambiguity usually ends up recorded as a no. Is this appropriate? Who would ever know?
When it comes time to prepare a report, the computer will always produce one. It will be dutifully submitted. The fact that the requirement to submit a report was fulfilled will be noted. The reported results will be assumed to mean whatever they were supposed to mean. Decisions will be made and actions initiated, and whether they work out as expected or not, it will be difficult to know whether the outcome was the result of chance or of accurate information.
The problem is not restricted to yes/no questions. Computers store data in "cells" within memory. Each type of data that the programming language allows uses a pre-defined number of cells to hold each datum; an integer, for example, may use two cells.
Let's consider a data element such as body weight that will be stored as a decimal. When the program starts, the memory cells allocated for body weight must either be initialized to contain something or nothing. There is obviously no rationale for initializing the cell to something, because how would you decide what that something should be: 10, 50, or 100? The seemingly logical choice is to initialize the body weight memory cells to nothing, but nothing in a decimal cell is indistinguishable from zero.
But wait - no patient has a weight of zero! The cell should be initialized to "NOT WEIGHED" but it can't be - this is a decimal cell.
Instead, this is assumed not to be a problem because, when the patient is weighed, an actual weight will be entered and the cell will no longer contain a zero.
But wait again. What if, for some reason, the patient is not weighed? Weighing may not have been indicated. Perhaps the patient weighed more than the capacity of the scale, or was in a wheelchair and could not stand on it. Under these conditions, when that record is stored, the body weight memory cells will still contain that initial zero, but what does it mean? It's anyone's guess.
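Many modern languages and databases do offer a "null" that is distinct from zero, but a plain decimal cell, as described above, cannot carry it. A minimal Python sketch of the two situations (the field names and the 0.0 sentinel convention are assumptions for illustration, not any vendor's design):

```python
from typing import Optional

# A numeric field initialized to "nothing" is indistinguishable from zero.
weight_kg: float = 0.0  # was the patient weighed, or do they weigh 0 kg?

def was_weighed(w: float) -> bool:
    # Forced to treat the sentinel 0.0 as "not weighed": a guess, not a fact.
    return w != 0.0

# A nullable field, by contrast, keeps "no measurement" distinct
# from every possible number, including zero.
weight_or_none: Optional[float] = None

assert not was_weighed(weight_kg)   # zero is ambiguous
assert weight_or_none is None       # None unambiguously means "not weighed"
weight_or_none = 72.5               # a real measurement replaces None
assert was_weighed(weight_or_none)
```

The guess embedded in `was_weighed` is exactly the kind of silent assumption that later turns a denominator into mush.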
By the way, those true/false cells are all initialized to false when the program starts, so each one that never gets a "yes" entered into it is stored as false. Again, no one will ever know whether the question was asked and the answer was no, whether the question was never asked, or whether the clinician simply forgot to make the entry.
Lurking ambiguity turns denominators into meaningless mush. That is no reason not to comply with reporting requirements, but it may be a good reason not to impose them at this time. It is also a good reason to be very skeptical of "quality" reports and the so-called evidence that is behind some "evidence-based" medicine.
Find out more about Dan Essin and our other Practice Notes bloggers.