There is a growing group of knowledgeable people who are fed up not only with EHR software but with software in general, and many of them are quite vocal about it.
• Margalit Gur-Arie's complaints can be summed up as: "The user interface in any software product is the easiest thing to get right. All you need to do is apply some basic principles and tweak them based on talking to users, listening and observing them… [Usability] doesn’t really matter… it’s not about the buttons and the clicks, it’s about what the buttons do."
• Fletcher Penny, MD, an ER doc and programmer, says: "Watching various hospitals spend money on this [EHR] crap, and force physicians and other healthcare workers to use it causes a small piece of me to die inside…." He then goes on to implore vendors to improve their user interfaces.
• Among the scientific community there is growing concern with software defects. As a result of a small program error, "a number of papers had to be retracted from the journal Science." Leslie Hatton, a professor of forensic software engineering, says that software defects "arise from many causes, including: a requirement might not be understood correctly; the physics could be wrong; there could be a simple typographical error in the code, such as a '+' instead of a '-' in a formula; the programmer may rely on a subtle feature of a programming language which is not defined properly, such as uninitialized variables; there may be numerical instabilities such as overflow, underflow, or rounding errors; or basic logic errors in the code. The list is very large. All are essentially human in one form or another but are exacerbated by the complexity of programming languages, the complexity of algorithms, and the sheer size of the computations." The article concludes by questioning "whether we can develop better tools for catching [software errors] before they do serious damage."
• In an IEEE Spectrum interview, software usability guru Jakob Nielsen says that "Microsoft’s new OS takes a giant step backward." He cites “hidden features, reduced discoverability, cognitive overhead from dual environments, and reduced power from a single-window UI and low information density” as contributing factors. "It … stems from their big mistake, which is to try to have … a single [window approach to all software] … because a single window works perfectly on a phone." This is essentially another vote for the view that the user interface is the root of all evil.
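Hatton's defect classes are easy to reproduce in miniature. The sketch below (my own illustration in Python, not drawn from the article) shows two of them: a one-character sign typo in a formula and an accumulated floating-point rounding error. Both run silently to completion and produce a wrong answer, which is precisely why such defects survive into published results.

```python
def variance(xs):
    """Population variance, written correctly."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def variance_buggy(xs):
    """Same formula with Hatton's one-character typo: '+' where '-' belongs."""
    mean = sum(xs) / len(xs)
    return sum((x + mean) ** 2 for x in xs) / len(xs)  # BUG: should be (x - mean)

data = [2.0, 4.0, 6.0]
print(variance(data))        # ≈ 2.667, the correct value
print(variance_buggy(data))  # ≈ 66.67, wildly wrong, yet no error is raised

# Rounding error: ten additions of 0.1 in binary floating point
# do not sum to exactly 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)          # False
```

Neither defect triggers an exception or a compiler warning; only an independent check of the output would catch them, which is the point of the article's closing question about better tools.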
One can hardly deny that software developed without a clear idea of who will use it and what they will try to do with it is likely to disappoint. It is equally hard to deny that a poor user interface may create an obstacle to use even if the underlying software can perform some useful function. Several of the commentators observe that these points have long been recognized, yet mere awareness does not seem to have had a noticeable effect. As hard as developers have worked to do better, they have consistently failed.
The root causes must lie elsewhere. The writer Moya K. Mason, in "What Can We Learn from the Rest of the World? A Look at International Electronic Health Record Best Practices," observed that "In reality, very few EHR systems are installed and functioning around the world, if we exclude those used primarily for billing and ordering prescriptions." The first barrier she identifies is the variety of information types that medical records contain (e.g., images, unstructured text, numeric data), structured in a combination of time-oriented, source-oriented, and problem-oriented ways. Add to that the often ambiguous or approximate nature of the information, and the result is a set of qualities that prove very difficult to capture or represent in the typical computer system.
Clem McDonald summed it up well in 1997 [^J Am Med Inform Assoc. 1997;4:213–221]: "For the ultimate medical records we have to solve two grand challenges: the efficient capture of physician-gathered information—some of it in a computer-understandable format—and the identification of a minimum but affordable set of variables needed to assess quality and outcomes of care."
To this I would add that when variables are collected, they must include enough qualifiers to allow their proper inclusion in, or exclusion from, the denominator of any quality or outcome measure. Without that, the data are meaningless, and the systems that collect them, no matter how expensive, are worthless.
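The denominator point is easiest to see with numbers. The sketch below is a hypothetical illustration, not a real measure specification: the patient records and field names (`diabetic`, `a1c_controlled`, `in_hospice`) are invented. It computes a "diabetics with controlled HbA1c" rate with and without an exclusion qualifier, and the two answers differ substantially.

```python
# Invented records; the schema is an assumption for illustration only.
patients = [
    {"id": 1, "diabetic": True,  "a1c_controlled": True,  "in_hospice": False},
    {"id": 2, "diabetic": True,  "a1c_controlled": False, "in_hospice": True},
    {"id": 3, "diabetic": True,  "a1c_controlled": False, "in_hospice": False},
    {"id": 4, "diabetic": False, "a1c_controlled": False, "in_hospice": False},
]

def control_rate(records, apply_exclusions):
    """Fraction of diabetic patients whose HbA1c is controlled.

    Quality measures typically exclude patients (e.g., those in hospice)
    for whom the measure does not apply; without the qualifier recorded,
    that exclusion cannot be made.
    """
    denom = [p for p in records if p["diabetic"]]
    if apply_exclusions:
        denom = [p for p in denom if not p["in_hospice"]]
    num = [p for p in denom if p["a1c_controlled"]]
    return len(num) / len(denom)

print(control_rate(patients, apply_exclusions=False))  # 1/3 ≈ 0.33
print(control_rate(patients, apply_exclusions=True))   # 1/2 = 0.50
```

The same underlying care produces a 33% or a 50% "quality" score depending on whether the hospice qualifier was captured, which is the sense in which data collected without such qualifiers are meaningless for measurement.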