Computer scientists have not been asleep at the stick for the last 50 years. Many recognize that the techniques and tools developed by the pioneers had, and still have, deficiencies in both concept and execution.
This is not a criticism. They did pretty well given the novelty of it all and the rapid rate at which it evolved. Nevertheless, it takes some experience before one can look at what was done before and imagine something different. After all: Experience is what you get when you don’t get what you want. You’ll never get any experience if you never do anything that you perceive as being suboptimal. The big question is: What happens as a result of having all this experience? The whole idea of evidence-based medicine is to use validated experience to change behavior.
In computer science, as in medicine, investigators are using experience to guide further research and development. Experimental work has been done on basic programming languages, algorithms, how best to represent real-world data in computer storage, and artificial intelligence. One especially interesting technique that has emerged is the creation of “domain-specific languages” (DSL), tailored to the needs of a particular subject matter. Some DSLs have gained wide recognition and acceptance.
Examples include HTML for web markup; Mathematica and Maxima for symbolic mathematics; spreadsheet formulas and macros; and SQL for relational database queries. Many other DSLs have been developed but are not part of the awareness of the technical community. They quietly do whatever their developers intended but have no impact on the craft of programming in general.
Most DSLs share one property — they are designed to "sit on top" of a general-purpose language like C or Java. As such, no matter how effective they may be in making it easier to express the computing needs of a specific domain, they rarely make any attempt to mask or improve upon the data types and other fundamental properties of the core language. As long as those native core properties do not interfere with the desired functionality, this limitation will pass unnoticed — or will it?
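The "sit on top" pattern can be illustrated with a small sketch in Python (all class and field names here are hypothetical, chosen only for illustration): an embedded query-builder DSL reads naturally in its domain, yet the host language's native data types — floating-point arithmetic included — remain fully exposed underneath it.

```python
# A minimal sketch of an "internal" DSL: a tiny query builder embedded
# in Python. The DSL layer makes domain intent readable, but it does
# not (and typically cannot) mask the host language's data types.

class Query:
    """Builds a SQL-like SELECT statement from chained conditions."""

    def __init__(self, table):
        self.table = table
        self.conditions = []

    def where(self, field, op, value):
        # Returning self enables chaining, which gives the DSL its
        # fluent, domain-oriented feel.
        self.conditions.append((field, op, value))
        return self

    def to_sql(self):
        clauses = " AND ".join(
            f"{field} {op} {value!r}"
            for field, op, value in self.conditions
        )
        return f"SELECT * FROM {self.table} WHERE {clauses}"


q = Query("patients").where("age", ">", 65).where("status", "=", "active")
print(q.to_sql())
# → SELECT * FROM patients WHERE age > 65 AND status = 'active'

# The DSL cannot improve on the core language's fundamentals: a dosage
# computed through this layer is still a binary float underneath.
print(0.1 + 0.2 == 0.3)  # → False
```

The fluent surface looks domain-specific, but the `False` on the last line is the host language showing through — exactly the limitation the paragraph above describes.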
Brian Fitzgerald, a noted computer scientist in Ireland, observes that “[a]lthough barely 50 years old, the software domain has already endured one well-documented crisis [that he calls Software Crisis 1.0] …with software taking longer and costing more to develop than estimated, and not working very well when eventually delivered. …The past 50 years have also seen enormous advances in hardware capability …Unfortunately, we haven’t seen similar advances in software development capability, giving rise to Software Crisis 2.0. Individual efforts seek to address this crisis — data analytics, parallel processing, new development methods, cloud services — but they’re disjointed and not likely to deliver the software development capacity needed.”
In other words, most of the work that has been done by computer science researchers, as well as the work that could have been done but was not, has had little impact on the techniques and tools used by mainstream developers — including those who built today’s EHRs.
Software Crisis 1.0 did not begin to afflict EHRs until about 1980 — there were no EHRs to speak of prior to that. Progress in medical computing has always seemed to lag behind other computing domains by about 20 years. This suggests that, for medicine, Software Crisis 1.0 has a while longer to run. The real question is whether medicine will even get the "opportunity" to experience Software Crisis 2.0, or whether the stagnating influence of externally mandated requirements will keep it frozen in the past. If there was ever a domain that needed new software development capabilities and a new domain-specific programming language, it is medicine!
Find out more about Dan Essin and our other Practice Notes bloggers.