For the previous post in this series, click here.
Every EHR has limitations. Some of these can be traced to decisions made early in the design of the operating system, programming language, and/or the application itself. Just as in medicine, a greater understanding of the anatomy and physiology of a computer system’s components can lead to a better understanding of software pathology. The aim of this series is to provide a short course in the subject. It's a bit technical, but easier than fluid and electrolyte balance.
General-purpose computers use a small program to interpret the user’s input and translate that input into commands that the computer can execute. The input may be written in either a programming language or a command language. Of the two, the Command Line Interpreter (CLI) has proven to be the more capable approach and has spawned a variety of specialized variants. For example, enabling a CLI to interpret database queries makes it possible for individuals who are not programmers, but who have learned the query language, to access and manipulate data in databases.
For a data query language to be useful, certain preconditions must be fulfilled, the most important being a data dictionary. This creates an incentive to implement a data dictionary, which is, in itself, a small database that stores information describing each data table and each data element. Computer scientists refer to data about data as “metadata.” The records in a data dictionary have an identical structure, and it is this predictability that makes query languages possible. By accessing the dictionary, a query language can discover the properties of any tables to which the query refers.
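To make this concrete, here is a minimal sketch in Python using SQLite as a modern stand-in (the table name and columns are purely illustrative). SQLite keeps its data dictionary in a table called sqlite_master, and the PRAGMA table_info command exposes per-column metadata, so a query tool can discover a table's structure without any prior knowledge of it:

```python
import sqlite3

# Build a toy database; the queries below learn its structure only
# from the metadata, not from prior knowledge of the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, dob TEXT)")

# SQLite stores its data dictionary in sqlite_master: one record,
# with an identical structure, for every table, index, and view.
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]

# PRAGMA table_info exposes per-column metadata: name, type, etc.
columns = [(name, col_type)
           for _cid, name, col_type, _notnull, _default, _pk
           in conn.execute("PRAGMA table_info(patients)")]

print(tables)   # ['patients']
print(columns)  # [('id', 'INTEGER'), ('name', 'TEXT'), ('dob', 'TEXT')]
```

This is exactly the predictability described above: every record in the dictionary has the same shape, so the same two queries work for any table in any database.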
Language interpreter-based applications, on the other hand, have not incentivized the creation of data dictionaries in the same way. Having a data dictionary does not alter the fact that every query involves writing a program, and the programmers already "know" the structure of each data set. Why go to the extra effort of building what is essentially a separate database to hold the metadata? Using the metadata would require that someone take the time to write a program that writes the query programs; it would, in essence, be a CLI of sorts. Writing such a program turns out to be quite difficult, given the creative ways in which data gets structured when there is no pre-existing convention (imposed by the data dictionary) to keep the developers on the straight-and-narrow.
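The "program that writes the query programs" is less mysterious than it sounds when a data dictionary exists. A hypothetical sketch, again using Python and SQLite rather than MUMPS (the table name is invented for illustration): the function below reads column names from the dictionary and emits a query, so no human ever has to know the table's structure in advance:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meds (id INTEGER, drug TEXT, dose TEXT)")

def build_select(conn, table):
    # Discover the column names from the data dictionary, then
    # emit a query: the metadata writes the program for us.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return "SELECT {} FROM {}".format(", ".join(cols), table)

query = build_select(conn, "meds")
print(query)  # SELECT id, drug, dose FROM meds
```

Without the dictionary, build_select has nothing to consult, and each site's hand-rolled data layout would need its own hand-rolled query code, which is the point of the paragraph above.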
This explains why language interpreter-based systems do not foster standardization, modularization, and reuse. Lacking a CLI, they provide no incentive to create the metadata. Lacking metadata, there is no easy way to share and reuse applications and data (or at least there was not at the time the die was cast). In the case of MUMPS, each site developing applications was forced to invent its own conventions.
To be fair, early development of the VA software centered on the VA File Manager (FileMan), which implemented a rudimentary data dictionary. But remember: a query involves writing a program. Since the typical user was, and is, not a programmer, FileMan included a query facility, a program that prompted the user with a long and tedious series of questions about what data to retrieve and how to display it. It did not allow things that are done routinely with Structured Query Language (SQL) today, such as running a similar query on every table in the database, either by creating a script with a batch of nearly identical queries or by writing a loop that substitutes a different table name into the query on each iteration.
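The loop-over-tables pattern mentioned above can be sketched briefly, again using Python with SQLite as a stand-in (the two tables and their contents are invented for the example). The data dictionary supplies the list of tables, and a single loop runs the same counting query against each one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (id INTEGER, name TEXT);
    CREATE TABLE visits   (id INTEGER, patient_id INTEGER);
    INSERT INTO patients VALUES (1, 'Smith'), (2, 'Jones');
    INSERT INTO visits   VALUES (1, 1);
""")

# The data dictionary lists every table, so one loop can run the
# "same" query on each, substituting the table name each time.
counts = {}
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"):
    (counts[table],) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()

print(counts)  # {'patients': 2, 'visits': 1}
```

Three lines of loop replace what would otherwise be one hand-written query, and one tedious question-and-answer session, per table.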
I may be a bit harsh in my criticism of systems based on language interpreters, but there is no doubt in my mind that the selective advantage (in the Darwinian sense) conferred on a system by a command interpreter is one reason it is rare to find a MUMPS-based system outside of the healthcare niche. Likewise, the special-purpose artificial intelligence (AI) machines are virtually extinct, having been replaced by software solutions running on Unix.
Though not the only culprits, systems based on language interpreters pose an impediment to interoperability. Opportunities to standardize were missed; while technically straightforward, they were not considered important or were perhaps operationally too expensive. Even today, as development begins on new systems, there are decisions to be made. A suboptimal decision, taken because of expediency, the pressure of a deadline, or a lack of resources (real or perceived), can easily be the one that permanently constrains a product or dooms it to oblivion.
The next installment will address another facet of interacting with a computer.
Reference of the week: Overview of MUMPS, Wikipedia.