Trust is an essential element of medicine. Trust is also an important consideration when dealing with computers, data, and information. You are forced to make trust decisions about practitioners of whom you have little personal knowledge and computer systems designed and implemented by individuals who have their own agendas, who may not understand medical information science, and who have never practiced medicine. If you choose to trust data or an information source that is, in fact, untrustworthy, you will end up misinformed and may make inappropriate decisions. The same applies if you fail to trust sources that warrant your trust. Making trust decisions in the medical setting can be difficult.
It's good to know that trust is important, but what is trust? Wikipedia says that, in a social context, trust refers to a situation in which one party is willing to rely on the actions of another party. A National Research Council committee devoted to computer security concluded that "trust is a belief that a system meets its specifications."
In my opinion, neither of these definitions is terribly useful. They merely indicate that there are situations in which people trust something but neither accounts for how or why the person or thing warrants that trust. Prompted by the NRC book, I published a paper noting the subjective nature of the extant definitions and proposing a formal, quantitative method for assessing/assigning trust. Trust, I said, is a function of five elements: identity, reputation, capability, stake, and benefit (or risk, if negative).
Identity is traceable and verifiable; it is difficult to trust someone or something that is masquerading as something else. Reputation is enhanced when others have interacted with the person or thing in the past and got what they expected without nasty surprises. Capability (or knowledge) is self-explanatory: who would trust a doctor to perform brain surgery who had neither studied medicine nor done the procedure before under supervision? Stake refers to the person or thing having an interest in a "good" outcome and being free from conflicts of interest. Those elements can be combined to form a "trust equation," but they need another element as a coefficient: benefit/risk. If the potential risk or benefit is small, the consequences of getting one or more of the factors wrong are minimal; if large, great care is called for.
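To make the idea concrete, here is a minimal sketch of how such a trust equation might look in code. The four factor names and the benefit/risk coefficient come from the discussion above, but the 0-to-1 scoring scale, the averaging, and the way stakes raise the bar are illustrative assumptions of mine, not the formulation from the published paper.

```python
def trust_score(identity, reputation, capability, stake, benefit_risk):
    """Combine the four trust factors (each scored 0.0-1.0) and scale
    the result by benefit_risk (0.0 = trivial consequences, 1.0 = very
    high stakes). All scoring choices here are illustrative assumptions."""
    base = (identity + reputation + capability + stake) / 4.0
    # Squaring the base score at high stakes means a middling factor
    # profile earns much less trust when the consequences are severe,
    # mirroring the "great care is called for" point in the text.
    return base * (1.0 - benefit_risk) + base ** 2 * benefit_risk

# The same factor scores yield less trust as the stakes rise:
casual = trust_score(0.7, 0.6, 0.8, 0.5, benefit_risk=0.1)
surgical = trust_score(0.7, 0.6, 0.8, 0.5, benefit_risk=0.9)
assert surgical < casual
```

The design choice being illustrated is only that benefit/risk acts as a coefficient on the other factors rather than as a fifth additive term, as the text describes.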
Here are some questions to think about in terms of trust.
• When your EHR reports a patient's diagnosis by ICD code, should you trust it? Who assigned the code? For what purpose?
• When you search for information about a patient in your EHR, should you trust what is displayed? Is it everything about the patient? Is it all about your patient, not some other patient?
• When you read a note in the chart, does it matter who wrote it? Do you trust that they did not blindly accept some boilerplate or template? Did not copy/paste a prior note without updating it?
• Can you trust the "data" stored in an EHR enough to ignore the narrative notes, or to be sloppy when creating your own?
• Will other practitioners trust your work?
• Why should you trust any commentator who asserts that today's EHRs have fatal flaws? This conclusion flies in the face of everything that government agencies and EHR vendors are saying.
The trust equation provides a framework for answering questions like these. The alternatives include opinion, emotion, and blind faith, none of which lead to justifiable or intellectually satisfying decisions. The risks that result from trusting, or failing to trust, inappropriately can be very costly. By comparison, the risk of being cautious and skeptical is small.
Next, I will delve into the merits of narrative notes and then discuss why, alone, they are not enough.