Here are some additional thoughts on whether your medical practice's EHR should be located in the cloud or on the ground.
In response to last week's blog, Tom emrTECH posted the following comment: “Data location of cloud providers will become key here. Vendors will have multiple data centers and thus reduce the distance of the pipe [and] increase your speed between the cloud and your browser. This is an easy infrastructure fix compared to the downsides of software; upgrades (queue headaches, data loss, hair pulling, etc.)”
Tom makes two valid and important points: 1) cloud vendors can improve performance in a variety of ways, including by housing your data in a data center in your geographic vicinity; and 2) there are drawbacks to, and headaches associated with, deploying an EHR in your own office.
Neither of these points is at odds with my basic message:
• Cloud computing is a new name for an old idea. Don't choose the cloud because it is the latest, greatest thing. Choose it if it makes sense for you.
• Software developers have ridden the coattails of Moore's Law and have gotten away with making applications unnecessarily bloated and computationally intensive. Evidence that web developers are equally guilty can be seen every day as your favorite sites load ever more slowly.
• Usable network throughput rates have not followed Moore's Law (doubling every two years), and likely cannot. Connections are faster now than they used to be, but with that speed comes increased recurring cost. Poorly written or poorly behaving cloud software can easily overwhelm the capacity of the network connection that you have today. If that happens, the fact that it may be faster next year won't help you stay in business until next year.
My concerns are my own, and my conclusions are, of course, nothing more than an educated guess. I've spent many years getting educated (experience is what you get when you don't get what you want), but, as my daddy and my broker have told me, past performance is no guarantee of future performance. So I have the following additional, and slightly repetitive, observations:
• Until now, the maximum effective throughput of network technology and the Internet has not doubled every two years as has the speed of computer chips. That could change.
• Developers, often working with the latest workstations and test systems connected by high-speed LANs, have been increasing the complexity of software at a rate that has essentially consumed all of the speed improvements produced by Moore's Law. This leaves the average user with applications that are not functionally much faster now than they were 20 years ago. This could change.
• Even if the speeds of the Internet backbone increase dramatically and cloud vendors employ strategies that move data closer to users, thereby minimizing the number of hops that a data packet must traverse, the fact is that the last mile, the connection between the user and the Internet backbone, in most cases does not use that super-high-speed technology. Last-mile connections are often a 2 Mbps to 20 Mbps cable modem or a 0.5 Mbps to 2.0 Mbps DSL line. If one is fortunate enough to live in the right service area and is willing to pay more, it may be possible to obtain a fiber-optic connection with speeds up to 150 Mbps to 300 Mbps, but the typical fiber-optic customer can expect speeds from 3 Mbps to 75 Mbps. Some cable services claim to offer speeds as high as 50 Mbps. Of course, as they say, actual performance may vary, and it is common for customers to realize effective speeds that are only 10 percent to 20 percent of the advertised maxima. I personally use an Internet service that advertises 10 Mbps to 15 Mbps download speeds. In practice, during the quietest time of day, the speed I actually get is 5.042 Mbps.
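The arithmetic behind that gap is worth making concrete. The sketch below uses the advertised and measured speeds quoted above; the 10 MB transfer size is a hypothetical example (roughly one large scanned document or image), not a figure from any particular EHR.

```python
# Rough arithmetic behind "actual performance may vary."
# Advertised and measured speeds are the figures quoted in the post;
# the 10 MB file size is a hypothetical example.

advertised_mbps = 10.0   # low end of the advertised 10-15 Mbps range
measured_mbps = 5.042    # speed actually measured at the quietest time of day

fraction = measured_mbps / advertised_mbps
print(f"Measured speed is {fraction:.0%} of the advertised minimum")

file_mb = 10                        # hypothetical 10 MB document
file_megabits = file_mb * 8         # 1 byte = 8 bits
seconds = file_megabits / measured_mbps
print(f"Downloading {file_mb} MB at {measured_mbps} Mbps takes ~{seconds:.1f} s")
```

Even at the quietest time of day, that is roughly 16 seconds for a single 10 MB transfer, before any sharing of the line with other machines in the office.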
• Remember too that this last-mile connection is generally shared by all of the machines that you have working simultaneously and, depending on the technology, perhaps with other nearby customers as well. If, on the other hand, you were deployed locally on a 1 Gigabit network using Ethernet switches and multiple network adapters in your server, each workstation could achieve data rates approaching 1 Gbps.
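To see what that sharing means in practice, here is a back-of-the-envelope comparison of a shared last-mile connection against a switched 1 Gbps LAN. The link speeds come from the figures above; the workstation count and transfer size are hypothetical assumptions chosen only for illustration.

```python
# Back-of-the-envelope comparison: shared last-mile uplink vs. local
# 1 Gbps switched Ethernet. Link speeds follow the figures in the post;
# the office size and transfer size are hypothetical assumptions.

last_mile_mbps = 5.0    # effective cable/DSL speed, per the post
lan_mbps = 1000.0       # 1 Gbps switched Ethernet
workstations = 10       # hypothetical office sharing the one uplink

per_station_wan = last_mile_mbps / workstations  # everyone splits the pipe
transfer_mb = 50                                 # hypothetical batch of charts
megabits = transfer_mb * 8

wan_seconds = megabits / per_station_wan
lan_seconds = megabits / lan_mbps  # each switched port runs near full speed
print(f"Shared WAN: ~{wan_seconds:.0f} s   Local LAN: ~{lan_seconds:.1f} s")
```

Under these assumptions, each workstation waits minutes over the shared uplink versus well under a second on the local network, which is the gap the bullet above is pointing at.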
Find out more about Daniel Essin and our other Practice Notes bloggers.