AI and healthcare: Key considerations

What if the algorithm was manipulated or rendered an inaccurate result in a patient care situation?

According to Statista, the global artificial intelligence (AI) healthcare market, valued at $11 billion in 2021, is projected to be worth almost $188 billion by 2030. But what is AI? The National Institute of Standards and Technology (NIST), adopting the American National Standard Dictionary of Information Technology (ANSI INCITS 172-2002 (R2007)), offers two definitions:

  • A branch of computer science devoted to developing data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement.
  • The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning, learning, and self-improvement.

These “capabilities” of emulating human intelligence are accomplished through the use of algorithms: “[a] clearly specified mathematical process for computation; a set of rules that, if followed, will give a prescribed result” (NIST SP 800-107). Because algorithms are mathematical processes driven by data, they can be manipulated, rely on inadequate data, and/or render inaccurate results. For example, although not a healthcare case, United States of America v. Meta Platforms, Inc., f/k/a Facebook, Inc. (S.D.N.Y.), is instructive on the notion of “algorithmic discrimination.” As the U.S. Department of Justice announced in its settlement with Meta:

“This development marks a pivotal step in the Justice Department’s efforts to hold Meta accountable for unlawful algorithmic bias and discriminatory ad delivery on its platforms,” said Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division. “The Justice Department will continue to hold Meta accountable by ensuring the Variance Reduction System addresses and eliminates discriminatory delivery of advertisements on its platforms. Federal monitoring of Meta should send a strong signal to other tech companies that they too will be held accountable for failing to address algorithmic discrimination that runs afoul of our civil rights laws.”
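
To make “algorithmic discrimination” concrete, consider a minimal sketch in Python. The data, group labels, and decision rule below are entirely hypothetical; the point is only that a rule fit to biased historical records can reproduce that bias while looking like neutral math:

# Toy illustration with entirely hypothetical data: an algorithm fit to
# biased historical labels reproduces that bias as an "objective" rule.
# Group "B" was historically under-referred, so its recorded outcomes
# are all 0 even at high clinical need.

historical = [
    # (group, need_score, was_referred)
    ("A", 9, 1), ("A", 8, 1), ("A", 3, 0), ("A", 2, 0),
    ("B", 9, 0), ("B", 8, 0), ("B", 3, 0), ("B", 2, 0),
]

def fit_cutoffs(rows):
    """Learn, per group, the lowest need_score with a recorded referral."""
    cuts = {}
    for group, need, referred in rows:
        if referred:
            cuts[group] = min(cuts.get(group, need), need)
    return cuts  # a group with no recorded referrals gets no cutoff at all

cuts = fit_cutoffs(historical)  # -> {"A": 8}

def recommend_referral(group, need):
    """Apply the learned rule to a new patient."""
    return group in cuts and need >= cuts[group]

# Two patients with identical clinical need receive different results:
print(recommend_referral("A", 9))  # True  - flagged for care management
print(recommend_referral("B", 9))  # False - never flagged, same need

In a healthcare setting, the same mechanism can surface as a risk score that systematically under-flags patients whose need was historically under-recorded.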

What if the algorithm was manipulated or rendered an inaccurate result in a patient care situation? First, is AI HIPAA compliant? According to a U.S. Department of Health and Human Services (HHS) report, Sharing and Utilizing Health Data for AI Applications, and July 2023 letters sent by the Federal Trade Commission (FTC) and HHS to various companies, HIPAA compliance is an issue in relation to data mining and data tracking without a patient’s and/or consumer’s knowledge and consent. Additionally, if an algorithm is inaccurate because of its data and/or is manipulated to obtain certain results, there could be serious threats to patient care, resulting in adverse patient outcomes, including death. These considerations were highlighted in the U.S. Food and Drug Administration’s (FDA) Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products - Discussion Paper and Request for Feedback, which acknowledges the significant potential that AI/ML holds for improving drug development while highlighting potential harms. In sum, the common thread is ensuring that AI systems are trustworthy.

In September 2021, HHS published the Trustworthy AI (TAI) Playbook, which is neither a formal policy or standard nor an exhaustive guide to building and deploying AI solutions. It does, however, focus on core themes that recur across government agencies: “legal, effective, ethical, safe, and otherwise trustworthy.” (88 Fed. Reg. 22433 (Apr. 13, 2023)). These terms all factor into the definition of “trustworthy AI.”

What this means for unethical and/or untrustworthy AI and the companies that utilize and deploy it is increased government enforcement actions and potential liability under the False Claims Act (FCA). See Doe v. eviCore Healthcare MSI, LLC, No. 22-530-CV, 2023 WL 2249577 (2d Cir. Feb. 28, 2023), in which the Second Circuit was faced with the novel legal theory that use of flawed artificial intelligence systems can constitute a “worthless service” for purposes of FCA liability.

The reality is that AI is still in its infancy despite the potential market. Covered entities and business associates alike, including those covered under broader state law definitions (e.g., Texas HB 300), should implement an AI policy and procedure for evaluating new applications, augment training, and ensure that trustworthiness standards are being met. Doing so could mean the difference between compliant operations and upcoding, inaccurate information in a medical chart, and/or a medical device being calibrated or set improperly, any of which may result in patient harm or death and lead to increased False Claims Act liability. Moreover, every government agency has indicated that human review of AI is necessary. Hence, humans and machines need to learn to cohabitate, as it is not prudent to rely on AI without human oversight.
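
As one hypothetical illustration of what such human review can look like in practice, the sketch below (all names, fields, and codes invented for illustration) queues AI-generated chart suggestions for clinician sign-off rather than committing them directly to the record:

# Hypothetical sketch of a human-in-the-loop gate: AI-suggested chart
# entries are queued for clinician sign-off instead of being committed
# directly to the record. All names and fields are invented.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    patient_id: str
    text: str
    confidence: float  # the model's self-reported confidence, 0.0-1.0

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    chart: list = field(default_factory=list)

    def submit(self, s: Suggestion):
        """Nothing reaches the chart without review, however confident."""
        self.pending.append(s)

    def approve(self, s: Suggestion, reviewer: str):
        """A named clinician signs off before the entry is committed."""
        self.pending.remove(s)
        self.chart.append((s.patient_id, s.text, f"signed: {reviewer}"))

queue = ReviewQueue()
queue.submit(Suggestion("pt-001", "Suggest code E11.9 (type 2 diabetes)", 0.97))
assert queue.chart == []  # high model confidence alone never commits
queue.approve(queue.pending[0], "Dr. Example")
print(queue.chart)

The design point is simply that the gate is structural: review is enforced by the workflow itself, not left to the model’s confidence score.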

Rachel V. Rose, JD, MBA, advises clients on compliance, transactions, government administrative actions, and litigation involving healthcare, cybersecurity, corporate and securities law, as well as False Claims Act and Dodd-Frank whistleblower cases.
