In June of 2020, given the latest bolus of articles re: “technology” applications in healthcare, I ruminated about the deployment (and risk) of artificial intelligence (AI) and machine learning (ML) technologies in the space. The utilization of technology to assist in care delivery, whether off-the-shelf solutions or custom designed AI products to empower decision making/care management, is necessary but should be approached with caution. As I’d noted, and continue to believe, AI and ML are constructs that require a bit of near-term expectation management in healthcare but do have application when deployed with solution-driven clarity. As suggested, while the efficacy and value of AI and ML will improve with time, they are not “the” answer that will remedy the myriad care and cost delivery questions surrounding healthcare in the United States. Owing to space constraints and the fact I am not an AI guru, this column is an overly simplistic noodling of recent AI foibles outside of healthcare that tell a larger story. As in 2020, to level set, I am not an AI programmer, don’t code in Python, and have never built a ML algorithm. My background is 30-plus years of practical experience in healthcare management and delivery dealing with information technology (IT) systems and applications in that time, such as culling quality data and outcomes from electronic medical record (EMR) systems and deploying rudimentary analytics.
An insightful and cautionary tale crafted by Will Douglas Heaven (November 18, 2022 – MIT Technology Review [Why Meta’s latest large language model only survived three days online | MIT Technology Review]) pointed out the folly of unbridled big data aggregation. In short, Mr. Heaven noted that in mid-November Meta (a.k.a. Facebook) deployed a “large language model” (LLM) to assist scientists. This product crunched loads and loads of information and data. Yet within three days of unfurling this silver bullet, Meta pulled the plug on Galactica (the “product”) amid great noise and fanfare. The public demo was yanked because Galactica delivered noise, fakery presented as fact, and biased output. It seems that part of the issue was that the LLM “engine” was unable to distinguish among fact, fiction, and meaningful data, yielding sub-optimal output. Mr. Heaven’s article suggests that not only was much of Galactica’s output inaccurate, “…it made up fake papers (sometimes attributing them to real authors), and generated wiki articles about the history of bears in space.” So, imagine this in healthcare treatment protocols for, say, cancer.
While Galactica’s hiccup did not occur in healthcare, my overly simplistic takeaway from this article is that data simply mashed together does not filter into actionable outputs; instead, it mucks up the product, leaving incongruity, incoherent output, and, in many cases, errors that otherwise should never be made. And in healthcare, that’s dangerous. Simply put, without a structured “engine” build and defined outputs, Galactica seemed destined for the (near-term) AI scrap heap.
You’ve got to break a few eggs. I get that. And that’s not to suggest that Galactica’s shortcomings portend bad things. I’m thankful for those who continue to push the bounds of IT and what can be delivered. My hope is that those ventures will yield outcomes for healthcare that deliver on both quality and cost.
As I’ve suggested, and it’s not rocket science: garbage in, garbage out; it’s (almost) that simple. In the healthcare space, that alone should be reason enough for data scientists to partner with healthcare experts to define the expected outcomes so that the product is built to deliver them.
Is that to say that AI, ML, and LLM will not play a role in the future of healthcare? Certainly not. I believe they will play a significant role. However, short-term challenges will continue as robust IT offerings are unveiled. AI, ML, and other cutting-edge technologies are needed to advance the delivery and coordination of care, squeeze costs and redundancy out of the healthcare system, and help ensure repeatable quality outcomes. But few technologies are perfect, and most require time to germinate as they grow in use and scalability.
AI will continue to grow in use and value in healthcare. Whether it’s in predictive analytics for disease states, cash flow on the revenue cycle side of the business, or value-based care initiatives, AI is here to stay. However, success factors for the growth of AI in healthcare may include, but are not limited to:
· a sound, defined business case (eat the elephant in small bites)
· clear communication of expected outputs between subject matter experts (SMEs) and data scientists
· sound, clean data
· model scalability
The future for AI in healthcare looks bright. As I said in 2020, its application is a marathon, not a sprint.