July 15, 2024

Artificial intelligence in healthcare: defining the most common terms

As healthcare organizations collect more and more digital health data, transforming that information to generate actionable insights has become crucial.

Artificial intelligence (AI) has the potential to significantly bolster these efforts, so much so that health systems are prioritizing AI initiatives this year. Additionally, industry leaders are recommending that healthcare organizations stay on top of AI governance, transparency, and collaboration moving forward.

But to effectively harness AI, healthcare stakeholders need to successfully navigate an ever-changing landscape with rapidly evolving terminology and best practices.

In this primer, HealthITAnalytics will explore some of the most common terms and concepts stakeholders must understand to successfully utilize healthcare AI.

ARTIFICIAL INTELLIGENCE

To understand health AI, one must have a basic understanding of data analytics in healthcare. At its core, data analytics aims to extract useful information and insights from various data points or sources. In healthcare, information for analytics is typically collected from sources like electronic health records (EHRs), claims data, and peer-reviewed clinical research.

Analytics efforts often aim to help health systems meet a key strategic goal, such as improving patient outcomes, enhancing chronic disease management, advancing precision medicine, or guiding population health management.

However, these initiatives require analyzing vast amounts of data, which is often time- and resource-intensive. AI presents a promising solution to streamline the healthcare analytics process.

The American Medical Association (AMA) indicates that AI “broadly refers to the ability of computers to perform tasks that are typically associated with a rational human being — a quality that enables an entity to function appropriately and with foresight in its environment.”

However, the AMA favors an alternative conceptualization of AI that the organization calls “augmented intelligence.” Augmented intelligence focuses on the assistive role of AI in healthcare and underscores that the technology can enhance, rather than replace, human intelligence.

AI tools are driven by algorithms, which act as ‘instructions’ that a computer follows to perform a computation or solve a problem. Using the AMA’s conceptualizations of AI and augmented intelligence, algorithms leveraged in healthcare can be characterized as computational methods that support clinicians’ capabilities and decision-making.

Generally, AI tools can be classified in various ways. IBM, for example, broadly categorizes them based on their capabilities and functionalities, a scheme that covers a plethora of realized and theoretical AI classes and potential applications.

Much of the conversation around AI in healthcare is centered around currently realized AI — tools that exist for practical applications today or in the very near future. Thus, the AMA categorizes AI terminology into two camps: terms that describe how an AI works and those that describe what the AI does.

AI tools can work by following predefined logic (‘rules-based’ systems), by identifying patterns in data (‘machine learning’), or by generating insights through ‘deep learning,’ which uses ‘neural networks’ to loosely simulate the human brain.

In terms of functionality, AI models can use these learning approaches to engage in ‘computer vision,’ a process for deriving information from images and videos; ‘natural language processing’ to derive insights from text; and ‘generative AI’ to create content.

Further, AI models can be classified as either ‘explainable’ — meaning that users have some insight into the “how” and “why” of an AI’s decision-making — or ‘black box,’ a phenomenon in which the tool’s decision-making process is hidden from users.

Currently, all AI models are considered narrow or weak AI, tools designed to perform specific tasks within certain parameters. Artificial general intelligence (AGI), or strong AI, is a theoretical system under which an AI model could be applied to any task.

MACHINE LEARNING

Machine learning (ML) is a subset of AI in which algorithms learn patterns from data without being explicitly programmed. Often, ML tools are used to make predictions about potential future outcomes.

Unlike rules-based AI, ML techniques can use increased exposure to large, novel datasets to learn and improve their own performance. There are three main categories of ML, based on how the algorithm is trained: supervised, unsupervised, and reinforcement learning.

In supervised learning, algorithms are trained on ‘labeled data’ — data inputs associated with corresponding outputs — to identify specific patterns, which helps the tool make accurate predictions when presented with new data.
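As a minimal sketch of learning from labeled data, consider a nearest-neighbor classifier: given labeled examples, it labels a new data point with the label of the closest training example. The data values and "risk" labels below are invented for illustration only, not clinical guidance.

```python
import math

# Toy labeled dataset: (systolic BP, BMI) -> risk label.
# Values are invented for illustration, not clinical guidance.
train = [
    ((118, 22.0), "low"),
    ((121, 24.5), "low"),
    ((145, 31.0), "high"),
    ((150, 29.5), "high"),
]

def predict(x, labeled_data):
    """1-nearest-neighbor: return the label of the closest training example."""
    nearest = min(labeled_data, key=lambda item: math.dist(x, item[0]))
    return nearest[1]

print(predict((120, 23.0), train))  # nearest labeled examples are "low"
```

A new point simply inherits the label of whichever labeled example it most resembles, which is the essence of making predictions from labeled training data.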

Unsupervised learning uses unlabeled data to train algorithms to discover and flag unknown patterns and relationships among data points.
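A minimal sketch of this idea is k-means clustering, which groups unlabeled values by similarity without ever being told what the groups mean. The lab-value-like numbers below are invented for illustration.

```python
# Two-cluster k-means on unlabeled 1-D data (e.g., lab values).
def kmeans_1d(points, iters=10):
    c1, c2 = min(points), max(points)  # initialize centroids at the extremes
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)  # recompute centroids
    return sorted(g1), sorted(g2)

# The algorithm separates the two natural groups without any labels.
low, high = kmeans_1d([4.9, 5.1, 5.0, 9.8, 10.2, 10.0])
```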

Semi-supervised machine learning relies on a mix of supervised and unsupervised learning approaches during training.

Reinforcement learning relies on a feedback loop for algorithm training. Rather than learning from labeled examples, the algorithm takes actions in an environment, such as making a prediction, to generate an output. If the algorithm’s action and output align with the programmer’s goals, its behavior is “reinforced” with a reward.

In this way, algorithms developed using reinforcement techniques generate data, interact with their environment, and learn a series of actions to achieve a desired result.
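That feedback loop can be sketched with tabular Q-learning on a toy "corridor" environment. The healthcare analogy in the comments is purely hypothetical; the reward-driven update rule is the core of the technique.

```python
import random

# Tabular Q-learning sketch on a toy 1-D corridor: states 0..4, reward at state 4.
# Hypothetical analogy: each action is a care step, the reward marks the desired outcome.
random.seed(0)
n_states, goal = 5, 4
Q = [[0.0, 0.0] for _ in range(n_states)]  # value of (state, action) pairs
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(goal, state + (1 if action == 1 else -1)))  # 0=left, 1=right
    return nxt, (1.0 if nxt == goal else 0.0)                    # reward at the goal

for _ in range(200):                       # training episodes
    s = 0
    while s != goal:
        # Explore occasionally; otherwise take the currently best-valued action.
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # Feedback loop: the reward "reinforces" the action that produced it.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right toward the reward.
policy = [Q[s].index(max(Q[s])) for s in range(goal)]
```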

These approaches to pattern recognition make ML particularly useful in healthcare applications like medical imaging and clinical decision support.

DEEP LEARNING

Deep learning (DL) is a subset of machine learning in which algorithms analyze data in ways that mimic how humans process information. DL algorithms rely on artificial neural networks (ANNs), which loosely imitate the brain’s neural pathways.

ANNs utilize a layered algorithmic architecture, allowing insights to be derived from how data are filtered through each layer and how those layers interact. This enables deep learning tools to extract more complex patterns from data than their simpler AI- and ML-based counterparts.
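That layered filtering can be sketched as a forward pass through a tiny two-layer network. The weights below are hand-picked for illustration; real networks learn them from data.

```python
# Forward pass through a tiny two-layer neural network.
# Weights are hand-picked for illustration; real networks learn them from data.
def relu(v):
    return [max(0.0, x) for x in v]  # common nonlinearity between layers

def layer(inputs, weights, biases):
    # Each output neuron is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                             # input features
h = relu(layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]))   # hidden layer filters the input
y = layer(h, [[1.0, 1.0]], [0.0])                           # output layer combines hidden features
```

Insights emerge from how the data are transformed at each layer and how the layers interact, exactly as the prose above describes.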

Like machine learning models, deep learning algorithms can be supervised, unsupervised, or somewhere in between. There are four main types of deep learning used in healthcare: deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs).

DNNs are a type of ANN with a greater depth of layers. The ‘deeper’ the DNN, the more data translation and analysis tasks can be performed to refine the model’s output.

CNNs are a type of DNN that is specifically applicable to visual data. With a CNN, users can evaluate and extract features from images to enhance image classification.
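The feature-extraction step at the heart of a CNN can be sketched as a small filter sliding over an image. Here a hand-made 2x2 kernel detects the vertical edge in a toy 4x4 "image"; real CNNs learn their kernels from data.

```python
# Sliding a 2x2 filter over a toy 4x4 "image" to produce a feature map.
# The kernel responds strongly where brightness jumps from left to right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1], [-1, 1]]  # hand-made vertical-edge detector

def convolve(img, k):
    n, m = len(k), len(k[0])
    return [[sum(k[i][j] * img[r + i][c + j] for i in range(n) for j in range(m))
             for c in range(len(img[0]) - m + 1)]
            for r in range(len(img) - n + 1)]

# The feature map peaks where the edge sits, in the middle column.
feature_map = convolve(image, kernel)
```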

RNNs are a type of ANN that relies on temporal or sequential data to generate insights. These networks are unique in that, where other ANNs treat each input independently, RNNs carry information from earlier inputs in a sequence forward, allowing previous elements to influence later outputs.

RNNs are commonly used to address challenges related to natural language processing, language translation, image recognition, and speech captioning. In healthcare, RNNs have the potential to bolster applications like clinical trial cohort selection.
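The recurrence itself can be sketched in a few lines: a single hidden state is updated at each step, so identical inputs produce different outputs depending on what came before. The weights are arbitrary illustrative values, not learned ones.

```python
import math

# Minimal recurrent step: the hidden state carries information from earlier
# inputs in the sequence forward to influence later outputs.
def rnn(sequence, w_in=0.5, w_rec=0.8):
    h = 0.0
    outputs = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)  # new state depends on the old state
        outputs.append(h)
    return outputs

# The same input value yields a different output at each position,
# because the hidden state remembers the earlier part of the sequence.
outs = rnn([1.0, 1.0, 1.0])
```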

GANs pit two neural networks against each other: a generator that creates synthetic data and a discriminator that tries to distinguish that synthetic data from real examples. Like other types of generative AI, GANs are popular for voice, video, and image generation. In healthcare, GANs can generate synthetic medical images to train diagnostic and predictive analytics-based tools.

Recently, deep learning technology has shown promise in improving the diagnostic pathway for brain tumors.

COGNITIVE COMPUTING

With their focus on imitating the human brain, deep learning and ANNs are similar but distinct from another analytics approach: cognitive computing.

The term typically refers to systems that simulate human reasoning and thought processes to augment human cognition. Cognitive computing tools can aid decision-making and help humans solve complex problems by parsing vast amounts of data and combining information from various sources to suggest solutions.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, ‘remember’ previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information.

To achieve this, these tools use self-learning frameworks, ML, DL, natural language processing, speech and object recognition, sentiment analysis, and robotics to provide real-time analyses for users.

Cognitive computing’s focus on supplementing human decision-making power makes it promising for various healthcare use cases, including patient record summarization and acting as a medical assistant to clinicians.

NATURAL LANGUAGE PROCESSING

Natural language processing (NLP) is a branch of AI concerned with how computers process, understand, and manipulate human language in verbal and written forms.

Using techniques like ML and text mining, NLP is often used to convert unstructured language into a structured format for analysis, translate text from one language to another, summarize information, or answer a user’s queries.
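A deliberately simple, rule-based sketch of that unstructured-to-structured step is pattern matching over free text. The note text and field names below are invented, and real clinical NLP uses statistical models rather than a handful of regular expressions.

```python
import re

# Rule-based sketch: extract structured fields from an invented free-text note.
# Real clinical NLP uses statistical models, not a handful of regexes.
note = "Pt reports BP 142/91 and temp 99.1 F; started metformin 500 mg."

structured = {
    "blood_pressure": re.search(r"BP (\d+/\d+)", note).group(1),
    "temperature_f": float(re.search(r"temp ([\d.]+) F", note).group(1)),
    "medication": re.search(r"started (\w+) \d+ mg", note).group(1),
}
```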

There are also two subsets of NLP: natural language understanding (NLU) and natural language generation (NLG).

NLU is concerned with computer reading comprehension, focusing heavily on determining the meaning of a piece of text. These tools use the grammatical structure and the intended meaning of a sentence — syntax and semantics, respectively — to help establish a structure for how the computer should understand the relationship between words and phrases to accurately capture the nuances of human language.

Conversely, NLG is used to help computers write human-like responses. These tools combine NLP analysis with rules from the output language, like syntax, lexicons, semantics, and morphology, to choose how to appropriately phrase a response when prompted. NLG drives generative AI technologies like OpenAI’s ChatGPT.

In healthcare, NLP can sift through unstructured data, such as EHRs, to support a host of use cases. To date, the approach has supported the development of a patient-facing chatbot, helped detect bias in opioid misuse classifiers, and flagged contributing factors to patient safety events.

GENERATIVE AI

McKinsey & Company describes generative AI (genAI) as “algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.”

GenAI tools take a prompt provided by the user via text, images, videos, or other machine-readable inputs and use that prompt to generate new content. Generative AI models are trained on vast datasets to generate realistic responses to users’ prompts.

GenAI tools typically rely on other AI approaches, like NLP and machine learning, to generate pieces of content that reflect the characteristics of the model’s training data. There are multiple types of generative AI, including large language models (LLMs), GANs, RNNs, variational autoencoders (VAEs), autoregressive models, and transformer models.
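The autoregressive idea behind several of these model types can be sketched with a word-level bigram model. Real LLMs use transformers and vastly more data, but the generate-the-next-token loop is the same; the training sentence here is invented.

```python
import random
from collections import defaultdict

# Toy autoregressive generator: a word-level bigram model trained on a tiny
# invented corpus. It produces new sequences that mirror its training data.
training_text = "the patient is stable the patient is improving the chart is updated"

model = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    model[prev].append(nxt)            # record which words follow which

def generate(start, length, seed=0):
    random.seed(seed)
    out = [start]
    while len(out) < length and model[out[-1]]:
        out.append(random.choice(model[out[-1]]))  # sample the next word
    return " ".join(out)

# Every generated transition is one the model saw during training.
sample = generate("the", 5)
```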

Since ChatGPT’s release in November 2022, genAI has garnered significant attention from stakeholders across industries, including healthcare. The technology has demonstrated significant potential for automating certain administrative tasks: EHR vendors are using generative AI to streamline clinical workflows, health systems are pursuing the technology to optimize revenue cycle management, and payers are investigating how genAI can improve member experience. On the clinical side, researchers are also assessing how genAI could improve healthcare-associated infection (HAI) surveillance programs.

Despite the excitement around genAI, healthcare stakeholders should be aware that generative AI can exhibit bias, like other advanced analytics tools. Additionally, genAI models can ‘hallucinate’ by perceiving patterns that are imperceptible to humans or nonexistent, leading the tools to generate nonsensical, inaccurate, or false outputs.
