CMS Wants to See How AI Can Disrupt Healthcare

The agency is launching an AI Health Outcomes Challenge to find ways of predicting health outcomes.

Saying it’s “time for a disruption” in healthcare, the Centers for Medicare and Medicaid Services wants to join the vanguard of artificial intelligence in the medical field, starting by getting a comprehensive idea of what AI can and can’t do to improve healthcare. And it doesn’t want to limit its input just to health practitioners.

CMS’ Innovation Center has announced the AI Health Outcomes Challenge for early 2019, with the goal of using new AI and analytics methods to better predict health outcomes and improve healthcare delivery. With the challenge, CMS wants to look beyond existing uses of the technology, instead flooding the zone with AI possibilities. It also wants input from outside familiar terrain, inviting tech companies, academic institutions, and scientists from across the AI landscape, not just those involved in healthcare, to join clinicians and patients in the effort.

“It’s not enough to build on the technology that currently exists. We need to ask bold questions, like how artificial intelligence (AI) can transform and disrupt how we think about healthcare delivery,” CMS said in announcing the challenge.

Although details of the challenge are still being formed, CMS said it is looking to apply the technology to all health care services, incorporating AI into new payment and service delivery models as well as medical care. “The goal is to help the healthcare system deliver the right care, at the right time, in the right place, and by the right people,” the agency said in its announcement.

Making the Rounds

The wide-ranging scope of CMS’ challenge reflects the impact that AI is already having in the field, with its ability to crunch massive amounts of data, recognize patterns, predict likely outcomes and draw conclusions. Already, AI systems have turned in some impressive results in detecting signs of cancer, diagnosing heart disease, and sequencing genetic data, to name a few medical applications. Earlier this year, the Food and Drug Administration began approving AI-powered medical tools, and announced plans to accelerate development and clinical trials of AI devices.

“Artificial intelligence, particularly efforts to use machine learning . . . holds enormous promise for the future of medicine, and we’re actively developing regulatory framework to promote innovation and the use of AI-based technologies,” FDA Commissioner Scott Gottlieb said earlier this year.

AI is also being used in a variety of patient monitoring applications and to make hospitals run more efficiently. Hospitals have tested it to take over data collection duties from ICU nurses and free up their time for patient care, to monitor high-risk patients for early signs of sepsis, and to track patients’ movements and send alerts if they’re at risk of falling. It’s even been used to monitor the effects of light and noise levels on infants, with an eye toward improving their care by improving their environment. Scheduling surgeries at optimal times and reducing wait times are among other uses. AI is also seen as an important emerging tool for diagnosing mental health issues.

But for all of its promise, AI isn’t perfect. Along with exploring innovative, potentially game-changing uses for the technology, CMS, FDA and everyone else in the field say they need to keep a tight rein on its powers, particularly with regard to autonomy. As Gottlieb said, “. . . even as we cross different tiers in innovation, we must make sure these novel technologies can deliver benefits to patients by meeting our standards for safety and effectiveness.”

Even IBM’s Watson, which has been a pioneer in medical research and treatment, has slipped up. According to a July report in Stat, Watson for Oncology gave faulty treatment recommendations for cancer patients while being tested at Memorial Sloan Kettering Cancer Center (although IBM said the recommendations were made in hypothetical, rather than real, cases). A study released this month by the Icahn School of Medicine at Mount Sinai in New York found blind spots in AI’s ability to analyze X-rays when dealing with images from more than one hospital’s system.

Straight Talk

As AI systems become more powerful, and as medical practitioners rely on them more and more, the risks of mistakes will become a bigger concern.

One area CMS could explore is something AI can’t do at the moment: explain itself. AI, whether in the form of machine learning, deep learning or neural network systems, can perform amazingly complex reasoning processes to get to a conclusion, but if a human operator asks how it reached that conclusion, the machine can’t explain it in human terms.
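A minimal sketch makes the gap concrete. The snippet below is purely illustrative and is not anything CMS or the researchers mentioned here have built: it trains a small scikit-learn neural network on a public diagnostic dataset, which will readily hand back a prediction but no rationale, and then uses a post-hoc tool (permutation importance) that can only approximate which inputs the model leaned on.

```python
# Illustrative sketch only: a black-box model predicts, but cannot explain.
# Dataset and model choices are assumptions made for demonstration purposes.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)

# The model returns an answer, but no rationale a clinician could interrogate.
print("prediction for one patient:", model.predict(X_test[:1]))

# Post-hoc tools can only approximate an explanation, e.g. by measuring how
# much shuffling each input feature degrades accuracy (permutation importance).
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = result.importances_mean.argsort()[::-1][:3]
print("features the model appears to lean on most:", top_features)
```

Even that output is an after-the-fact statistical estimate, not the model accounting for its own reasoning, which is the distinction driving the explainability efforts described below.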

It’s the kind of thing that can lead to a lack of trust between humans and machines, and stall the human-machine teaming plans of government and other sectors, particularly when it concerns medical diagnoses and treatment recommendations.

That explanation gap could even get wider as systems close in on what the Defense Advanced Research Projects Agency calls “third wave” AI technology. Currently, almost all AI systems follow clearly defined rules (first wave systems, such as those playing chess or diagnosing cancer cells) or can adapt to some changes but require extensive training (second wave, such as those used for natural language processing, image recognition or driverless cars).

Third wave AI systems will work more intuitively, based on fewer examples and less training, and be able to apply themselves to problems they might not have been trained for. The more machines are able to think on their own, the more important it will be for them to explain their thinking.

Researchers are trying to get behind AI’s thought processes in a number of ways. OpenAI, a non-profit research institute backed by the likes of Microsoft, Amazon, Elon Musk and others, is studying AI’s thinking by having systems argue with each other, thus revealing how they reached a decision.

DARPA is trying to develop a way for humans and machines to engage in some plain speaking with its Explainable Artificial Intelligence (XAI) program. While DARPA’s focus is on warfighters’ ability to converse with the machines — saying it’s “essential” to future military operations — that kind of rapport would seem to be equally vital to doctors who need to trust an AI system’s treatment recommendation or mental health diagnosis.