Greater regulatory and policy clarity may help clinicians optimize their use of artificial intelligence in treating patients with cancer.
Although use cases for AI tools in oncology are increasing, implementing this technology raises important regulatory considerations. In an oral presentation at the NCCN Policy Summit, Travis Osterman, DO, MS, FAMIA, FASCO, discussed the implementation of artificial intelligence (AI) into clinical practice.1 He first described how these tools are used at his own institution, then reviewed regulatory updates pertaining to their use, examined ongoing challenges facing the implementation of AI into clinical practice, and concluded with policy opportunities for the technology.
Osterman began by suggesting that AI encompasses more than large language models (LLMs), pointing to natural language processing (NLP) and data sciences/analytics as further applications of AI. Furthermore, he outlined a framework for the cancer journey based on a review he co-authored with researchers at Mayo Clinic, which described the current state of cancer care, as well as opportunities to implement AI into clinical practice.2 This framework encompassed the entire cancer care continuum, including prevention and end-of-life care, as well as diagnosis, treatment, and survivorship.
Osterman is an associate vice president for Research Informatics at Vanderbilt University Medical Center (VUMC) and the director of Cancer Clinical Informatics, as well as an assistant professor in the Department of Biomedical Informatics and the Division of Hematology and Oncology at Vanderbilt-Ingram Cancer Center.
In 2023, Osterman was among 3 executives tasked with performing an AI inventory at VUMC, which found 131 unique instances of AI being implemented or already implemented at the institution.
“That number may be 10 [times] that today, and was probably an underestimate at that time,” he said in the presentation.
He then shared common themes that emerged during the AI inventory, including improving clinical operations, professional development and recruitment, platforms and frameworks, and expanding data sets. Additionally, he introduced selected AI implementations at Vanderbilt’s center: an ambient scribe, surgical planning, infusion scheduling, and radiology critical alerts.
Regarding the ambient scribe, Osterman explained that the system records audio during inpatient and outpatient interviews, correctly attributes speech to individual speakers, and summarizes the information in a document akin to a clinical note; a simplified sketch of this pipeline follows the quote below.
“We have been implementing this for at least the last year to year and a half,” he explained. “As far as implementation goes, you know that the rollout is going well when you are trying on the IT side to slow it down, meaning that we have so many people clamoring to use this that we are struggling to get the licenses to roll that out.”
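A minimal sketch of that pipeline, assuming stub functions in place of the speech diarization and summarization models a production scribe would use (this is not Vanderbilt's implementation):

```python
# A minimal sketch (not Vanderbilt's system) of the ambient scribe
# pipeline described above: record audio, attribute speech to
# individual speakers, and summarize into a draft clinical note.
# Both model calls are illustrative stubs.
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str  # e.g., "clinician" or "patient"
    text: str

def diarize(audio_path: str) -> list[Utterance]:
    """Stand-in for a speech-to-text plus speaker-diarization model;
    ignores its input and returns canned utterances."""
    return [
        Utterance("clinician", "How has the fatigue been since cycle 2?"),
        Utterance("patient", "Better this week, maybe a 3 out of 10."),
    ]

def summarize(utterances: list[Utterance]) -> str:
    """Stand-in for an LLM that drafts a clinical-note summary
    from the attributed transcript."""
    transcript = "\n".join(f"{u.speaker}: {u.text}" for u in utterances)
    return "SUBJECTIVE: Fatigue improved to 3/10 since cycle 2.\n---\n" + transcript

print(summarize(diarize("visit_audio.wav")))
```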
With surgical planning, Osterman highlighted work by one of his colleagues, Michael Topf, MD, MSCI, assistant professor of Otolaryngology-Head and Neck Surgery, who used augmented reality to improve communication with the pathology team. In a press release from VUMC regarding the development of a head-mounted augmented reality system, Topf discussed his implementation of a protocol to create models of resected tumors for multidisciplinary use.2
“We came up with a way to 3D scan a surgical specimen in real time in less than 10 minutes prior to processing and not interfere with all the other important things that are going on in the pathology lab,” said Topf.2 “Encouragingly, this is a widely transferable practice and would be applicable to most cancer surgeries, from orthopedic oncology to breast cancer.”
Additionally, regarding infusion scheduling, his institution has implemented a system that augments nurse scheduling so that nurses are not required "to be in 2 places at once" infusing multiple patients simultaneously; the system distributes the workload more evenly and permits more proactive infusion scheduling (a simplified illustration follows this paragraph). Finally, regarding radiology critical alerts, AI is being used to flag radiology reads that may pose a higher risk so they can be reviewed expeditiously, helping optimize radiology practice amid increasing demands on radiologists.
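Returning to the infusion-scheduling example, the workload-leveling idea can be sketched with a simple greedy heuristic; this is illustrative only, not the deployed system.

```python
# Illustrative only (not the deployed system): a tiny greedy
# load-leveling heuristic of the kind scheduling tools use to
# avoid asking one nurse "to be in 2 places at once."
import heapq

def assign_infusions(nurses: list[str], durations_min: list[int]) -> dict[str, list[int]]:
    """Assign each infusion to the nurse with the least booked time."""
    load = [(0, nurse) for nurse in nurses]  # (booked minutes, nurse)
    heapq.heapify(load)
    schedule: dict[str, list[int]] = {nurse: [] for nurse in nurses}
    for duration in sorted(durations_min, reverse=True):  # longest first
        booked, nurse = heapq.heappop(load)
        schedule[nurse].append(duration)
        heapq.heappush(load, (booked + duration, nurse))
    return schedule

# Two nurses, five infusions of varying length:
print(assign_infusions(["RN-A", "RN-B"], [240, 90, 60, 45, 30]))
```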
Osterman then spoke about regulatory updates from the FDA regarding predetermined change control plans (PCCPs) for medical devices, from the Office of the National Coordinator for Health Information Technology (ONC) regarding decision support interventions (DSIs), and from the Centers for Medicare and Medicaid Services (CMS) regarding prior authorization.
Regarding PCCPs, on which the FDA issued final guidance on August 18, 2025, the guidance aims to outline an approach for controlling algorithm drift, which occurs naturally as models use data from populations that change over time, without the need for subsequent FDA approval.3 The guidance helps developers set up "guardrails" on a model when submitting it for FDA approval, predefining change control parameters and preventing the need to seek additional approval every time a model is recalibrated.
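For illustration, the kinds of parameters a PCCP might lock in at submission time could be sketched as follows; this is a hypothetical configuration in the spirit of the guidance, not FDA-specified fields.

```python
# Hypothetical example (not FDA language) of predefined change-control
# parameters a PCCP might fix at submission, so routine recalibration
# stays within pre-approved "guardrails."
PCCP_GUARDRAILS = {
    "modification": "periodic recalibration on new site data",
    "locked": {
        "model_architecture": True,  # architecture may not change
        "intended_use": True,        # indication may not change
    },
    "performance_floor": {           # must hold after every update
        "sensitivity": 0.90,
        "specificity": 0.85,
    },
    "validation": "re-run the premarket test protocol on a held-out set",
    "rollback": "revert to prior weights if any floor is breached",
}
```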
For DSIs, which the ONC made a base electronic health record (EHR) certification requirement as of January 1, 2025, the category replaces clinical decision support and is divided into evidence-based DSIs, encompassing rules and guidelines, and predictive DSIs, which include regressions, NLP, and LLMs, among others. Evidence-based DSIs help clinicians better adhere to clinical guidelines, whereas predictive DSIs provide risk scores for making outcome predictions. The ONC also requires these DSIs to expose "source attributes," which are meant to provide transparency in the EHR about how a predictive score was calculated.
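As a hedged illustration of what that transparency looks like in practice, a predictive DSI's source attributes amount to structured metadata along these lines; the model, fields, and values shown are hypothetical samples, not the rule's full enumerated list.

```python
# Illustrative only: a sample of the kind of "source attribute"
# transparency metadata a certified EHR exposes for a predictive DSI.
# All values are hypothetical.
SOURCE_ATTRIBUTES = {
    "name": "30-day readmission risk (hypothetical model)",
    "developer": "Example Health AI, Inc.",
    "output": "risk score, 0-100",
    "intended_use": "flag patients for transition-of-care follow-up",
    "cautioned_out_of_scope_use": "not for denial of services",
    "input_variables": ["age", "prior admissions", "active diagnoses"],
    "training_data": "2018-2022 admissions at an example health system",
    "fairness_evaluation": "performance reported by race, sex, and age",
    "last_updated": "2025-01-01",
}
```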
Additionally, pertaining to prior authorization, the CMS is requiring Fast Healthcare Interoperability Resources (FHIR)-based application programming interfaces (APIs) for prior authorization starting in January 2026. These APIs are aimed at enabling patient applications to pull authorization data from their health care systems, mandating data exchange when patients transition between payers, and allowing health care system APIs to communicate directly with payers.
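As one illustration of how such an API is consumed, a patient-facing application might pull prior authorization decisions (FHIR ClaimResponse resources) from a payer's endpoint roughly as follows; the base URL, token, and patient ID are hypothetical, and real payer implementations vary in their authorization flows and supported search parameters.

```python
import requests

# Hypothetical FHIR server base URL and OAuth token; real endpoints
# and authorization flows vary by payer and health system.
FHIR_BASE = "https://fhir.example-payer.com/r4"
TOKEN = "<oauth-access-token>"

def get_prior_auth_decisions(patient_id: str) -> list[dict]:
    """Query a payer's FHIR API for prior authorization decisions
    (ClaimResponse resources) for one patient."""
    response = requests.get(
        f"{FHIR_BASE}/ClaimResponse",
        params={"patient": patient_id, "use": "preauthorization"},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()  # FHIR searches return a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for claim_response in get_prior_auth_decisions("example-patient-id"):
        print(claim_response.get("id"), claim_response.get("outcome"))
```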
Next, Osterman outlined current challenges in implementing AI into clinical practice, including AI reliance, discrepancies between state and federal regulations, and the "Medical Student Paradox." He first highlighted findings showing that concept retention diminished when essays were written with the help of LLMs compared with web search or unassisted writing. Although he noted that these results were broad, encompassing general essay writing, he reinforced the idea that AI reliance may have negative implications for clinical practice by sharing findings from a study published in The Lancet Gastroenterology & Hepatology.4
“Lancet Gastroenterology showed this study, which is relevant to the cancer space, because they took … [gastroenterologists] doing colonoscopies, and they looked at the adenoma detection rate for endoscopists who use AI to help assist the location of adenomas routinely in their colonoscopy,” he said in the presentation. “They turned the AI off, and they looked at, do the endoscopists do as well after we turn the AI off? They do worse when we turn the AI off, and they do worse than groups that do not use AI. They coined the term de-skilling: that the use of AI to augment our health care practice may decrease your skill set over time, because you become increasingly reliant on AI.”
However, Osterman suggested that these findings may not mean AI reliance is troublesome in all instances, pointing to clinical decision support (CDS), in which correcting mistakes, akin to spaced repetition, contrasts with a system that prevents human error altogether.
Additionally, Osterman spotlighted a piece of legislation that will go into effect in October 2025 in Maryland, which mandates that health plans can only use AI as an assistive tool.5 Furthermore, this legislation requires the use of individual data, a quarterly review of performance, as well as an openness to audit and inspection regarding AI use in health plans.
In highlighting this piece of state legislation, Osterman suggested that similar mandates applied at a federal level may assist utilization management nationwide, as well as prevent the potential for 2 AI agents to argue about prior authorization.
“There has to be a person in the loop,” he explained. “This [legislation] was signed by the governor [and it said] ‘an AI algorithm or other software tool may not deny, delay, or modify health care service.’ [This] should not be a state-by-state issue… We have at least alluded to this concept of [an] arms race, where 2 sides may be using AI. This will continue to evolve and become more common.”
Regarding the “Medical Student Paradox,” he disputed the claim that predictive DSIs with a diagnosis or treatment recommendation focus fall short of equivalency with a “great medical student.” First, he noted that regardless of the strength of these tools, licensing and malpractice risk still fall to the treating oncologist. Then, he suggested that the cognitive burden of assessing such recommendations may exceed the value of a model that shows high sensitivity but lacks an effective positive predictive value (a worked illustration follows this paragraph). Lastly, he covered the risk of balancing known unknowns with unknown unknowns: an AI system may help when a clinician is aware that they need help, but a clinician unaware of a need for help will not benefit from it.
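The positive-predictive-value point can be made concrete with illustrative numbers (these are not figures from the presentation): even a highly sensitive model generates mostly false alarms when the condition it flags is rare, and every alarm adds to the clinician's review burden.

```python
# Illustrative arithmetic: why a sensitive model can still have a low
# positive predictive value (PPV) at low prevalence.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A model with 95% sensitivity and 90% specificity flagging a condition
# present in 1% of patients yields a PPV under 9%: more than 10 false
# alarms for every true catch.
print(round(ppv(0.95, 0.90, 0.01), 3))  # ~0.088
```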
Lastly, Osterman covered policy opportunities for AI implementation in clinical practice, with proof of personhood, health information management, AI licensing, re-skilling, and data standards as topics for discussion.
Regarding proof of personhood, he spotlighted a piece of legislation in Denmark that would extend copyright law to give citizens the right to demand the removal of digital forgeries of themselves on social media. Relating it to clinical practice, he suggested that this concept of proof of personhood could be applied when making assurances that a human is involved in the loop of communication.
“This problem of proof of personhood continues to be a problem that will bubble up more, and it’s necessary that we solve this problem,” Osterman said in the presentation. “There’s a concept of 1 person, 1 digital identity, and ideally, we can implement that where there’s no ability for you to hold 2 different identities, and that a bot cannot easily impersonate that human identity.”
He then discussed considerations for health information management, asking which elements in EHRs are used to inform clinical decision-making. Listing a variety of examples, including AI transcriptions made during clinical visits and videos used to evaluate patient gaits, he suggested that national guidance is required to better inform standard practice. Another policy opportunity he covered was AI licensing. Specifically, he considered the extent to which AI could assume practice rights and privileges, highlighting tasks such as medication refills, flu/viral workups, and administrative tasks such as filing back-to-school letters, work letters, and Family and Medical Leave Act forms.
In contrast to de-skilling, Osterman suggested that AI could be used for re-skilling: tracking procedures and practice and, with that information, identifying knowledge gaps. He further explained that these systems could use that information to give practitioners tips that help them adhere to clinical standards in their practice. Lastly, he covered the need to adopt higher data standards, even considering the emergence of LLMs. He further highlighted the emergence of model context protocols (MCPs), which serve as an API for AI agents and permit patients to query their medical records through a standard that is more accessible to them.
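As a hedged sketch of that last idea, the official `mcp` Python SDK can expose a record-query tool to any MCP-compatible agent; the server, tool, and in-memory records below are hypothetical stand-ins for an authenticated EHR or FHIR backend.

```python
# A minimal sketch of an MCP server exposing a single (hypothetical)
# medical-record query tool, using the official `mcp` Python SDK
# (pip install "mcp[cli]"). A real deployment would sit behind
# authentication and an EHR or FHIR backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("patient-record")

# Hypothetical in-memory stand-in for an EHR backend.
FAKE_RECORDS = {
    "example-patient-id": ["2024-03-01: CT chest, no new lesions"],
}

@mcp.tool()
def list_recent_results(patient_id: str) -> list[str]:
    """Return recent results for a patient (illustrative only)."""
    return FAKE_RECORDS.get(patient_id, [])

if __name__ == "__main__":
    mcp.run()  # serves the tool to MCP-compatible AI agents
```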
“AI use cases in cancer are going to continue to expand,” Osterman concluded. “Continued policy and regulatory clarity will support clinicians [in treating] patients with cancer.”