Ted A. James, MD, MHCM, FACS, spoke about integrating artificial intelligence into oncology care.
The history of artificial intelligence (AI) in health care can be traced back to the 1970s, when expert systems were developed to assist physicians in decision-making processes.1 However, it was not until recent advances in machine learning, particularly deep learning, that AI began to show significant potential in various medical applications, such as disease diagnosis, drug discovery, and personalized treatment planning.2 As AI continues to evolve, its integration into health care is expected to revolutionize the way medical services are delivered, enabling more accurate diagnoses, personalized treatments, and improved patient outcomes, while addressing the challenges of increasing health care costs and aging populations.3
Although AI has shown considerable promise in various medical applications, it is still in a developmental stage, and its implementation in health care comes with risks and uncertainties. One major concern is the potential for AI systems to perpetuate or amplify biases present in the data used for training, leading to inaccurate or discriminatory outcomes.4 Additionally, the complexity of AI models can make it challenging to ensure transparency and interpretability, which are crucial for building trust and accountability in medical decision-making.5 Regulatory frameworks and ethical guidelines for the safe and responsible use of AI in health care are still evolving, and addressing these concerns will be essential for the successful integration of AI into clinical practice.6
In this article, Ted A. James, MD, MHCM, FACS, chief of breast surgical oncology at Beth Israel Deaconess Medical Center and associate professor of surgery at Harvard Medical School in Boston, Massachusetts, discusses AI’s current role in health care, weighs the potential risks and benefits of integrating this technology, and focuses on its applications in oncology.
James / AI holds remarkable promise in health care, with numerous pilot studies and test cases showcasing advantages that could transform patient care. However, despite my optimism and enthusiasm for AI, I would say that it is not fully ready for broad application. Several challenges remain, including enhancing AI algorithm accuracy, ensuring data privacy and security, and addressing clinical validation and regulatory considerations before AI can be widely deployed in frontline medicine. Efforts are under way to overcome these hurdles and prepare AI for general adoption.
James / Current applications of AI in health care range from diagnostic assistance to improving operational efficiencies. For example, AI systems are being used to monitor patients following hospital discharge to identify early signs of postoperative or posttreatment complications.
AI is increasingly used to support health care professionals by offering insights for better decision-making and predicting patient outcomes, including preventing potential health issues before they escalate. Several test cases are using AI to automate administrative tasks, alleviating the workload on physicians and allowing more direct face time with patients.
There are also groups exploring AI for drug discovery, which is very exciting. In these ways, AI is starting to improve our understanding and management of care; the applications are very wide-ranging.
James / This is an area that I’m very excited about as an oncologist, and I think the field is ripe for AI interventions, especially in precision medicine. Using AI to integrate tumor characteristics with a patient’s genetic profile to generate prognostic indicators could significantly outperform current prediction models.
AI also shows promise in risk assessment and predictive analytics, allowing us to proactively improve patient outcomes. There are also opportunities to use AI to enhance patient education and engagement.
James / I’m a strong advocate for oncologists exploring these opportunities within AI. Recognizing this technology as the future direction of medicine, the sooner we engage with AI, the more effectively we can guide its integration to benefit oncology practice and improve patient outcomes.
Some of the most impactful clinical applications involve personalizing treatment and streamlining administrative processes in practice. For example, AI can play a role in personalized patient care by identifying individuals at higher risk of treatment complications and enabling care plans tailored to specific patient characteristics.
On the administrative front, AI can help streamline operational workflows. AI is currently being used to predict which patients are most likely to be a no-show. It can then automatically contact these patients to confirm upcoming appointments and, if necessary, quickly fill any gaps by offering available slots to other patients. As oncologists become more familiar with these innovations, the collective experience and knowledge gained will help advance the field. I believe this will lead to better clinical practices and outcomes for patients.
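[Editor’s note: To illustrate the kind of no-show prediction workflow described above, the following is a minimal, hypothetical Python sketch. The feature names, toy data, risk threshold, and reminder step are illustrative assumptions and do not describe any specific clinical or commercial system.]

```python
# Hypothetical sketch: scoring appointment no-show risk and flagging
# high-risk patients for automated confirmation outreach.
# All features, data values, and the 0.5 threshold are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy historical appointment data (in practice, drawn from scheduling/EHR records).
history = pd.DataFrame({
    "days_since_booking": [3, 30, 7, 45, 2, 60, 14, 21],
    "prior_no_shows":     [0, 2, 0, 3, 0, 1, 0, 2],
    "travel_distance_km": [5, 40, 8, 55, 3, 70, 12, 30],
    "no_show":            [0, 1, 0, 1, 0, 1, 0, 1],  # outcome label
})

X = history.drop(columns="no_show")
y = history["no_show"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Simple baseline model; a deployed system would require far more data and validation.
model = LogisticRegression().fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

# Score upcoming appointments and flag high-risk patients for outreach.
upcoming = pd.DataFrame({
    "days_since_booking": [28, 4],
    "prior_no_shows":     [2, 0],
    "travel_distance_km": [35, 6],
})
risk = model.predict_proba(upcoming)[:, 1]
for patient_idx, p in enumerate(risk):
    if p > 0.5:  # illustrative cutoff; a real workflow would be clinically validated
        print(f"Patient {patient_idx}: no-show risk {p:.2f} -> send confirmation reminder")
```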
James / One of the challenges with AI in health care is its accuracy. For clinicians to trust AI, they need transparency about how these tools function, supported by validation studies and peer-reviewed research. Explainable AI, which allows us to understand how conclusions are drawn and what data are used, is important in building this trust. Like any medical technology, trust in AI will be built on rigorous testing, reliable data, and adherence to regulatory standards.
James / Cybersecurity breaches are a significant concern. An emerging threat in this area is the medical deepfake, in which AI generates false medical information and integrates it into digital patient records; for example, AI could be used to alter diagnostic imaging or laboratory results. The potential alteration or falsification of data has serious implications for patient safety and goes beyond typical concerns about privacy breaches.
AI also has a few inherent problems that need to be addressed. The possibility of AI generating fictitious information, or “AI hallucinations,” is a recognized pitfall. We need safeguards to prevent the spread of inaccurate data. Another pressing issue is AI’s potential to perpetuate existing societal biases. Without deliberate efforts to identify and correct these biases, AI systems may inadvertently replicate them in health care settings. Finally, there is the broader risk of dehumanizing patient care if AI is not implemented thoughtfully and with sensitivity. We want to avoid diminishing the personal aspects of patient care.
James / I think clinicians should discuss the capabilities and limitations of AI honestly and openly with their patients. It is important not to oversell or undersell the technology. AI has strengths and weaknesses, and we should be transparent about that. It’s also important to emphasize that AI tools are a complement to, not a replacement for, human clinical judgment. People are inevitably going to turn to AI for information and self-management, but I do not think that we should necessarily be antagonistic about that. Although there are valid concerns about patients using AI directly for self-care, with proper safeguards and validation, AI could become a digital extension of the clinical workforce, reaching patients in ways that the current human clinical workforce cannot on its own.
Again, the more involved we are in the development of this technology, the better positioned we’ll be to ensure that patients have access to credible and reliable information through AI.
James / The true power of AI in oncology, and medicine in general, comes from leveraging large medical databases to enhance diagnostic precision and refine learning algorithms.7 For example, Google’s Med-PaLM 2 is a large language model designed specifically for medical research and care.8 It has achieved passing-level performance on United States Medical Licensing Examination–style questions. In the near future, I think we can expect expert-level responses from AI as it learns from accurate data.
Another project I’m aware of is I3LUNG, which showcases AI’s ability to use big data to tailor cancer treatments.9 The project focuses on non–small cell lung cancer (NSCLC) and aims to personalize care and enhance outcomes by integrating multiomics data.
James / The most promising areas, in my opinion, lie in precision medicine, where AI could tailor treatments to individual genetic profiles. I’m fascinated by the idea of using AI to customize treatments based on a person’s unique genetic makeup. It has the potential to transform how we approach disease management and therapy. This move toward personalized medicine is something I see having the potential to improve treatment outcomes significantly.
AI could also have a significant impact on patient engagement and self-management. By utilizing AI tools, patients can take a more active role in their health care, which can lead to better health outcomes.
If done properly, AI could help us overcome current challenges and introduce innovative solutions for disease treatment, prevention, and management. Integrating AI into medicine could be a defining moment in the evolution of health care.
James / Addressing who bears responsibility when complications or harm occurs due to AI systems in health care is complex. It’s likely there will be shared accountability.
Technology developers need to ensure their AI systems undergo appropriate testing and validation. Health care organizations that use these technologies have the responsibility of implementing cybersecurity measures along with all of the checks and balances associated with introducing a new technology. Physicians using AI will have to exercise due diligence, following guidelines and best practices for using this technology responsibly. Patients also play a role in accountability through informed engagement, using AI tools in conjunction with professional medical advice, and being careful about the security of their personal health data.
Hopefully, this process of shared accountability will mitigate risks and safeguard against undue harm. It can also promote greater collaboration regarding the safe and effective use of AI in health care.
Activity
This activity was written by PER® editorial staff under faculty guidance and review. The Q&A portion of the activity was transcribed from a recorded interview with the faculty and edited by faculty and PER® editorial staff for clarity.
Release Date: May 1, 2024
Expiration Date: May 1, 2025
Accreditation/Credit Designation
Physicians’ Education Resource®, LLC, is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.
Physicians’ Education Resource®, LLC, designates this enduring material for a maximum of 0.5 AMA PRA Category 1 Credits™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.
Activity Overview
This continuing medical education (CME) activity provides expert insight regarding the applications of artificial intelligence (AI) in the oncology field. This program discusses the risks, benefits, and impact of AI in health care.
Acknowledgment of Support
This activity is funded by PER®.
Instructions for Participation/How to Receive Credit
Complete the activity (including pre- and post-activity assessments).
Answer the evaluation questions.
Request credit using the drop-down menu.
You may immediately download your certificate.
Learning Objectives
Upon successful completion of this activity, you should be better prepared to:
Evaluate the potential risks and benefits of incorporating AI technology in health care, considering factors such as patient safety, data privacy, and ethical implications.
Analyze the potential applications of AI in the oncology setting, including areas such as diagnosis, treatment planning, and patient monitoring.
Understand the key factors that contribute to clinicians’ confidence in the accuracy and reliability of AI tools.