Exploring the Benefits and Risks of AI in Oncology

Article
ONCOLOGY, Vol 38, Issue 5, Pages 214-217

Ted A. James, MD, MHCM, FACS, spoke about integrating artificial intelligence into oncology care.

Ted A. James, MD, MHCM, FACS
Associate Professor of Surgery, Harvard Medical School
Chief, Breast Surgical Oncology, BIDMC
Co-Director, BIDMC BreastCare Center
Vice Chair, Academic Affairs, Department of Surgery
Beth Israel Deaconess Medical Center
Boston, MA


The history of artificial intelligence (AI) in health care can be traced back to the 1970s, when expert systems were developed to assist physicians in decision-making.1 However, it was not until the recent advancements in machine learning, particularly deep learning, that AI began to show significant potential in various medical applications, such as disease diagnosis, drug discovery, and personalized treatment planning.2 As AI continues to evolve, its integration into health care is expected to revolutionize the way medical services are delivered, enabling more accurate diagnoses, personalized treatments, and improved patient outcomes, while addressing the challenges of increasing health care costs and aging populations.3

While AI has shown promising potential in various medical applications, it is still in a developmental stage, and its implementation in health care comes with risks and uncertainties. One major concern is the potential for AI systems to perpetuate or amplify biases present in the data used for training, leading to inaccurate or discriminatory outcomes.4 Additionally, the complexity of AI models can make it challenging to ensure transparency and interpretability, which are crucial for building trust and accountability in medical decision-making.5 Regulatory frameworks and ethical guidelines for the safe and responsible use of AI in health care are still evolving, and addressing these concerns will be crucial for the successful integration of AI into clinical practice.6

In this article, Ted A. James, MD, MHCM, FACS, chief, breast surgical oncology at Beth Israel Deaconess Medical Center and associate professor of surgery at Harvard Medical School in Boston, Massachusetts, discusses AI’s current role in health care, weighs the potential risks and benefits of integrating this technology, and focuses on applications in oncology.

Q / Is AI ready for widespread use in health care? If not, what key advancements need to occur before AI can be used in frontline medical settings?

James / AI holds remarkable promise in health care, with numerous pilot studies and test cases showcasing benefits that could transform patient care. However, despite my optimism and enthusiasm for AI, I would say that it is not fully ready for broad application. Several challenges remain before AI can be widely deployed in frontline medicine, including enhancing algorithm accuracy, ensuring data privacy and security, and addressing clinical validation and regulatory considerations. Efforts are under way to overcome these hurdles and prepare AI for general adoption.

Q / Can you describe current AI applications in health care and highlight areas of valuable utility?

James / Current applications of AI in health care range from diagnostic assistance to improving operational efficiencies. For example, AI systems are being used to monitor patients following hospital discharge to identify early signs of postoperative or posttreatment complications.

AI is increasingly used to support health care professionals by offering insights for better decision-making, predicting patient outcomes, and helping prevent potential health issues before they escalate. Several test cases are using AI to automate administrative tasks, alleviating the workload on physicians and allowing more direct face time with patients.

There are also groups exploring AI for drug discovery, which is very exciting. In these ways, AI is starting to improve our understanding and management of care; the applications are very wide-ranging.

Q / Within oncology, what clinical scenarios show promise for impactful AI intervention and decision support?

James / This is an area that I’m very excited about as an oncologist, and I think the field is ripe for AI interventions, especially in precision medicine. AI models that integrate tumor characteristics with a patient’s genetic profile to generate prognostic indicators could significantly outperform current prediction models.
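
For readers who want a concrete picture of what such a prognostic model could look like, the following is a minimal sketch in Python using entirely synthetic data; the feature names (tumor size, grade, nodal status, a gene signature score) and the outcome are hypothetical placeholders, not a validated clinical tool.

```python
# Illustrative sketch only: a prognostic classifier combining tumor
# characteristics with a (synthetic) genomic risk score. All features,
# data, and the outcome are fabricated for demonstration purposes.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "tumor_size_mm": rng.normal(22, 8, n).clip(1),
    "grade": rng.integers(1, 4, n),
    "node_positive": rng.integers(0, 2, n),
    "gene_signature_score": rng.normal(0, 1, n),  # hypothetical expression-based score
})
# Synthetic outcome loosely tied to the features (demonstration only)
logit = (0.04 * data["tumor_size_mm"] + 0.5 * data["grade"]
         + 0.8 * data["node_positive"] + 0.6 * data["gene_signature_score"] - 2.5)
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    data, outcome, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```

In a real setting, the same pattern would be applied to curated clinical and genomic data and would require external validation before any clinical use.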

AI also shows promise in risk assessment and predictive analytics, allowing us to proactively improve patient outcomes. There are also opportunities to use AI to enhance patient education and engagement.

Q / At its current state, should oncologists explore avenues to pilot and operationalize AI tools in their practice? If so, what specific clinical uses or workflows could benefit most?

James / I’m a strong advocate for oncologists exploring these opportunities within AI. This technology represents the future direction of medicine, and the sooner we engage with it, the more effectively we can guide its integration to benefit oncology practice and improve patient outcomes.

Some of the most impactful clinical applications involve personalized treatment and streamlining administrative processes in practice. For example, AI can play a role in personalized patient care by identifying individuals at higher risk of treatment complications or by enabling care plans tailored to specific patient characteristics.

On the administrative front, AI can help streamline operational workflows. AI is currently being used to predict which patients are most likely to be a no-show. It can then automatically contact these patients to confirm upcoming appointments and, if necessary, quickly fill any gaps by offering available slots to other patients. As oncologists become more familiar with these innovations, the collective experience and knowledge gained will help advance the field. I believe this will lead to better clinical practices and outcomes for patients.
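
As an illustration of the kind of no-show workflow described above, here is a minimal sketch on synthetic data: train a classifier on historical appointments, then flag upcoming appointments above a risk threshold for an automated reminder. The feature names, threshold, and reminder step are assumptions for demonstration only.

```python
# Minimal sketch of a no-show prediction workflow on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
history = pd.DataFrame({
    "lead_time_days": rng.integers(1, 60, n),
    "prior_no_shows": rng.poisson(0.5, n),
    "travel_distance_km": rng.gamma(2.0, 10.0, n),
})
# Synthetic label: longer lead times and prior no-shows raise no-show risk
p = 1 / (1 + np.exp(-(0.03 * history["lead_time_days"]
                      + 0.9 * history["prior_no_shows"] - 2.0)))
history["no_show"] = rng.random(n) < p

model = LogisticRegression(max_iter=1000).fit(
    history.drop(columns="no_show"), history["no_show"])

# Score two hypothetical upcoming appointments
upcoming = pd.DataFrame({
    "lead_time_days": [45, 3],
    "prior_no_shows": [2, 0],
    "travel_distance_km": [30.0, 5.0],
})
risk = model.predict_proba(upcoming)[:, 1]
for patient, r in zip(["patient_A", "patient_B"], risk):
    if r > 0.5:  # hypothetical threshold for triggering outreach
        print(f"{patient}: no-show risk {r:.2f} -> send confirmation reminder")
```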

Q / How can clinicians develop confidence in the accuracy and reliability of AI-powered tools? What factors or safeguards allow AI outputs to be deemed trustworthy?

James / One of the challenges with AI in health care is its accuracy. For clinicians to trust AI, they need transparency about how these tools function, supported by validation studies and peer-reviewed research. Explainable AI, which allows us to understand how conclusions are drawn and what data are used, is important in building this trust. Like any medical technology, trust in AI will be built on rigorous testing, reliable data, and adherence to regulatory standards.
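
One widely used explainability technique is permutation importance, which estimates how much each input feature contributes to a fitted model’s performance by shuffling that feature and measuring the drop in score. The sketch below uses synthetic data and placeholder feature names purely to show the mechanics.

```python
# Minimal sketch of permutation importance on a synthetic classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=5, n_informative=3,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholders

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda item: -item[1]):
    print(f"{name}: {importance:.3f}")
```

Reports like this, alongside validation studies, are one practical way to give clinicians visibility into what is driving a model’s outputs.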

Q / Please outline the potential pitfalls of AI, such as vulnerabilities and privacy concerns.

James / Cybersecurity breaches are a significant concern. An emerging threat in this area is the medical deepfake, in which AI generates false medical information and integrates it into digital patient records. AI could modify diagnostic imaging tests or lab results, and the potential alteration or falsification of data has serious implications for patient safety. This goes beyond the typical concerns over privacy breaches.

AI also has a few inherent problems that need to be addressed. The possibility of AI generating fictitious information, or “AI hallucinations,” is a recognized pitfall, and we need safeguards to prevent the spread of inaccurate data. Another pressing issue is AI’s potential to perpetuate existing societal biases. Without deliberate efforts to identify and correct these biases, AI systems may inadvertently replicate them in health care settings. Finally, there is the broader risk of dehumanizing patient care if AI is not implemented thoughtfully and with sensitivity. We want to avoid diminishing the personal aspects of patient care.
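
One practical way to look for the kind of bias described above is to audit a model’s error rates across patient subgroups. The sketch below uses synthetic data and a hypothetical subgroup label purely to illustrate the mechanics of a stratified audit, not a real clinical analysis.

```python
# Minimal sketch of a bias audit: compare error rates across subgroups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                  # hypothetical subgroup label
X = rng.normal(size=(n, 4))
# Synthetic outcome with a group-dependent shift so a disparity is visible
y = (X[:, 0] + 0.5 * X[:, 1] + 0.4 * group + rng.normal(0, 1, n)) > 0.5

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y[mask], pred[mask],
                                      labels=[False, True]).ravel()
    fnr = fn / (fn + tp)   # missed cases, by subgroup
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```

A gap in false-negative rates between groups is the kind of signal that should prompt deliberate correction before deployment.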

Q / What strategies would you recommend for clinicians to effectively communicate about AI capabilities and limitations to patients? How should providers address situations where patients have independently used AI for self-diagnosis or treatment guidance?

James / I think clinicians should discuss the capabilities and limitations of AI honestly and openly with their patients. It is important not to oversell or undersell the technology. AI has strengths and weaknesses, and we should be transparent about that. It’s also important to emphasize that AI tools are a complement to, not a replacement for, human clinical judgment. People are inevitably going to turn to AI for information and self-management, and I do not think we should necessarily be antagonistic about that. Although there are valid concerns about patients using AI directly for self-care, with proper safeguards and validation, AI could become a digital extension of the clinical workforce, reaching patients in ways that the current human clinical workforce cannot on its own.

Again, the more involved we are in the development of this technology, the better positioned we’ll be to guarantee that patients have access to credible and reliable information through AI.

Q / How might the integration of medical database information with machine learning models unlock new potential and enhance the capabilities of AI in health care applications?

James / The true power of AI in oncology, and medicine in general, comes from leveraging large medical databases to enhance diagnostic precision and learning algorithms.7 For example, Google’s Med-PaLM 2 is a large language model designed specifically for medical research and care.8 It has performed at an expert level on questions from the United States Medical Licensing Examination. In the near future, I think we can expect expert-level responses from AI when it learns from accurate data.

Another project I’m aware of is I3LUNG, which showcases AI’s ability to use big data to tailor cancer treatments.9 The project focuses on non–small cell lung cancer (NSCLC) and aims to personalize care and enhance outcomes by integrating multiomics data.

Q / Looking ahead, what are the most promising areas or clinical domains where AI could have a transformative impact within health care?

James / The most promising areas, in my opinion, lie in precision medicine, where AI could tailor treatments to individual genetic profiles. I’m fascinated by the idea of using AI to customize treatments based on a person’s unique genetic makeup. It has the potential to transform how we approach disease management and therapy. I see this move toward personalized medicine as having the potential to improve treatment outcomes significantly.

AI could also have a significant impact on patient engagement and self-management. By utilizing AI tools, patients can take a more active role in their health care, which can lead to better health outcomes.

If done properly, AI could help us overcome current challenges and introduce innovative solutions for disease treatment, prevention, and management. Integrating AI into medicine could be a defining moment in the evolution of health care.

Q / Given the litigious nature of the medical field and the potential for AI systems to be infiltrated with malicious information or make errors that negatively impact patient care, who should bear responsibility when something goes wrong due to an AI system being used by health care providers and professionals? How might assigning responsibility shape the future adoption and use of AI in medicine?

James / Addressing who bears responsibility when complications or harm occurs due to AI systems in health care is complex. It’s likely there will be shared accountability.

Technology developers need to ensure their AI systems undergo appropriate testing and validation. Health care organizations that use these technologies have the responsibility of implementing cybersecurity measures, along with all of the checks and balances associated with introducing a new technology. Physicians using AI will have to exercise due diligence, following guidelines and best practices for using this technology responsibly. Patients also play a role in accountability through informed engagement, using AI tools in conjunction with professional medical advice, and being careful about the security of their personal health data.

Hopefully, this process of shared accountability will mitigate risks and safeguard against undue harm. It can also promote greater collaboration regarding the safe and effective use of AI in health care.


References

  1. Kulikowski CA. Beginnings of Artificial Intelligence in Medicine (AIM): computational artifice assisting scientific inquiry and clinical art - with reflections on present AIM challenges. Yearb Med Inform. 2019;28(1):249-256. doi:10.1055/s-0039-1677895
  2. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. doi:10.1038/nature21056
  3. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2(10):719-731. doi:10.1038/s41551-018-0305-z
  4. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
  5. Arrieta AB, Diaz-Rodriguez N, Ser JD, et al. Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion. 2020;58:82-115. doi:10.1016/j.inffus.2019.12.012
  6. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7
  7. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature. 2023;620(7972):172-180. doi:10.1038/s41586-023-06291-2
  8. DePeau-Wilson M. Google AI performs at ‘expert’ level on U.S. Medical Licensing Exam. Medpage Today. March 14, 2023. Accessed April 10, 2024. https://www.medpagetoday.com/special-reports/exclusives/103522
  9. Prelaj A, Ganzinelli M, Trovo F, et al. The EU-funded I(3)LUNG project: integrative science, intelligent data platform for individualized LUNG cancer care with immunotherapy. Clin Lung Cancer. 2023;24(4):381-387. doi:10.1016/j.cllc.2023.02.005

Activity

This activity was written by PER® editorial staff under faculty guidance and review. The Q&A portion of the activity was transcribed from a recorded interview with the faculty and edited by faculty and PER® editorial staff for clarity.

Release Date: May 1, 2024

Expiration Date: May 1, 2025


Accreditation/Credit Designation

Physicians’ Education Resource®, LLC, is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.


Physicians’ Education Resource®, LLC, designates this enduring material for a maximum of 0.5 AMA PRA Category 1 Credits™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.


Activity Overview
This continuing medical education (CME) activity provides expert insight regarding the applications of artificial intelligence (AI) in the oncology field. This program discusses the risks, benefits, and impact of AI in health care.



Acknowledgment of Support

This activity is funded by PER®.


Instructions for Participation/How to Receive Credit

Complete the activity (including pre- and post-activity assessments).

Answer the evaluation questions.

Request credit using the drop-down menu.


You may immediately download your certificate.




Learning Objectives

Upon successful completion of this activity, you should be better prepared to:

Evaluate the potential risks and benefits of incorporating AI technology in health care, considering factors such as patient safety, data privacy, and ethical implications.

Analyze the potential applications of AI in the oncology setting, including areas such as diagnosis, treatment planning, and patient monitoring.

Understand the key factors that contribute to clinicians’ confidence in the accuracy and reliability of AI tools.


