Looking Beyond Genomics to Advance Precision Medicine in Cancer Care

Fact checked by Roman Fabbricatore

Pathologists should try to educate oncologists about the sensitivity and specificity of assays to help optimize care plans, said David Rimm, MD, PhD.

David Rimm, MD, PhD

Yale University School of Medicine

It may be time for precision medicine to expand beyond the genomics realm and enter the protein space in the management of melanoma and other cancers, according to David Rimm, MD, PhD.

In a conversation with CancerNetwork®, Rimm highlighted some of his key initiatives in quantitative pathology that may have implications for elevating personalized cancer care. Of note, he discussed the results of a study published in JAMA Network Open showing that a machine learning algorithm can assist with the quantification of tumor-infiltrating lymphocytes (TILs) in melanoma.

Looking ahead, Rimm offered his perspective on how the field should evolve regarding the use of artificial intelligence (AI)–based tools, novel assays that may assist with the sequencing of antibody drug conjugates (ADCs), and collaborative efforts between cancer pathologists and medical oncologists.

“We need to look beyond genomic tests for companion diagnostics and for precision medicine,” Rimm stated. “There are methods now that can [do this]; the technology has improved so much over the last 30 years that we can now measure biomarkers in tissue just like we do in blood. As [such], we can have a limit of detection, a limit of quantification, and a reproducible assay that has the precision and accuracy of a blood-based assay, even though it’s based on tissue.”

Rimm is the Anthony N. Brady Professor of Pathology and a professor of Medicine (Medical Oncology) at Yale University School of Medicine.

CancerNetwork: You were an author of a study published in JAMA Network Open assessing a machine learning algorithm for TIL quantification in melanoma. What was the rationale for developing this algorithm, and what were the most significant findings to come from this study?

Rimm: The rationale for developing the algorithm was that it is important to know whether a patient with melanoma is going to progress or what therapy they need. We have known from the literature that when you estimate the number of lymphocytes that infiltrate the tumor, the more lymphocytes, the more likely the patient is to be cured by the current therapeutic approaches. The way we do that now is to have a pathologist look at a slide, estimate by judgment how many lymphocytes there are, and assign a semi-quantitative score. That depends on the pathologist; it does not have [much] precision.

Maybe some pathologists are accurate, and other pathologists are less accurate. But the fact is that if you have multiple pathologists read the same section, you will not get the same answer. That’s the driving force behind it: to increase the precision in precision medicine, the precision of the pathologist making that diagnosis. Now that AI and image analysis give us tools that can find these cells and count them more accurately than we could before, we can do something that has not been done in the past. That’s what we did.

The reason [the algorithm] got into JAMA Network Open is because we did a lot of [work on] it. We had [many] pathologists, we looked at a lot of reproducibility, and we had numerous non-pathologists operating a machine. [The study] showed that they had better precision than the pathologists who were reading the slide. We make the distinction that reading is what pathologists do, giving an expert opinion, while measurement is what machines do, giving a numeric answer that is limited only by the accuracy and reproducibility of the machine.

The main conclusion moving forward is that the machines are more precise than the humans, but both the humans and the machines could be accurate. We did not try to compare accuracy because we did not have a big enough cohort for that. [The study] was retrospective; the tissues that we used were all retrospectively collected. To compare accuracy, we need to do a prospective trial, but we would not want to begin a prospective trial if we did not know if our machine worked or not, so we had to check our precision before we approached accuracy, which is our next step.
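The precision comparison Rimm describes can be illustrated with a toy calculation: for a single slide, the spread of tumor-infiltrating lymphocyte scores across readers (or across repeated machine runs) can be summarized with a coefficient of variation. All numbers below are hypothetical, invented purely for illustration, and the `reader_cv` helper is not part of the published study.

```python
import statistics

def reader_cv(scores):
    """Coefficient of variation (%) across readers for one slide:
    a lower CV means higher precision (closer agreement)."""
    return 100 * statistics.stdev(scores) / statistics.mean(scores)

# Hypothetical TIL percentages for one melanoma slide.
pathologists = [10, 25, 15, 40, 20]   # visual estimates vary widely
machine_runs = [18, 19, 18, 20, 19]   # repeated algorithm runs agree closely

print(f"pathologist CV: {reader_cv(pathologists):.1f}%")  # ~52%
print(f"machine CV:     {reader_cv(machine_runs):.1f}%")  # ~4%
```

Note that this toy metric only captures precision (agreement), not accuracy: as Rimm says, both readers and machines could still be accurate, and only a prospective study can settle that.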

With the results of this study in mind, what other roles do you foresee AI playing in cancer research and/or management in the foreseeable future?

Everyone is enthusiastic about AI being able to do everything; I think that is an overstatement. However, AI can do some things well, and those things will save pathologists effort. There is no reason for a pathologist to look at normal tissue and say that it’s normal if a machine can do that as well and show anything that might not be normal to the pathologist. Then, the pathologist makes the diagnosis of abnormal. That is an application of AI that has already been FDA approved and is likely to see increased usage because it saves pathologists work, money, and time. All those things are [quite] valuable.

There are other things that people are working on AI for; for example, using AI-based image assessment to determine genomic alterations. Some of those might work, and some do not. That is an area that is much less likely to make it into the clinic in the near term because not all of them work. It might be that you can detect a BRAF mutation with 95% sensitivity and specificity, but a TP53 mutation can only be assessed with 60% sensitivity and specificity. You are not going to replace sequencing with AI image analysis if it can only do 60%, and even 95% might not be good enough in some cases, although 95% to 98% is usually the threshold for making it into the clinic. Time will tell whether that AI application pans out.

AI has also been applied in many other places, and it will probably take on other roles in medicine and radiology, such as transcribing notes for clinicians. I know when I went to visit my doctor, we were talking about some symptom, and he said, “I do not think those 2 are related. Let’s check.” He went to this evidence-based AI tool that was right there, and he asked, “Is there any association between this symptom and this drug?” It could quickly check all the literature using AI, and it found none. That is the use of AI by a PCP, a primary care physician. That kind of AI will be common if it is not already.

Can you tell me about your work on the development of a standardized assay for ADC therapies? What kind of role might this assay play in the treatment decision-making process?

The ADCs have become a popular approach. There are close to 700 or 800 clinical trials currently active with ADCs, and I have heard there are over 250 different targets. When that happens, you want to use precision medicine in that application as well. Precision medicine, mostly up to this point, has meant looking for mutations that patients have and then looking for drugs that take advantage of the vulnerability due to the mutation.

I would argue that [it is] time for precision medicine to move into the protein space and assess the levels, or the comparative levels, of 2 protein targets, both of which have a conjugated drug, when you do not know which one matches the patient’s tumor. The whole idea behind our new assay that we call Troplex™—and a pipeline of assays that we are currently working on in the lab—is that there are some ADCs that [receive approval] in a given setting, and then in that same setting there is a second, third, or fourth ADC. How do you decide which of the ADCs to use first?

In second-line and later breast cancer, you can either give an ADC that targets TROP2, or you can give an ADC that targets HER2. HER2 and TROP2 are 2 different tags that are not related—as far as we know—but they are transmembrane proteins that present a tag to which you can direct an ADC, where an antibody is conjugated to a payload. In some cases, the TROP2 antibody is conjugated to a topoisomerase inhibitor, which is also the case with HER2. In either case, how do you decide as the oncologist? There is no way to decide other than comparing trials, which is against the rules for oncologists. Interestingly, I have tried to compare the trials. The control group in one trial does better than the treatment group in the other trial. That is evidence that you should not compare trials.

I would argue that the best way to decide which drug to give the patient first is to do an assay like the one we have designed that can accurately measure the level of each target. Historically, we have done target assessment by a technology called immunohistochemistry [IHC], and we have read it; like I said before, reading is a subjective pathologist opinion that may vary between pathologists depending on the robustness of the assay. Measuring is different; historically, [there has been] no way to measure on slides that was completely quantitative.

But we have developed a way that is close to completely quantitative and close to completely objective, where we use a standardized set of cell lines with known attomoles per square millimeter of protein. Then, we assess the signal of those cell lines in the same assay run. That allows us to assess the level of that target in that patient’s tumor. If you know how much TROP2 there is and how much HER2 there is, whichever is greater [determines] the drug the patient should get first.
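As a sketch of how a calibrated slide assay could translate signal into a treatment choice, the snippet below fits a simple standard curve from cell lines with known target density and converts patient-tumor signals back to attomoles per square millimeter. Every number here, and the `signal_to_amount` helper, is hypothetical; this is an illustration of the general standard-curve idea, not the actual Troplex™ method.

```python
# Hypothetical calibration standards: (known attomoles/mm^2, measured signal)
# from cell lines run in the same assay as the patient sample.
standards = [(5.0, 110.0), (20.0, 430.0), (80.0, 1700.0)]

# Least-squares fit of signal = slope * amount + intercept.
n = len(standards)
mx = sum(a for a, _ in standards) / n
my = sum(s for _, s in standards) / n
slope = sum((a - mx) * (s - my) for a, s in standards) / sum((a - mx) ** 2 for a, _ in standards)
intercept = my - slope * mx

def signal_to_amount(signal):
    """Convert a measured signal back to attomoles/mm^2 via the standard curve."""
    return (signal - intercept) / slope

# Hypothetical patient-tumor signals measured in the same assay run.
targets = {"TROP2": signal_to_amount(900.0), "HER2": signal_to_amount(260.0)}
first_choice = max(targets, key=targets.get)
print(first_choice)  # prints "TROP2": the target with the higher measured level
```

The decision rule at the end mirrors the logic in the interview: whichever target is present at the greater level determines which ADC to try first.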

It is taking precision medicine out of the genomics realm and into the protein realm. There is no prospective evidence that this will work, but there is no evidence [that it will not]. It is an evidence-free zone, as they say; there is no evidence for which drug you should get first, but there is evidence in the literature that it makes a difference. That is, patients almost always do better on the first drug you give them than the second drug. If you want the patient to do better overall, you should use your best drug first. You should use the drug that best matches the patient’s tumor as assessed by a measurement of the tag or the target.

What other work on tissue biomarker research holds the most promise in melanoma, lung cancer, breast cancer, or other patient populations? How might this research help improve patient outcomes in the future?

One of the big clinical questions is, “Who should get which drug?” This has been a question for as long as drugs have existed. As we have gotten better at precision medicine, we have [developed] more tools to help us predict responses to drugs....We have had a good run in DNA-based diagnostics, and we are still improving them. But it turns out that drug response is not always driven by just 1 mutation. One mutation does not always direct us to which patient should get which drug. That is where we are now: we need to think beyond DNA to the other biomolecules out there that could be leveraged to help select the right drug for the right patient.

My particular interest has always been protein-based methods, but there are methods that rely on circulating DNA, whether bound or free, and methods based on RNA expression profiles. There are a lot of other methods out there that we will be watching closely over the next 5 to 10 years that may change the way we prescribe drugs, so that we do not give the same drug to everybody but rather give the best drug to the right patient.

How can pathologists more effectively collaborate with medical oncologists and those from other departments to optimize the quality of care for different cancer populations?

I would argue that we are already collaborating quite well. Our tumor boards always have a pathologist and an oncologist working together to provide the optimal therapy for the patient. But one of the roles that pathologists often play, and could do a better job of, is explaining the science behind the assay. Many oncologists read the FDA approvals and say, “Okay, that must be a good assay.” Unfortunately, FDA approvals are not always that great. Some are really good, and some are pretty weak. The pathologist can help the oncologist understand the robustness of the assay so that when they make their decision about a patient, they might say, “Oh, this is a great assay, so go ahead and do it.” Or they might say, “This assay is pretty lame,” and want to try other assays or consider other clinical factors.

There are a lot of judgments that oncologists make that are not strictly based on objective facts. When an assay is done, it is sometimes considered an objective fact. Sometimes it is: an EGFR mutation assay is 99.9% accurate and precise, but PD-L1 expression by IHC is probably about 50% or 60% accurate and precise. As pathologists, when we work with oncologists, we try to educate them about the sensitivity and specificity of the assay to help them use that information to make the best decision for their patients.

Reference

Aung TN, Liu M, Su D, et al. Pathologist-read vs AI-driven assessment of tumor-infiltrating lymphocytes in melanoma. JAMA Netw Open. 2025;8(7):e2518906. doi:10.1001/jamanetworkopen.2025.18906
