An analysis of 164 phase III breast cancer trials shows bias in the reporting of primary endpoint and toxicity results, according to a recent study.
Of the trials, which were published between 1995 and 2011, 33% exhibited biased reporting of the prespecified primary endpoint and 67% showed bias in reporting toxicity rates.
“Spin and bias,” according to the research, are found in a high proportion of phase III breast cancer publications across a wide range of peer-reviewed medical and scientific journals.
Ian F. Tannock, MD, PhD, and colleagues from the Princess Margaret Hospital and the University of Toronto in Canada found that the primary endpoint was more likely to be mentioned in the concluding statement of the study abstract if the results favored the experimental arm over the control arm. When the primary endpoint was negative, a majority (52%) of the trials suggested clinical benefit by focusing on secondary endpoint results. “Spin was used frequently to influence, positively, the interpretation of negative trials, by emphasizing the apparent benefit of a secondary endpoint,” the authors state.
Bias was also reflected in the reporting of treatment-related toxicity events.
Only a small number of the trial reports listed the frequency of high-grade toxicities; just 32% mentioned high-grade toxicities in the abstract. When the primary endpoint of a study was met, the publication was more likely to underreport toxicity events overall. The results are published in the Annals of Oncology.
Bias was defined by the authors as “inappropriate reporting of the primary endpoint and toxicity, with emphasis on reporting of these outcomes in the abstract.” Spin was defined as “the use of words in the concluding statement of the abstract to suggest that a trial with a negative primary endpoint was positive based on some apparent benefits shown in one or more secondary endpoints.”
Only two-thirds of the publications analyzed were funded by industry; one-quarter were funded by government or academic sources. The funding source for the rest was not reported. Interestingly, the analysis found no link between industry-sponsored trials and biased reporting of either efficacy or toxicity.
Tannock and his coauthors urge clinicians, reviewers, regulators, and journal editors to be aware of such bias, and call for guidelines to improve the unbiased reporting of both efficacy and toxicity in clinical trials.