Turner 2008 Selective publication of antidepressant trials and its influence on apparent efficacy.


Altostrata


Published effect sizes have overstated the efficacy of individual antidepressants by 11% to 69% (32% overall); this inflation should be factored into any risk-benefit analysis. This paper has been cited by 329 other publications.

 

N Engl J Med. 2008 Jan 17;358(3):252-60.

Selective publication of antidepressant trials and its influence on apparent efficacy.

Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R.

 

Source

 

Department of Psychiatry, Oregon Health and Science University, Portland, OR, USA. turnere@ohsu.edu

 

Abstract: http://www.ncbi.nlm.nih.gov/pubmed/18199864
Full text: http://www.nejm.org/doi/full/10.1056/NEJMsa065779#t=article

 

BACKGROUND:

 

Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials, and the outcomes within those trials, can lead to unrealistic estimates of drug effectiveness and alter the apparent risk-benefit ratio.

 

METHODS:

 

We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set.

 

RESULTS:

 

Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.
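
As a quick sanity check, the percentages above can be reconstructed from the study counts the abstract reports. Here is a minimal Python sketch; the category labels and variable names are ours, but every count comes from the paragraph above:

```python
# The 74 FDA-registered antidepressant trials, tallied by FDA verdict
# and publication status (all counts taken from the Results above).
counts = {
    "positive, published": 37,
    "positive, unpublished": 1,
    "not positive, published as not positive": 3,
    "not positive, published as positive": 11,
    "not positive, unpublished": 22,
}
assert sum(counts.values()) == 74

published = 37 + 3 + 11   # 51 trials appeared in journals
unpublished = 1 + 22      # 23 trials never appeared

print(f"unpublished:             {unpublished / 74:.0%}")       # 31%
print(f"positive per literature: {(37 + 11) / published:.0%}")  # 94%
print(f"positive per FDA:        {(37 + 1) / 74:.0%}")          # 51%
```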

 

CONCLUSIONS:

 

We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.

 

 

From the paper:

 

Medical decisions are based on an understanding of publicly reported clinical trials.[1,2] If the evidence base is biased, then decisions based on this evidence may not be the optimal decisions. For example, selective publication of clinical trials, and the outcomes within those trials, can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.[3,4]

 

Attempts to study selective publication are complicated by the unavailability of data from unpublished trials. Researchers have found evidence for selective publication by comparing the results of published trials with information from surveys of authors,[5] registries,[6] institutional review boards,[7,8] and funding agencies,[9,10] and even with published methods.[11] Numerous tests are available to detect selective-reporting bias, but none are known to be capable of detecting or ruling out bias reliably.[12-16]

 

In the United States, the Food and Drug Administration (FDA) operates a registry and a results database.[17] Drug companies must register with the FDA all trials they intend to use in support of an application for marketing approval or a change in labeling. The FDA uses this information to create a table of all studies.[18] The study protocols in the database must prospectively identify the exact methods that will be used to collect and analyze data. Afterward, in their marketing application, sponsors must report the results obtained using the prespecified methods. These submissions include raw data, which FDA statisticians use in corroborative analyses. This system prevents selective post hoc reporting of favorable trial results and outcomes within those trials.

 

How accurately does the published literature convey data on drug efficacy to the medical community? To address this question, we compared drug efficacy inferred from the published literature with drug efficacy according to FDA reviews.

....

 

For each of the 12 drugs, the effect size derived from the journal articles exceeded the effect size derived from the FDA reviews (sign test, P<0.001) (Figure 3B). The magnitude of the increases in effect size between the FDA reviews and the published reports ranged from 11 to 69%, with a median increase of 32%. A 32% increase was also observed in the weighted mean effect size for all drugs combined, from 0.31 (95% CI, 0.27 to 0.35) to 0.41 (95% CI, 0.36 to 0.45).
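
To see where these figures come from, here is a minimal sketch. It assumes that the sign test, with all 12 of 12 drugs showing an increase and no ties, reduces to an exact two-sided binomial test:

```python
from scipy.stats import binomtest

# Overall weighted mean effect sizes quoted above.
g_fda, g_journals = 0.31, 0.41
print(f"overall inflation: {(g_journals - g_fda) / g_fda:.0%}")  # 32%

# Sign test: the journal-derived effect size exceeded the
# FDA-derived effect size for all 12 of the 12 drugs.
result = binomtest(12, n=12, p=0.5, alternative="two-sided")
print(f"sign-test p-value: {result.pvalue:.5f}")  # 0.00049, i.e. P < 0.001
```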

 

....

Discussion

We found a bias toward the publication of positive results. Not only were positive results more likely to be published, but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome.

We analyzed these data in terms of the proportion of positive studies and in terms of the effect size associated with drug treatment. Using both approaches, we found that the efficacy of this drug class is less than would be gleaned from an examination of the published literature alone. According to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results.

The statistical significance of a study's results was strongly associated with whether and how they were reported, and the association was independent of sample size. The study outcome also affected the chances that the data from a participant would be published. As a result of selective reporting, the published literature conveyed an effect size nearly one third larger than the effect size derived from the FDA data.

 

Previous studies have examined the risk–benefit ratio for drugs after combining data from regulatory authorities with data published in journals.[3,30-32] We built on this approach by comparing study-level data from the FDA with matched data from journal articles. This comparative approach allowed us to quantify the effect of selective publication on apparent drug efficacy.

 

Our findings have several limitations: they are restricted to antidepressants, to industry-sponsored trials registered with the FDA, and to issues of efficacy (as opposed to “real-world” effectiveness[33]). This study did not account for other factors that may distort the apparent risk–benefit ratio, such as selective publication of safety issues, as has been reported with rofecoxib (Vioxx, Merck)[34] and with the use of selective serotonin-reuptake inhibitors for depression in children.[3] Because we excluded articles covering multiple studies, we probably counted some studies as unpublished that were — technically — published. The practice of bundling negative and positive studies in a single article has been found to be associated with duplicate or multiple publication,[35] which may also influence the apparent risk–benefit ratio.

 

There can be many reasons why the results of a study are not published, and we do not know the reasons for nonpublication. Thus, we cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, decisions by journal editors and reviewers not to publish submitted manuscripts, or both.

 

We wish to clarify that nonsignificance in a single trial does not necessarily indicate lack of efficacy. Each drug, when subjected to meta-analysis, was shown to be superior to placebo. On the other hand, the true magnitude of each drug's superiority to placebo was less than a diligent literature review would indicate.

 

We do not mean to imply that the primary methods agreed on between sponsors and the FDA are necessarily preferable to alternative methods. Nevertheless, when multiple analyses are conducted, the principle of prespecification controls the rate of false positive findings (type I error), and it prevents HARKing,[36] or hypothesizing after the results are known.

 

It might be argued that some trials did not merit publication because of methodologic flaws, including problems beyond the control of the investigator. However, the protocols were written according to international guidelines for efficacy studies[37] and were carried out by companies with ample financial and human resources. To be fair to the people who put themselves at risk to participate, a cogent public reason should be given for failure to publish.

 

Selective reporting deprives researchers of the accurate data they need to estimate effect size realistically. Inflated effect sizes lead to underestimates of the sample size required to achieve statistical significance. Underpowered studies — and selectively reported studies in general — waste resources and the contributions of investigators and study participants, and they hinder the advancement of medical knowledge. By altering the apparent risk–benefit ratio of drugs, selective publication can lead doctors to make inappropriate prescribing decisions that may not be in the best interest of their patients and, thus, the public health.
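
The sample-size point can be made concrete. The sketch below is our illustration, not a calculation from the paper: it uses the standard normal-approximation formula for a two-arm comparison of means, n ≈ 2(z(1-α/2) + z(1-β))² / d² per group, with 80% power at two-sided α = 0.05, and plugs in the two overall effect sizes reported above.

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-sample comparison of
    means with standardized effect size d (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96
    z_beta = norm.ppf(power)           # ~0.84
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.41))  # ~94 per arm, powered on the journal-derived effect
print(n_per_group(0.31))  # ~164 per arm, powered on the FDA-derived effect
```

Powering a trial on the inflated published figure would enroll roughly 40% fewer patients per arm than the FDA-derived effect size implies are needed, which is exactly the underpowering the authors warn about.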

 

Dr. Turner reports having served as a medical reviewer for the Food and Drug Administration. No other potential conflict of interest relevant to this article was reported.

....

 

Source Information

 

From the Departments of Psychiatry (E.H.T., A.M.M.) and Pharmacology (E.H.T.), Oregon Health and Science University; and the Behavioral Health and Neurosciences Division, Portland Veterans Affairs Medical Center (E.H.T., A.M.M., R.A.T.) — both in Portland, OR; the Department of Psychology, Kent State University, Kent, OH (E.L.); the Department of Psychology, University of California–Riverside, Riverside (R.R.); and Harvard University, Cambridge, MA (R.R.).

....

This is not medical advice. Discuss any decisions about your medical care with a knowledgeable medical practitioner.

"It has become appallingly obvious that our technology has surpassed our humanity." -- Albert Einstein

All postings © copyrighted.
