First published on Mad in America on September 25, 2024
After we published our MIA Report on the STAR*D Scandal and set up a petition urging the American Journal of Psychiatry to retract the 2006 paper that reported the summary STAR*D results, I figured that Awais Aftab would respond with a post on his Substack blog. I genuinely appreciate that he engages with pieces we publish, because such back-and-forth represents a type of debate, and it then becomes easier for the public to see the merits of the differing “presentations of fact.”
Aftab has presented himself as a psychiatrist who is open to criticisms of psychiatry and, at the same time, as a defender of the profession. He has gained public stature in that regard, as the go-to person for responding to criticisms of the profession.
In this case, Aftab presents his post as a response to Ed Pigott’s re-analysis of the STAR*D results. While he doesn’t refer directly to MIA’s report on the STAR*D scandal, it is clear that he has read it and that this is what prompted his post. He closes his piece with an accusation that MIA and I have misled the public with our reporting on the STAR*D study, which is the reason for this blog.
Our MIA Report tells of the “STAR*D Scandal.” The scandal, however, is not simply that the STAR*D investigators violated the protocol in ways that inflated the remission rate. The scandal is that the NIMH and the profession promoted the fabricated 67% remission rate to the public as evidence of the effectiveness of antidepressants, and did so even after Pigott and colleagues published a paper in 2010 that told of how it was an inflated result born of research misconduct. As such, the MIA Report is an account of a profession’s collective failure to inform the public of the true results of this study.
What is of interest here, in this “back and forth,” is to first see whether Aftab’s post tells of the larger scandal, and second to see whether his accusation that MIA has misled the public has merit. And if it doesn’t, his accusation can be seen as adding a new dollop of deceit to the STAR*D scandal.
Aftab’s Post
In the first part of his post, Aftab writes of how Pigott’s RIAT reanalysis of patient-level data in the STAR*D trial reveals that the true remission rate during the acute phase of the study was 35% instead of the reported 67%. Aftab acknowledges that this difference is concerning.
“There are two kinds of discussions we can have about this STAR*D reanalysis. The first is focused on the deviations from the original protocol and the lack of transparency about the significance of these deviations in scientific reporting. This is an important story to tell, and I’m glad that it has been told. (I am doubtful that it constitutes scientific misconduct or fraud, as some critics have alleged, but the lack of transparency is certainly concerning and difficult to defend.)”
If you deconstruct that paragraph, you can see that it subtly functions in three ways:
- He is presenting himself as one who cares about scientific integrity.
- While acknowledging that mistakes were made, he is also absolving the STAR*D investigators, at least partially, of scientific misconduct. In his framing, the problem with the STAR*D report wasn’t so much that the investigators violated the protocol in numerous ways that inflated the remission rate, but rather a “lack of transparency” in their reports about these deviations from the protocol.
- He is setting up a case that unnamed critics (e.g., MIA) are making “allegations” that go too far.
You might say there are three characters in that paragraph: the virtuous (him), the flawed (the STAR*D investigators), and the morally suspect (the critics).
He then turns to the second “kind of discussion” that can be had, which is whether there was any clinical harm done by the false report of a 67% remission rate. He writes:
“How does this (reanalysis) change what we now know about antidepressants? My impression is . . . not much. We have more treatment options now than existed when STAR*D was planned and executed. We have a more somber assessment of the efficacy of traditional antidepressants in general, and partly as a result of STAR*D itself, attention has shifted onto ‘treatment-resistant’ depression. In this sense, the re-analysis, as important as it is for the integrity of the scientific record, has little impact on clinical decision-making and serves more as a cautionary tale of how yet another beloved research statistic doesn’t stand up to rigorous scrutiny.”
In this passage, Aftab is distancing today’s psychiatry from any connection to the STAR*D misdeeds. Indeed, he is using the STAR*D study as a foil to give today’s psychiatry a pat on the back. The field, it seems, has continued to hone its knowledge of the effectiveness of antidepressants and broadened its tools for treating depression. The march of progress in psychiatry continues, with STAR*D a distant blip in the past.
In short, it’s not a piece that will ruffle any feathers of his peers. There is no discussion of the larger scandal presented in the MIA Report, which is that the profession never sought to correct the scientific record after Pigott detailed the protocol violations that produced the inflated remission rate, and that it continued to peddle the 67% effectiveness rate to the public even after that research misconduct became known. Nor is there any discussion of whether the 2006 article published in the American Journal of Psychiatry that told of the 67% remission rate should be retracted.
That’s the ethical challenge presented to psychiatry today: should that 2006 article be retracted? Aftab’s post dodges that question. Yet, if the article is not retracted, and the public is not informed that the 67% remission rate was a result born of research misconduct, then psychiatry’s betrayal of the public continues. Aftab may dismiss the study as “two decades old,” but media still cite it today as evidence of the effectiveness of antidepressants.
And Now on to Whitaker: Here Is Where You Can Find Real Deceit!
The second part of the STAR*D study consisted of a year-long maintenance trial, which focused on whether patients who had remitted could stay well and in clinical care during that extended period. However, the STAR*D investigators never reported those findings with any clarity, and it wasn’t until Pigott and colleagues published their 2010 paper that this bottom-line outcome became known. Although Aftab didn’t discuss this aspect of the STAR*D scandal, as he closed his piece he cited our reporting of the one-year outcomes as “misleading” and evidence of our “being deceptive.”
Here is what he wrote:
“In addition to the 67% figure, another misleading statistic is the 3% stay-well rate commonly cited in critical spaces such as Mad in America. Robert Whitaker, for example, frequently points out that only 3% of the patients in STAR*D remitted and were still well at the end of the one year of follow-up. This is misleading because, due to massive drop-outs, this doesn’t translate into an assertion that only 3% of patients with depression remain well in the year after treatment. According to the 2006 summary article, there were only 132 patients in the 9-12 months naturalistic follow-up time period. The extremely high rate of drop-outs essentially renders the 3% figure meaningless, and anyone who cites it with a straight face is being deceptive.”
Thus, Aftab is making two claims to his readers: first, that in my reporting on the one-year outcomes I have failed to mention the dropouts; and second, that I assert the study shows that, in the real world, “only 3% of patients with depression remain well in the year after treatment.”
The simple way to review this charge is to revisit our recent MIA Report, which Aftab had read. There we can see, with great clarity, whether Aftab’s charge has merit, or whether he has told what might be described, if I let my temper get the best of me, as a “big fat lie.”
Our MIA Report
In the opening paragraphs of our report, I noted how Ed Pigott and colleagues, in their 2010 paper, had finally made sense of a graphic in the STAR*D summary paper that told of the outcomes at the end of the one-year follow-up. I wrote that Pigott and colleagues had found that “only 3% of the 4,041 patients who entered the trial had remitted and then stayed well and in the trial to its end.” (Emphasis added.)
The key part of that sentence, for purposes of this piece, is the phrase “and in the trial.” That is the part of the sentence that tells of how all the others in the trial either never remitted, remitted and then relapsed, or dropped out. This sentence is simply an accurate description of the trial results, as summed up by Pigott. What Aftab did, however, was omit the part of the sentence that includes “in the trial” and then, having omitted those words, accuse me of not mentioning the dropouts.
His second assertion is that I write about this one-year outcome as evidence of the long-term stay-well rate for patients treated with antidepressants in general. However, as can be seen in the MIA Report, my reporting on the STAR*D study tells of what the results were in this particular study, as reported by Pigott and colleagues. And that is this: Of the 4,041 patients who entered the study, there were only 108 who remitted and then stayed well and in the trial to its end. The point of my reporting is that the STAR*D investigators hid this poor one-year result, and that it stands in stark contrast to the 67% remission rate peddled to the public.
Thus, my reporting of this one-year outcome is focused on how the STAR*D investigators and the NIMH—and by extension psychiatry as a profession—misled the public about the findings in this particular study. I am reporting on the scandal, and not on what evidence may exist in the research literature on long-term stay-well rates for patients treated with antidepressants.
However, that isn’t even the most telling refutation of Aftab’s accusation. What is most damning is that in our recent report we provided an exact count of the number who relapsed and of the number who dropped out during the maintenance phase of the trial, and that this was the first time that this exact count had ever been published. Thus, rather than fail in some way to accurately report on the results in the maintenance phase, we filled in the data that the STAR*D investigators had hidden in their summary report.
Here is the complete section from our recent MIA Report on the one-year outcomes:
One-year outcomes
There were 1,518 patients who entered the follow-up trial in remission. The protocol called for regular clinical visits during the year, during which their symptoms would be evaluated using the QIDS-SR. Clinicians would use these self-report scores to guide their clinical care: they could change medication dosages, prescribe other medications, and recommend psychotherapy to help the patients stay well. Every three months their symptoms would be evaluated using the HAM-D. Relapse was defined as a HAM-D score of 14 or higher.
This was the larger question posed by STAR*D: What percentage of depressed patients treated with antidepressants remitted and stayed well? Yet, in the discussion section of their final report, the STAR*D investigators devoted only two short paragraphs to the one-year results. They did not report relapse rates, but rather simply wrote that “relapse rates were higher for those who entered follow-up after more treatment steps.”
Table 5 in the report provided the relapse rate statistics: 33.5% for the step 1 remitters, 47.4% for step 2, 42.9% for step 3, and 50% for step 4. At least at first glance, this suggested that perhaps 60% of the 1,518 patients had stayed well during the one-year maintenance study.
However, missing from the discussion and the relapse table was any mention of dropouts. How many had stayed in the trial to the one-year end?
There was a second graphic that appeared to provide information regarding “relapse rates” over the 12-month period. But without an explanation for the data in the graphic, it was impossible to decipher its meaning.
Once Pigott launched his sleuthing efforts, he was able to figure it out. The numbers in the top part of the graphic told of how many remitted patients remained well and in the trial at three months, six months, nine months, and one year. In other words, the declining counts in that graphic reflected both relapses and dropouts. This is where the dropouts lay hidden.
Before Pigott published his finding, he checked with the STAR*D biostatistician, Stephen Wisniewski, to make sure he was reading the graphic right. Wisniewski replied: “Two things can happen during the course of follow-up that can impact on the size of the sample being analyzed. One is the event, in this case, relapse, occurring. The other is drop out. So the N’s over time represent that size of the population that is remaining in the sample (that is, has not dropped out or relapsed at an earlier time).”
Here, then, was the one-year result that the STAR*D investigators declined to make clear. Of the 1,518 remitted patients who entered the follow-up, only 108 patients remained well and in the trial at the end of 12 months. The other 1,410 patients either relapsed (439) or dropped out (971).
Pigott and colleagues, when they published their 2010 deconstruction of the STAR*D study, summed up the one-year results in this way: Of 4,041 patients who entered the study, only 108 remitted and then stayed well and in the study to its one-year end. That was a documented get-well and stay-well rate of 3%.
To summarize, this section of our report identifies the precise number of dropouts during the one-year follow-up study, information that the STAR*D investigators had hidden, and it reports, for the first time, the exact number of relapses and dropouts in the maintenance phase. Yet Awais Aftab, in his post on the STAR*D scandal, accuses me of misleading readers by failing to note the high dropout rate.
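For readers who want to check the arithmetic behind that 3% figure, the counts in the excerpt above fit together as follows (all figures are those reported by Pigott and colleagues):

$$
439 + 971 = 1{,}410, \qquad 1{,}518 - 1{,}410 = 108, \qquad \frac{108}{4{,}041} \approx 2.7\%
$$

That 2.7% is the documented get-well and stay-well rate, which rounds to the 3% figure cited in our report.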
In making this accusation, Aftab exemplifies the very failure that is at the heart of the STAR*D scandal. The scandal is that psychiatry cannot be trusted to provide an honest accounting to the public of its research findings, and here Aftab demonstrates this same dishonesty. I provide, for the first time, an exact account of the outcomes for the 1,518 patients who entered the maintenance trial in remission, and yet he accuses me of misleading the public, and—with his “straight face” remark—suggests I am doing so with an “intent to deceive.”
Hence the title of this blog. The STAR*D scandal is about how psychiatry peddled a fabricated result to the public, and now Aftab, in his latest post, has added a whopper of his own to the STAR*D story.
Mad in the UK hosts blogs by a diverse group of writers. The opinions expressed are the writers’ own.