Answering Awais Aftab: When it Comes to Misleading the Public, Who is the Culprit?


On March 27, Mad in America published a review of a “viewpoint” in JAMA Psychiatry that told of how psychiatry lacks evidence of “successful outcomes.” This prompted psychiatrist Awais Aftab to write a blog post critical of our science reporting, citing the March 27 review as an example of our failings. He opened his post with a quote about “rationalisation markets,” which he offered as a description of our “insidious impact”:

“Rationalisation markets provide a helpful framework for understanding why certain information can often be so misleading even when it is accurate. To the extent that pundits or media organization exist not to inform, but rationalize, their insidious impact often lies not in the strict falsity of their content but in the way in which it is integrated and packaged to support appealing but misguided narratives.”

Mad in America gets criticized all the time, and for the most part, we just ignore it. Aftab has criticized us before, and so this too is nothing new. However, his criticism in this instance provides us with an opportunity not to be passed up, as it illuminates why Mad in America’s science coverage is so threatening to psychiatry.

Aftab has staked out a position as being open-minded to critiques of psychiatry, and that is a public stance that makes him particularly valuable to his profession. He can serve as a defender of psychiatry against critiques that are truly threatening, and his criticisms will be seen as coming from someone who is open-minded about psychiatry’s flaws.

Mad in America certainly offers a critique that is threatening to psychiatry, as our mission statement makes clear.

“Mad in America’s mission is to serve as a catalyst for rethinking psychiatric care in the United States (and abroad). We believe that the current drug-based paradigm of care has failed our society, and that scientific research, as well as the lived experience of those who have been diagnosed with a psychiatric disorder, calls for profound change.”

Our mission statement tells of a failed paradigm of care, with that failure documented in the scientific and medical literature. This is an assertion that psychiatry cannot let stand.

As such, Aftab’s criticism—which states that our coverage has an “insidious impact” because it is “packaged” in a manner that supports a “misguided” narrative—provides us with the chance to ask this question: Is it Mad in America that misleads the public? Or is it psychiatry, as an institution, that is guilty of this sin?

There is a journalistic history that serves as the foundation for Mad in America’s science coverage, and from there it is easy to show that Aftab, in his criticism of us, is seeking to protect psychiatry’s narrative of progress—a narrative that arises from psychiatry’s guild interests, and not from a faithful record of its own research literature.

The Journalistic Path to MIA

The first time I reported at any length on psychiatry was in 1998, when I co-wrote a series for the Boston Globe on the abuse of psychiatric patients in research settings. As I have noted before, at that time I had a conventional understanding of psychiatric drugs. I understood that they fixed chemical imbalances in the brain, and thus were like insulin for diabetes. I also understood that the second generation of psychiatric drugs—SSRIs and atypical antipsychotics—were much better than the first generation of psychiatric drugs.

I had come to that understanding based on traditional newspaper methods of reporting on science. I called up experts in the field, and reported what they told me. They were the ones who knew the science. However, while doing the reporting for that series, I stumbled on findings that belied that narrative of progress.

First, I read two World Health Organization (WHO) studies that found that schizophrenia outcomes were much better in three developing countries—India, Nigeria, and Colombia—than in the U.S. and five other “developed” countries. In the second study, the WHO investigators specifically looked at antipsychotic use, and reported that schizophrenia patients in the developing countries used the drugs acutely but not chronically—only 16% of the patients were regularly maintained on the drugs. This was a result that didn’t fit with psychiatry’s claim that all schizophrenia patients needed to stay on these drugs because they fixed a chemical imbalance in the brain.

Next, I found a 1994 study by Ross Baldessarini and his colleagues at Harvard that determined that recovery rates for schizophrenia patients in modern times had declined over the past 15 years and were now no better than they had been in the first third of the 20th century, when water therapies and other strange somatic treatments were standard practices. This finding did not tell of a field that was making great progress in treating “schizophrenia.”

Finally, after the Globe series was published, I asked leading psychiatrists—and researchers at one of the pharmaceutical companies that had brought a new atypical antipsychotic to market—a simple question: could they point me to the research that had shown that schizophrenia patients actually suffered from dopamine hyperactivity, with this imbalance then fixed by antipsychotic drugs? Here is what I was told:

“Well, we didn’t actually find that.”

This was the “aha” moment that transformed my approach to reporting on psychiatry and its “science.” In my interviews for the Boston Globe series, I had been repeatedly told that antipsychotics fixed a dopamine imbalance in the brain and thus were “like insulin for diabetes.” Yet, now that I had asked to see the evidence for that claim, I was informed that “like insulin for diabetes” was just a metaphor, and that it was useful because it helped schizophrenia patients understand why they should take antipsychotics. And I immediately thought this: It’s not my job as a reporter to peddle a false story to the public so that schizophrenia patients will take their drugs. The WHO results also made me wonder whether there was something wrong with the claim that continual use of antipsychotics led to better outcomes for those diagnosed with schizophrenia.

At that point, I got a contract to write a book, titled Mad in America, and I did so with this plan in mind: I would tell of a history that could be found in the scientific literature. Rather than rely on interviews with experts in the field, I would rely on research that could be found on library shelves.

That book traced the history of the treatment of the “seriously mentally ill” from colonial times until today. The controversial part was the history I told about antipsychotic drugs.

The conventional history of psychiatry tells of how the introduction of chlorpromazine into asylum medicine in the 1950s kicked off a “psychopharmacological” revolution, a great advance in care that made it possible for people diagnosed with schizophrenia to live in the community. However, the scientific literature told a different story. Chlorpromazine, which is remembered today as the first “antipsychotic,” was initially praised for providing a chemical lobotomy; the introduction of chlorpromazine didn’t, in fact, improve hospital discharge rates for first-episode schizophrenia patients; and a variety of studies in the 1960s and 1970s told of how it appeared that relapse rates—and functional outcomes—worsened with the introduction of these drugs.

Here is a sampling of such data that appears in the medical literature:

 

Then, there was this dark secret that could be found in the research literature: By the end of the 1970s, researchers had raised the possibility that antipsychotics induced a dopamine supersensitivity that made patients more biologically vulnerable to psychosis and thus increased the chronicity and severity of symptoms in a significant percentage of patients. The drugs also caused Parkinsonian symptoms, tardive dyskinesia, and a host of other horrible side effects, all of which led Jonathan Cole, the long-time director of the NIMH Psychopharmacology Service Center, to co-author a 1976 paper titled “Maintenance Antipsychotic Therapy: Is The Cure Worse than the Disease?”

Finally, I turned my attention to the second-generation “atypical antipsychotics” that came to market in the mid-1990s. Zyprexa and Risperdal were being touted as breakthrough medications, more effective and safer than the first-generation antipsychotics, with breathless stories appearing in major newspapers telling of how schizophrenia patients were now going back to work like never before. I had obtained the FDA’s reviews of those two drugs through an FOIA request, and those reviews told of how the clinical trials of these two drugs had been biased by design against the first generation, and how there was no evidence they were any better or safer than the old ones. The one difference, the FDA reviewers said, was that the nature of the side effects with the newer drugs could be expected to differ, in many ways, from the side effects with the older drugs.

At the end of that book, I wrote briefly about what could be done to improve treatment of the “seriously mentally ill” in the United States. I concluded with this line:

“At the top of this wish list, though, would be a simple plea for honesty: Stop telling those diagnosed with schizophrenia that they suffer from too much dopamine or serotonin activity and that the drugs put those brain chemicals back into ‘balance.’ That whole spiel is a form of medical fraud, and it is impossible to imagine any other group of patients—ill, say, with cancer or cardiovascular disease—being deceived in this way.”

After that, I took a break from psychiatry. I wrote a history of the first scientific expedition from Europe to South America (The Mapmaker’s Wife), and a history of a 1919 racial massacre in Arkansas and the legal struggle it spawned, which served as a foundation for the Civil Rights movement fostered through Supreme Court decisions (On the Laps of Gods). Then, in Anatomy of an Epidemic, I investigated these questions: How do psychiatric drugs affect the long-term course of psychiatric disorders? What did a review of the scientific literature reveal?

This was a question that required a deep dive into the research literature. What long-term outcomes were reported for a major diagnostic category prior to the introduction of “psychiatric drugs”? When the drugs were introduced, did clinicians at that time notice any change in long-term course? What did longer-term studies reveal about the changing course? In longer-term studies that compared outcomes for medicated and unmedicated patients, which group had higher recovery rates? Did all these pieces of data fit together to paint a consistent, coherent picture?

With each class of drugs, the same bottom-line conclusion emerged: Psychiatric drugs worsened long-term outcomes compared to natural recovery rates. And at least a few researchers, confronted with data of this type, sought to provide a biological explanation for why antipsychotics, antidepressants, and benzodiazepines might have this negative long-term effect.

Turning to the research literature in this way presented one challenge. Much of the long-term research had been funded by the NIMH, and these studies were regularly authored by academic researchers who had ties to pharmaceutical companies and were known as prominent “thought leaders” in the field. Negative findings threatened psychiatry’s public story of great progress, and so such findings wouldn’t be highlighted in the abstract, or else they would be explained away. The same was true in the discussion section of the articles. There was a regular effort to spin the results so that the drugs wouldn’t be seen as doing harm, and as a result, I learned to focus on the data tables and graphics. What did the data say?

We founded Mad in America in January of 2012, and our science coverage was meant to continue the reporting that was present in those two books: we would provide a running account of studies that belie the common wisdom but are never promoted by psychiatry for that reason.

With this in mind, we set forth a template for reporting on research:

  • We identify the authors, detail the study’s methodology and limitations, and report on the principal findings.
  • We raise questions: What are the possible implications of the findings—why are the findings important?
  • We put the findings into a broader context: Do the results from this one study fit within a larger body of related research? We provide links to earlier studies that provide this context.
  • We assess the presence of spin: Does the article contain spin designed to protect the conventional wisdom?

In sum, our reporting on science isn’t meant to serve simply as a collection of reports on individual studies. We provide reviews that help readers see the “bigger picture” that exists in the research literature, and yet isn’t generally known. Indeed, if you go to our archive of science news for drugs, you’ll find reviews of more than 650 peer-reviewed articles, and as you peruse the headlines for these articles, you will immediately see that very little of this research is communicated to the public. I am quite certain there is no other media archive of drug findings like it anywhere else in the world.

Our Review of the JAMA Psychiatry “Viewpoint”

The principal findings

In their article titled “Success Rates in Psychiatry,” the authors, Kenneth Freedland and Charles Zorumski from Washington University School of Medicine, defined “successful outcomes” in this way:

“Successful outcomes include both the prevention of undesirable events, such as death and disability, and the achievement of desirable ones, such as remission.”

This is a definition that doesn’t just look at short-term RCTs as evidence of a treatment’s efficacy, but rather at a bigger picture: how does the treatment affect patients’ lives and their ability to function over the long term?

In other medical disciplines, such as cardiology and oncology, the authors noted, researchers have been able to document how the introduction of new therapies decreased mortality rates. This is bottom-line data that tells of an improvement in “successful outcomes” for those diseases. But psychiatry, they wrote, does not have such “successful outcome” data.

“Despite advances in measurement-based psychiatric care, clinical [success rate] reporting systems do not exist for most psychiatric services. This applies to all psychiatric treatments including pharmacotherapy, psychotherapy, and neuromodulation.”

The fact that psychiatry has never reported on such success rates, they concluded, makes it “difficult to determine whether psychiatric treatment outcomes are improving over time, stagnating, or perhaps even regressing.”

Moreover, this is the very information that the public wants. “Patients with serious illnesses care about their chances of having successful treatment outcomes. They also expect to receive more effective treatments than the ones that were available to their parents or grandparents, and they hope that even more effective treatments will be available for their children and grandchildren.”

Hence, the principal elements summarized in our headline and subtitle:

Title: JAMA Psychiatry: No Evidence that Psychiatric Treatments Produce “Successful Outcomes.”

Subtitle: In a viewpoint article in JAMA Psychiatry, researchers reveal that psychiatry is unable to demonstrate improving patient outcomes over time.

The implications

The authors stated that patients want to know whether today’s treatments are better than those of the past, and that without “success outcomes” data, there is no way to know this. Our review turned that conclusion into a concrete question: Are outcomes today better than in the era of lobotomies, insulin coma therapies, and other harsh somatic treatments? Or better than they were in the early 1800s, when Quakers created moral therapy asylums in the United States? We raised the question, which was consonant with the authors’ discussion of how “success outcomes” data was necessary to make such assessments. It was an obvious question for us to ask.

The bigger picture

Although the authors state there is a lack of “success outcome” data for psychiatric treatments, there is in fact much data available that is relevant to this question. There is data that tells of rising disability rates due to mental disorders in the modern era. Standardized mortality ratios for schizophrenia and bipolar patients have worsened in the past 40 years. Long-term studies tell of higher recovery rates for schizophrenia patients off medication. There is evidence of many kinds that tells of how depression has been transformed from an episodic disorder into a chronic condition in the Prozac era. Public health data tells of a mental health crisis in our country today, despite the dramatic increase in the percentage of the population receiving treatment.

All of this tells of “psychiatric treatment outcomes” that are regressing. In our report, we briefly referred to several studies of this kind.

The spin

As our report noted, the authors wrote of how the assessment of “successful outcomes” is necessary to determine whether medical care in a discipline is improving. Given that psychiatry has not gathered “successful outcome” data, it logically follows that psychiatry cannot claim a record of improved outcomes. But having made this argument in the first two sections of their viewpoint, in the third section the authors pivoted in a way that made their paper palatable to the psychiatric profession. The heading of the third section reads: “Using Success Rate Data to Accelerate Progress.”

With this pivot, the authors were fitting their “viewpoint” into the narrative of progress that psychiatry has told to itself and the public. This is a discipline that has made progress in its treatments of major disorders; what it needs to do now is develop systems that can measure this progress, which in turn will spur further progress. They wrote, in conclusion:

“The development of well-designed, sustainable success rate data systems would facilitate this kind of progress and help ensure that psychiatric treatment outcomes continue to improve in the decades ahead.”

This “continue to improve” statement doesn’t logically follow from their conclusion that psychiatry lacks “success outcomes” data that would enable the field to assess whether outcomes are improving, stagnating, or regressing over time. And so, in our report, we called out this spin.

The authors’ conclusion, we wrote, “suggests that psychiatric treatments have been shown in the past to lead to successful outcomes; yet, as they write here, there is no data on whether medical treatments for psychiatric disorders, past or present, produce that bottom-line result.”

Aftab’s criticism

In essence, Aftab’s criticism of our report boils down to this: He wanted us to report on the paper without evaluating the implications of the authors’ assertion that psychiatry hasn’t gathered any data on “successful outcomes;” without linking to data that told of poor outcomes with psychiatric drug treatments today; and without pointing out the spin in the last section of the viewpoint.

Here is his take:

This is a story of progress

He first summarizes the key points of the JAMA Psychiatry “viewpoint,” and says that he agrees with the authors. By charting “success rates,” he concludes, “we’ll be able to track the progress we’ve made, and we’ll have a better idea of where progress is needed.”

How dare we question this story of progress

After criticizing our headline, Aftab takes issue with our questioning—based on the absence of success outcomes data—of whether there is evidence that present treatments produce better outcomes than in the pre-drug era, or during the early 1800s, when the Quakers introduced moral therapy. He writes:

“This is a pretty wild extrapolation! A call for implementation of temporal tracking of psychiatric success rates is being interpreted here to suggest that we cannot say with any confidence that outcomes of psychiatric care are better now than they were in the 1800s. Just because we haven’t tracked temporal outcomes in the specific manner suggested by Freedland and Zorumsky doesn’t mean that we cannot make reasonable inferences from existing RCTs, observational data and clinical experience. Not only do we have multiple treatment modalities (pharmacotherapies, psychotherapies, neurostimulation, lifestyle modifications, community services, etc.) and multiple treatments within each modality that have demonstrated efficacy in RCTs, we can combine these treatments as well as use them sequentially to increase response and remission rates. This is not being disputed by Freedland and Zorumski, who state “stepwise approaches can produce cumulative success rates that are considerably higher than their constituent ‘specific success rates’ [for each treatment]” and cite the results of STAR*D in support.”

Studies we cited that tell of treatments that worsen outcomes are not to be taken seriously

Aftab writes: “There is of course no acknowledgement to the casual reader that these assertions presented as facts here are highly controversial claims with little acceptance in the scientific community.”

Let’s Go to the Scientific Literature

Aftab’s criticism of our reporting illuminates a fundamental question: Should society consider psychiatry to be a faithful recorder of its own scientific history, as it tells of a history of progress in the field? Or is there a gap between that public story of progress and the history told in the research literature? The belief that such a gap exists has animated my reporting on psychiatry for the past 25 years; it was a motivation for founding Mad in America, and it is present in our mission statement.

We could of course cite the chemical imbalance story as illustrative of this gap, but let’s keep it within the confines of this JAMA Psychiatry “viewpoint” on “Successful Outcomes” and Aftab’s criticism of our report.

Disability due to mood disorders

Freedland and Zorumski state that “successful outcomes” data should consider such outcomes as disability and mortality rates. Okay, let’s look at disability rates due to mood disorders following the introduction of SSRIs, starting with Prozac in 1988. Here is the data on disability I prepared when I was asked to present to the UK Parliament on this topic in 2017:

 

In sum, disability rates notably climbed in country after country with increased use of antidepressants.

Standardized mortality ratios for psychiatric patients

Standardized mortality ratios (SMRs) compare mortality in a patient group with mortality in the general population. For instance, an SMR of 2 for schizophrenia patients means that they are twice as likely to die over a set period as the general population. SMRs for schizophrenia and bipolar patients have worsened over the last 50 years.
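For readers unfamiliar with the measure, here is the underlying arithmetic. The numbers in this example are round figures chosen only to illustrate the calculation; they are not taken from any of the studies discussed below.

\[
\text{SMR} = \frac{\text{observed deaths in the patient group}}{\text{expected deaths, given general-population rates}}
\]

For example, if 30 deaths are observed in a patient cohort over a follow-up period in which general-population rates would predict 10 deaths, the SMR is 30 / 10 = 3.0.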

In 2007, Australian researchers conducted a systematic review of published reports of mortality rates of schizophrenia patients in 25 nations. They found that the SMRs for “all-cause mortality” rose from 1.84 in the 1970s to 2.98 in the 1980s to 3.20 in the 1990s.

Here is a summary of the increase in SMRs for the seriously mentally ill from various studies:

In 2017, UK investigators reported that the SMR for bipolar patients had risen steadily from 2000 to 2014, increasing by 0.14 per year, while the SMR for schizophrenia patients had increased gradually from 2000 to 2010 (0.11 per year) and then more rapidly from 2010 to 2014 (0.34 per year). “The mortality gap between individuals with bipolar disorders and schizophrenia, and the general population, is widening,” they wrote.

Long-term use of antidepressants has also been found to be associated with increased morbidity and mortality.

The long-term course of depression

There is abundant evidence that depression has been transformed from an episodic disorder into a chronic condition in the antidepressant era. Here is a sampling of graphics from studies that assessed longer-term outcomes for depressed patients treated with and without antidepressants.

In a retrospective study of the 10-year outcomes of 222 people who had suffered a first episode of depression, Dutch researchers reported that 76% of those not treated with an antidepressant recovered and never relapsed, versus 50% of those initially prescribed an antidepressant.

In a WHO study designed to assess the merits of screening for depression, which was conducted in 15 cities around the world, the patients who were diagnosed by their GPs and treated with an antidepressant were twice as likely to be depressed at the end of one year as those who weren’t diagnosed and treated, even though their baseline depression scores were nearly the same.

In a NIMH-funded study, investigators assessed the six-year “naturalistic” outcomes of 547 people who suffered a bout of depression, and found that those who were treated for the illness were three times more likely than the untreated group to suffer a “cessation” of their principal social role, and nearly seven times more likely to become incapacitated.

In a Canadian study that charted outcomes for 9,508 depressed patients for five years, those taking antidepressants were depressed on average 19 weeks per year, versus 11 weeks for those not taking antidepressants.

The next graphic provides a comparison of one-year remission rates from three studies of real-world patients (Rush study; STAR*D study; and depressed patients in Minnesota), and a NIMH-funded study of unmedicated patients.

More recently, a study that charted outcomes for depressed patients at nine years, and a second one that did so for 30 years, found worse outcomes for those who took antidepressants for long periods of time.

Recovery rates for schizophrenia patients have declined

The best long-term prospective study of schizophrenia outcomes was the NIMH-funded Chicago Follow-up Study conducted by Martin Harrow and Thomas Jobe. They found that recovery rates at 15 years for those who had stopped taking their antipsychotic medication were eight times higher than for those on antipsychotic medication. In a 2018 paper, Harrow and Jobe noted there were now seven other studies “assessing whether schizophrenia patients improve when treated longer than two-three years with antipsychotic medication . . . These research programs included samples studied from 7 to 20 years. Unlike short-term studies, none of them showed positive long-term results” for the medicated patients.

As noted earlier, I stumbled upon a 1994 paper by Ross Baldessarini that told of how recovery rates for schizophrenia patients had declined since the mid-1970s. A 2013 systematic review of recovery rates after the atypical antipsychotics came to market in the 1990s found that the decline had continued, and that since 1996 recovery rates have been lower than they were in the first third of the 20th century.

Outcomes with moral treatment

Case reports of “insane” patients admitted to moral therapy asylums in the first half of the 19th century tell of seriously disturbed patients. Here are the outcomes reported at that time from three prominent “moral treatment” asylums; they are far better than the 6% recovery rate for schizophrenia patients today.

STAR*D as Evidence of “Cumulative Success Rates”

In their JAMA Psychiatry paper, the authors write that psychiatry should track “cumulative success rates,” meaning the clinical outcomes for patients who may receive multiple forms of treatment, including sequential trials of different psychiatric drugs (as opposed to the effectiveness of a single treatment). They cite the STAR*D study as an example of a study showing that the cumulative success rate may be much higher than “success outcomes” with a single form of treatment. In his blog, Aftab refers to this passage and the STAR*D study as evidence of psychiatry’s progress in producing good outcomes.

The STAR*D investigators, in their reports on remission rates in the trial, which was the largest antidepressant trial ever conducted, did claim such success. There were 4,041 “real-world” patients enrolled in the trial; if they didn’t remit on a first antidepressant, they could be switched to a different antidepressant and also receive psychotherapy, and they could ultimately move through four successive treatment steps to see if one led to a remission of their depression. The STAR*D investigators reported that at the end of the four steps, the cumulative remission rate was 67%.

That finding is still cited today by the mainstream media as evidence of the effectiveness of antidepressants, and they do so after speaking with experts in the field. The New York Times, for instance, cited this finding to reassure readers after Joanna Moncrieff and colleagues reported in 2022 that there was no evidence for the low serotonin theory of depression. There was no need to worry; the STAR*D study showed that antidepressants fully chased away depression in two-thirds of patients.

As Aftab surely knows, nothing like that actually happened in the STAR*D study. Indeed, this study stands out as the best example of how psychiatry, in its communications to the public, cannot be trusted. The STAR*D study is a story of scientific fraud.

The investigators went through any number of machinations—deviations from the protocol—to produce an inflated remission rate. Patients who dropped out early and should have been counted as treatment failures weren’t counted. The investigators switched from the depression rating scale that was supposed to be used to evaluate outcomes (the HAM-D scale) to a secondary one (the QIDS-SR scale) that produced higher rates. They then calculated a “theoretical” remission rate: If all patients had stayed in the trial throughout the four steps, rather than progressively dropping out during the study while still depressed, and if these drop-outs had remitted at the same rate as those who stayed in the trial, then two-thirds would have eventually gotten well. This calculation alone turned 606 treatment failures into treatment successes.
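To see how a “theoretical” calculation of this kind inflates the headline figure, consider a simplified sketch. The per-step remission rates below are hypothetical, chosen only to illustrate the arithmetic; they are not the actual STAR*D figures. If every non-remitter, including those who in reality dropped out, is assumed to move on to the next step and to remit there at the same rate as those who stayed, the cumulative rate compounds across the four steps:

\[
\text{cumulative remission} = 1 - \prod_{i=1}^{4} (1 - r_i)
\]

With hypothetical per-step rates of r1 = 0.30, r2 = 0.25, r3 = 0.15, and r4 = 0.10, this yields 1 - (0.70)(0.75)(0.85)(0.90) ≈ 0.60, even though the rate actually observed, with dropouts counted as treatment failures, would be far lower.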

A team of independent researchers subsequently obtained access to the case report data, and they found that the actual “remission” rate in the study was 26%, rather than the 67% rate that the STAR*D investigators promoted to the public (a figure that is still being promoted today). Aftab, in his blog criticizing us, similarly cites this fraudulent study as evidence of psychiatry’s cumulative prowess in treating depression.

But even this isn’t the full extent of the STAR*D deception. The purpose of the study, which the NIMH touted before the results came in as the research that should guide clinical practice, was to see if psychiatry, with its various treatments, could get patients well and keep them well. In one of their reports, the STAR*D investigators did publish a table of the one-year outcomes, but it was nearly impossible to decipher, and they did not detail the one-year results in their written text. However, psychologists Ed Pigott and Allan Leventhal subsequently determined that the one-year table told of how only 108 patients—out of the 4,041 who had been enrolled—remitted and then stayed well and in the trial to its one-year conclusion.
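In percentage terms, simple arithmetic on those two figures shows just how small that documented get-well-and-stay-well rate was:

\[
\frac{108}{4041} \approx 2.7\%
\]

That is, roughly 2.7 percent of the patients who entered the trial were documented as remitting, staying well, and remaining in the study through its one-year conclusion.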

Here is a graphic of the one-year results from the STAR*D study:

The Scientific Literature Has Spoken

As can be seen from even this brief review of the research literature, it doesn’t support a narrative of therapeutic progress, of psychiatric treatments that have “continued” to improve over time. Aftab, in his criticism of our science coverage, asserted that there was plenty of evidence from RCTs and such that told of the effectiveness of psychiatric treatments and that it was ridiculous to think that outcomes were not much better than they had been in the early 1800s with moral therapy. He described our links to studies telling of how antipsychotics and antidepressants worsened long-term outcomes as “highly controversial claims with little acceptance in the scientific community.”

I don’t know about acceptance within the “scientific community,” but I do agree that within the psychiatric community, this research is mostly derided, ignored, and kept from the public. The reason is that this research, which is their own research and voluminous at that, belies the narrative of progress that psychiatry has told to itself and to the public, and in order to maintain that narrative, it has to keep such research hidden from the public or dismiss it as insignificant. When MIA makes that research known to the public through our science coverage, the same impulse of institutional self-preservation takes hold. We must be portrayed as “untrustworthy”—it’s the only way that psychiatry can protect the narrative it holds so dear.

And that’s what Aftab’s blog, when it is deconstructed, makes clear.




