Joseph Mercola, a leading anti-vaccine advocate whose screeds have been restricted by YouTube and Facebook, warned this month that the unvaccinated could soon be imprisoned in government-run camps. The previous week, he had circulated a study claiming to use government data to prove that more children had died from COVID-19 shots than from the coronavirus itself.
Shut down by major social media platforms, Mercola has found a new way to spread these debunked claims: on Substack, the subscription-based newsletter platform that’s increasingly a hub for controversial and often misleading perspectives on the coronavirus.
Substack, which Center for Countering Digital Hate researchers say makes millions from anti-vaccine misinformation, last week defended its tolerance for publishing “writers we strongly disagree with.”
Figures known to spread misinformation, such as Mercola, have flocked to Substack, podcasting platforms and a growing number of right-wing social networks over the past year after being banned or restricted on Facebook, Twitter and YouTube.
Today, these alternative platforms are starting to face some of the scrutiny that social media services have long drawn. But there is a fundamental difference in the architecture of newsletters and podcasts compared to that of social media companies. Social networks use algorithms to deliver content — sometimes misinformation — to users who didn’t ask to see it. Newsletters and podcasts do not.
These platforms serve subscribers who seek out content that matches their views, which may make the services less culpable for spreading harmful ideas, some disinformation experts say. At the same time, the platforms expose tens of thousands of people to misinformation every month — content that can lead people to engage in behaviors that put themselves and others at risk.
Former Trump adviser Stephen Bannon, who was removed from Spotify in 2020, used his popular podcast, available on multiple platforms, to spread violent rhetoric and misrepresentations about the election in the weeks leading up to the attack on the U.S. Capitol on Jan. 6, 2021.
Substack, founded in San Francisco in 2017, is part of a growing line of subscription-based services whose mission is to help creators, authors and other influencers get paid to build more intimate relationships with a dedicated audience.
Readers pay monthly to subscribe to a certain author, and the author keeps 90% of the revenue, while Substack takes 10%. The subscription model has become so popular that Twitter recently launched a subscription service and Facebook introduced plans for subscription-based paid newsletters for authors and creators.
Mercola has been banned from YouTube, and his content has been restricted on Facebook. He uses his remaining public channels — like Twitter — to direct people to a “censored library” of articles he publishes in his newsletter, which is one of the top 20 political newsletters on Substack.
Mercola did not respond to a request for comment.
This type of content is “so bad no one else will host it,” said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that focuses on countering misinformation and has conducted research on Substack.
Because Substack splits subscription revenue with creators, the group estimates the company generates at least $2.5 million in revenue annually from just five anti-vaccine leaders who have amassed tens of thousands of subscribers, each paying about $50 per year.
Substack declined to comment, but shortly after The Washington Post inquired, CEO Chris Best and his two co-founders published a blog post saying that supporting “the presence of writers we strongly disagree with” was a “necessary prerequisite for creating more trust in the information ecosystem as a whole.”
Facebook groups and other closed forums have long been plagued by misinformation because they are essentially echo chambers in which users share similar views, experts say, and newsletters face similar issues.
They can make like-minded people more radical in their beliefs. And a popular newsletter can be picked up and amplified by other outlets, as well as forwarded to new readers.
From the start, social media companies took a hands-off approach to policing content: only posts directly advocating violence or breaking the law were removed. But Silicon Valley companies like Facebook, YouTube and Twitter have changed their approach over the past four years in response to controversies, including the use of their services for online bullying and misinformation.
They have developed policies governing many forms of harmful material, including banning misinformation about the coronavirus, and have hired small armies of moderators who review content and remove what breaks the rules. They also work with fact-checkers who help the companies label inaccurate content.
The rules that social media companies have devised for advertising are even stricter because companies don’t want to be seen as profiting from hate and other social ills.
Yet disinformation is creeping in and proliferating.
Substack, on the other hand, operates by standards that resemble those of early social media companies. Chief executive Best has said he wants to create a platform that “challenges conventional wisdom,” where “dissent is allowed.”
Best has even made a point of contrasting his business model with that of social media companies, saying that the goal of companies like Substack is to allow people to “take back” their minds from their social media feeds, which he calls “amplifying machinery.”
Joan Donovan, research director of the Technology and Social Change Project at the Shorenstein Center on Media, Politics and Public Policy, said the attitude of companies like Substack would only invite closer scrutiny.
“Openness is easily exploited, so a lack of policy means the brand’s reputation will be damaged whenever there is a major scandal,” she said. “Substack’s brand will be tied to its most controversial creators.”