Alternative Social Network Minds Wants to Deradicalize — Not Deplatform — Extremist Users

'If your goal is deradicalization and to actually have a positive impact on global discourse, you cannot be pro-censorship,' said the company's CEO


Disclosure Notice: Ian Crossland, a co-founder of Minds, is an employee at Timcast.

A new research paper from Minds, an alt-tech, blockchain-based social network, challenges the notion that people with extreme views should be removed from social media.

The research team argues that the current practice of deplatforming ultimately drives more people to violent extremism and kills any possibility of deradicalization.

“The Censorship Effect” analyzes “the adverse effects of social media censorship” and proposes “an alternative moderation model based on free speech and Internet freedom.” The paper is part of the platform’s Change Minds initiative, a push to prioritize engagement with controversial views rather than sanitizing them from the internet.

Minds CEO and co-founder Bill Ottman has a vision for reimagining social media’s approach to radical content.

“None of these sites have consistent policies. There [are] no principles,” Ottman, also one of the paper’s authors, told Timcast. “They seem to just be responding to whatever the whims of whatever is currently trending or popular.”

A proponent of free information, the tech executive said major tech platforms consistently miss opportunities to deradicalize users by removing them from their sites.

“We need to convince the left, particularly, that if your goal is deradicalization and to actually have a positive impact on global discourse, you cannot be pro-censorship,” Ottman said. “I think it is foundationally necessary to have access to as much information and speech as possible in order to have the maximum impact and to be able to make informed decisions.”

The Change Minds team contends that, through research and data analysis, it can prove that free speech environments can lead to deradicalization, “which is a fair assumption because if you ban those people there is no way you are ever going to deradicalize them because you just banned them,” according to Ottman.

Deplatforming controversial or radical users, whether by suspending or banning their accounts, is sometimes considered a necessary step in preventing the spread of false or misleading information. However, Ottman says this claim really falls into a research “gray area” and could be considered a misinterpretation.

“Yes, censorship can limit the reach of certain individuals or topics in an isolated network,” Ottman said. “Obviously, if Twitter bans Trump, then Trump’s reach on Twitter individually is reduced.”

But a ban, especially of a major figure, can also increase on-platform interest in and conversation about the now-banned user, fueling more virality.

For users who do not have a significant following, being banned can drive them to another platform and motivate them to become more radicalized, Ottman says.

Ottman and his team believe the way to compel larger platforms to base their user policies on data is to prove the effectiveness of their strategy through research.

“You would think that they would want data to back up the fact that censorship is actually better for the world,” Ottman said. He believes that, absent empirical data to direct those actions, it is fair to assume big tech is governing its deplatforming policies ideologically.

The authors of the “Censorship Effect” paper stress the importance of distinguishing between “cognitive radicalization,” or holding extreme beliefs, and “violent extremism,” or posing a clear threat of violence, displaying clear support for it, or actively engaging in it.

“In science, it is crucial to understand that correlation does not equate to causality,” the study’s authors write. “While many violent extremists or terrorists radicalized, at least in part, online and often engage with radical or violent extremist content over social media, there is a difference between radicalizing while using social media and being radicalized by social media.”

With the goal of deradicalizing as many users as possible, the Change Minds initiative concentrates on positive intervention and voluntary dialogue.

Minds worked with Daryl Davis to develop a functional strategy for confronting extreme views on the platform rather than simply deleting content and removing users.

Davis, an African-American recording artist, describes himself as a race relations expert. He wrote the book Klan-Destine Relationships about his experience establishing relationships with members of the Ku Klux Klan to understand the roots of racial bias. He reports he opened conversations by asking “How can you hate me when you don’t even know me?” and has convinced at least 200 people to leave the KKK.

“Daryl proved that by befriending neo-Nazis and KKK members… that is how you result in deradicalization,” Ottman said. “You don’t bully somebody out of ideology. You have to listen to them and treat them as a human for there to be any chance to change.”

Davis helped train moderators at Minds to approach their jobs from a foundation of deradicalization and compassion, prioritizing a desire to connect over a sense of being triggered. Ottman stressed the importance of ensuring moderators on tech platforms are qualified and prepared to engage with graphic or radical content. 

Ottman says he envisions those involved with the initiative as people prepared to have conversations with users expressing an array of radical content, from racial superiority to suicide and self-harm.

“A lot of people just need someone to talk to,” Ottman said.

Internal Facebook documents published in October 2021 indicated the platform devoted a significant amount of financial resources to managing content deemed to be dangerous.

A 2019 post titled “Cost control: a hate speech exploration” explored how the company could reduce its spending on hate-speech moderation.

“Over the past several years, Facebook has dramatically increased the amount of money we spend on content moderation overall,” the post states. “Within our total budget, hate speech is clearly the most expensive problem: while we don’t review the most content for hate speech, its marketized nature requires us to staff queues with local language speakers all over the world, and all of this adds up to real money.”

Without including a specific dollar amount, the report indicated that 74.9% of Facebook’s hate-speech spending went to reactive costs, while proactive efforts accounted for the remaining 25.1%.

“Imagine if you took even half of those resources and put them toward positive intervention — people who aren’t going around banning users but who are around and reaching out to them and who are actually trying to create dialogue,” Ottman said. “I think you would see a massive impact. People would actually feel respected, they wouldn’t feel victimized.”

In addition to using its moderators to intervene rather than purge content, Minds uses “Not Safe For Work” filters to manage how openly accessible explicit or radical content is on the platform.

Ottman said the platform relies on community reporting. Once the content is reported, it is put behind a filter instead of being removed. 

Minds users can also build their own algorithm and select what content they are interested in viewing.

Ottman said his team will continue to study the progress of their deradicalization efforts over the next ten years. One of the metrics they will monitor is the number of users who choose to include opposing viewpoints in their newsfeeds.

While Ottman did not give explicit details on how Minds identifies users it considers radical, he said that the company is careful to protect users’ privacy and does not collect data in a way that could later be sold.

To Ottman, Minds is uniquely positioned to initiate an anti-censorship movement among tech platforms because it prioritizes neutrality more than other alternatives to the larger social media sites do.

“I don’t think that the red version of the blue site is the right move,” Ottman said. “I don’t think that they are approaching this in a long-term sustainable way.”

Gettr, TRUTH, and Parler are only attracting like-minded users and lack transparency, the executive said. Ottman argues that because the platforms are not open-source, there is no way to independently determine if they truly operate any differently from Facebook and Twitter.

Ottman called Gab “an interesting case” because, while the platform is open-source, it is very “conservative focused … almost religious.”

“They are being good in terms of transparency but I think that they are also inaccessible to the left,” Ottman said. “There is no way anyone on the left would join Gab and feel welcome.”

Ultimately, Ottman hopes his team’s research and Minds’s moderation policies will challenge large tech companies to rethink whether banning users they consider to be extreme is the best course of action. If the company’s data proves that censoring extreme users is more likely to lead to extremist violence, big tech may have to consider the notion that engaging in the debate has more positive results than canceling unwanted views.
