Musk claims moderation stifles free speech on Twitter. He’s wrong

Elon Musk’s accepted bid to purchase Twitter has triggered a lot of debate about what it means for the future of the social media platform, which plays an important role in determining the news and information many people – especially Americans – are exposed to.

Musk has said he wants to make Twitter an arena for free speech. It’s not clear what that will mean, and his statements have fueled speculation among both supporters and detractors. As a corporation, Twitter can regulate speech on its platform as it chooses. There are bills being considered in the U.S. Congress and by the European Union that address social media regulation, but these are about transparency, accountability, illegal harmful content, and protecting users’ rights, rather than regulating speech.

Musk’s calls for free speech on Twitter focus on two allegations: political bias and excessive moderation. As researchers of online misinformation and manipulation, my colleagues and I at the Indiana University Observatory on Social Media study the dynamics and impact of Twitter and its abuse. To make sense of Musk’s statements and the possible outcomes of his acquisition, let’s look at what the research shows.

Political bias

Many conservative politicians and pundits have alleged for years that major social media platforms, including Twitter, have a liberal political bias amounting to censorship of conservative opinions. These claims are based on anecdotal evidence. For example, many partisans whose tweets were labeled as misleading and downranked, or whose accounts were suspended for violating the platform’s terms of service, claim that Twitter targeted them because of their political views.

Unfortunately, Twitter and other platforms often inconsistently enforce their policies, so it is easy to find examples supporting one conspiracy theory or another. A review by the Center for Business and Human Rights at New York University has found no reliable evidence in support of the claim of anti-conservative bias by social media companies, even labeling the claim itself a form of disinformation.

A more direct evaluation of political bias by Twitter is difficult because of the complex interactions between people and algorithms. People, of course, have political biases. For example, our experiments with political social bots revealed that Republican users are more likely to mistake conservative bots for humans, whereas Democratic users are more likely to mistake conservative human users for bots.

To remove human bias from the equation in our experiments, we deployed a set of benign social bots on Twitter. Each bot started by following a single news source, with some bots following a liberal source and others a conservative one. After that initial connection, all bots were left alone to “drift” in the information ecosystem for a few months. They could gain followers, and each acted according to an identical algorithm: following or following back random accounts, tweeting meaningless content, and retweeting or copying random posts in their feed.

Because this behavior was politically neutral, with no understanding of the content seen or posted, tracking the bots let us probe political biases emerging from how Twitter itself works and from how users interact.
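The drift behavior described above can be sketched as a simple simulation. This is an illustrative reconstruction, not the study’s actual code: the action mix, class names, and helper functions are all assumptions made for clarity.

```python
import random

# Neutral actions available to every drift bot; the bot has no
# understanding of the content it sees or posts (illustrative set).
ACTIONS = ("follow_random", "tweet_noise", "reshare_from_feed")

class DriftBot:
    """Minimal sketch of a politically neutral 'drift' bot.

    It starts by following a single news source, then repeatedly
    performs one randomly chosen neutral action per step.
    """
    def __init__(self, seed_source):
        self.friends = [seed_source]   # accounts the bot follows
        self.timeline = []             # what the bot has posted
        self.feed = []                 # posts seen from followed accounts

    def step(self, all_accounts, rng):
        action = rng.choice(ACTIONS)
        if action == "follow_random":
            # follow (or follow back) a random account
            self.friends.append(rng.choice(all_accounts))
        elif action == "tweet_noise":
            # tweet meaningless content
            self.timeline.append("noise-" + str(rng.randrange(10**6)))
        elif action == "reshare_from_feed" and self.feed:
            # retweet/copy a random post from the feed, if any
            self.timeline.append("RT: " + rng.choice(self.feed))

def run_drift(seed_source, all_accounts, steps=100, seed=0):
    """Run one bot for a fixed number of steps with a seeded RNG."""
    rng = random.Random(seed)
    bot = DriftBot(seed_source)
    for _ in range(steps):
        bot.step(all_accounts, rng)
    return bot

bot = run_drift("@liberal_source", ["@a", "@b", "@c"], steps=50)
print(len(bot.friends), len(bot.timeline))
```

The key design point is that every bot runs the exact same loop; the only difference between bots is the initial news source, so any political divergence that emerges over time must come from the platform or from other users, not from the bots themselves.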