Blame it on Bots: How Anger Against Jack Dorsey's 'Brahminical Patriarchy' Poster Was Stoked
Jack Dorsey recently appeared to endorse a ‘Smash Brahminical Patriarchy’ poster and Twitter went berserk with trolls. Bots allegedly shaped the conversation on the microblogging site.

New Delhi: After Twitter CEO Jack Dorsey found himself in the middle of controversy, bots allegedly went to work, spreading further misinformation and adding fuel to the fire. This, in fact, is at the heart of how Twitter can spread misinformation, new research has found.

After Dorsey appeared to endorse the ‘Smash Brahminical Patriarchy’ poster, bots allegedly shaped the conversation on the microblogging site. Researchers, in a study published in Nature Communications on Tuesday, were among the first to provide solid evidence of the role that bots play in spreading fake news and steering conversations.

In the case of Dorsey, the message was twisted and many were ‘informed’ that the Twitter CEO had made anti-Hindu statements. The research argues that automated Twitter accounts – or bots – spread bogus articles (for instance, during the 2016 US elections) through a process that makes these articles seem more popular than they are. The popularity, in turn, makes users believe that the information is credible.

Filippo Menczer, an informatics and computer scientist at Indiana University Bloomington, and his colleagues analyzed 13.6 million Twitter posts from May 2016 to March 2017. These messages were all linked to articles on sites known to routinely publish ‘fake news’. The team then used a computer program that had learned to recognize bots by studying different accounts, and used it to estimate the likelihood that each account in the dataset was a bot.
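For readers curious what such bot-scoring looks like in practice, here is a minimal Python sketch of a supervised classifier of the general kind the study describes. The study's own detection tool is far more sophisticated; the features, training data and numbers below are hypothetical placeholders.

```python
# Minimal sketch of a supervised bot classifier; feature names and
# training data are hypothetical placeholders, not the study's tool.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features: tweets per day, follower/friend
# ratio, account age in days, fraction of tweets that are retweets.
X_train = [
    [450.0, 0.02,   30, 0.95],  # labeled bot
    [  6.0, 1.10, 2400, 0.20],  # labeled human
    [300.0, 0.05,   90, 0.90],  # labeled bot
    [  3.0, 0.80, 1800, 0.15],  # labeled human
]
y_train = [1, 0, 1, 0]  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: predict_proba yields a bot likelihood
# rather than a hard label, matching how the study assigns scores.
account = [[120.0, 0.10, 45, 0.85]]
print(clf.predict_proba(account)[0][1])  # probability the account is a bot
```

The key design point is that accounts get a probability, not a verdict, which is why the researchers speak of the "likelihood" that an account is a bot.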

The study argued, “If the problem is mainly driven by cognitive limitations, we need to invest in news literacy education; if social media platforms are fostering the creation of echo chambers, algorithms can be tweaked to broaden exposure to diverse views; and if malicious bots are responsible for many of the falsehoods, we can focus attention on detecting this kind of abuse.”

How Do Bots Work?

A strategy often used by bots is to heavily share and promote a low-credibility article almost immediately after it is published. This creates a mirage of popular support for the article and its views, which in turn encourages human users to trust and share the information.

For instance, researchers found that in the first few seconds after a viral story appeared on Twitter, at least half the accounts sharing it were bots. Once the story had been around for two minutes, most of the accounts sharing it were real people.
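A rough illustration of that measurement: compare the fraction of bot-scored accounts among a story's earliest sharers with the fraction in a later window. The timestamps, scores and threshold in this sketch are invented for illustration.

```python
# Hedged sketch: fraction of bot-scored accounts among early versus
# later sharers of an article. All data below is illustrative.
from dataclasses import dataclass

@dataclass
class Share:
    seconds_after_publication: float
    bot_score: float  # e.g. from a classifier like the one sketched earlier

def bot_fraction(shares, start, end, threshold=0.5):
    window = [s for s in shares if start <= s.seconds_after_publication < end]
    if not window:
        return 0.0
    bots = sum(1 for s in window if s.bot_score >= threshold)
    return bots / len(window)

shares = [
    Share(1.2, 0.9), Share(2.5, 0.8), Share(4.0, 0.3),
    Share(130.0, 0.2), Share(150.0, 0.1), Share(200.0, 0.4),
]

print(bot_fraction(shares, 0, 10))     # early window: dominated by bots
print(bot_fraction(shares, 120, 300))  # later window: mostly humans
```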

A second strategy is to target people with many followers, either by mentioning them or by replying to their tweets with posts that link to “low-credibility content”. If even one such user retweets the article, the content gets boosted and widely shared. A simple sketch of this targeting pattern follows the study's explanation below.

The study explained, “A possible explanation for this strategy is that bots (or rather, their operators) target influential users with content from low-credibility sources, creating the appearance that it is widely shared. The hope is that these targets will then reshare the content to their followers, thus boosting its credibility.”

“People are vulnerable to these kinds of manipulation in the sense that they retweet bots who post low-credibility content almost as much as they retweet other humans. As a result, bots amplify the reach of low-credibility content, to the point that it is statistically indistinguishable from that of fact-checking articles,” the study added.
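The targeting pattern the study describes can be caricatured in a few lines of code: flag posts that both link to a low-credibility domain and mention a high-follower account. The domain list, follower counts and threshold here are hypothetical, invented purely for illustration.

```python
# Illustrative sketch of influencer-targeting: flag posts that link to
# a low-credibility domain AND mention a high-follower account.
# Domain list, follower counts and threshold are made up.
import re

LOW_CREDIBILITY_DOMAINS = {"example-fakenews.com"}          # hypothetical
FOLLOWER_COUNTS = {"celebrity": 40_000_000, "friend": 300}  # hypothetical

def is_targeting_influencer(text, follower_threshold=1_000_000):
    mentions = re.findall(r"@(\w+)", text)
    links_low_cred = any(d in text for d in LOW_CREDIBILITY_DOMAINS)
    targets_influencer = any(
        FOLLOWER_COUNTS.get(m, 0) >= follower_threshold for m in mentions
    )
    return links_low_cred and targets_influencer

post = "@celebrity you should see this http://example-fakenews.com/story"
print(is_targeting_influencer(post))  # True: influencer mention + low-cred link
```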

What Are the Implications of the Study?

What these findings mean is that by shutting down bot accounts, it is possible to curb the circulation of information and links that have little or no credibility. The team simulated a version of Twitter and found that removing just 10,000 accounts could cut retweets sharing false information by almost 70 percent.
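A toy version of that removal experiment might look like the sketch below: rank accounts by bot score, drop the top few, and measure how many low-credibility retweets disappear. The account IDs, scores and counts are illustrative, nothing like the study's Twitter-scale simulation.

```python
# Toy removal experiment: remove the k highest bot-scored accounts and
# measure the drop in low-credibility retweets. Data is illustrative.
retweets = [  # (account_id, bot_score) for each low-credibility retweet
    ("a1", 0.95), ("a1", 0.95), ("a2", 0.90), ("a3", 0.20),
    ("a4", 0.88), ("a4", 0.88), ("a5", 0.10),
]

def retweets_after_removal(retweets, k):
    # Remove the k accounts with the highest bot scores.
    scores = {acct: score for acct, score in retweets}
    removed = set(sorted(scores, key=scores.get, reverse=True)[:k])
    return [r for r in retweets if r[0] not in removed]

before = len(retweets)
after = len(retweets_after_removal(retweets, k=3))
print(f"retweets cut by {100 * (before - after) / before:.0f}%")  # ~71% here
```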

The problem, though, remains that it is often hard to differentiate between a human account and a bot. One solution suggested by the study is to require Twitter accounts to complete a CAPTCHA to prove that they’re not a robot before posting a message.

“While platforms have the right to enforce their terms of service, which forbid impersonation and deception, algorithms do make mistakes. Even a single false-positive error leading to the suspension of a legitimate account may foster valid concerns about censorship. This justifies current human-in-the-loop solutions which unfortunately do not scale with the volume of abuse that is enabled by software,” the study added.
