
Rise of the [Ro]Bots

To spread misinformation like wildfire, bots strike a match on social media and then urge real people to fan the flames... and boy, were those flames fanned to dramatic effect only quite recently...




This much we know: automated Twitter accounts – called bots – helped spread bogus articles during and after the 2016 U.S. presidential election by making the content appear popular enough that human users would trust it and share it more widely, according to a study published in Nature Communications in November 2018. It's one possible reason why that utter buffoon is in the White House right now. Ironically, Trump often touts "fake news" as some sort of conspiracy against any negative PR he receives. Message to @realdonaldtrump – you can't have it both ways, Donald!

Sorry, I digress... I'll alight from my soapbox. Although people have often suggested that bots help drive the spread of misinformation online, this Nature Communications study is one of the first to provide solid evidence for the role that bots play. The finding suggests that cracking down on devious bots may help fight the fake-news epidemic.

Filippo Menczer, an informatics and computer scientist at Indiana University Bloomington, and colleagues analysed 13.6 million Twitter posts from May 2016 to March 2017. All of these messages linked to articles on sites known to regularly publish false or misleading information. Menczer’s team then used Botometer, a computer program that learned to recognise bots by studying tens of thousands of Twitter accounts, to determine the likelihood that each account in the dataset was a bot.
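For the geeks among you, Botometer is publicly available and has a Python client. Below is a minimal sketch of what scoring an account might look like, assuming the botometer-python package and your own Twitter and RapidAPI credentials; the placeholder keys and the account handle are hypothetical, and the exact shape of the response has changed between Botometer versions, so treat this as illustrative rather than gospel and check the project's documentation.

```python
# A minimal sketch of scoring a Twitter account with the Botometer Python
# client (pip install botometer). All credentials below are placeholders,
# and the response format varies across Botometer versions.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",  # Botometer's API is served via RapidAPI
    **twitter_app_auth,
)

# check_account returns a dict of bot-likelihood scores for one handle;
# higher scores mean the account looks more automated.
result = bom.check_account("@some_account")  # hypothetical handle
print(result)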

Unmasking the bots exposed how the automated accounts encourage people to disseminate misinformation. One strategy is to heavily promote a low-credibility article immediately after it’s published, which creates the illusion of popular support and encourages human users to trust and share the post. The researchers found that in the first few seconds after a viral story appeared on Twitter, at least half the accounts sharing that article were likely bots; once a story had been around for at least 10 seconds, most accounts spreading it were maintained by real people.
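To make that timing pattern concrete, here's a small illustrative sketch, emphatically not the researchers' actual code, of how you might measure it: given a log of shares, each tagged with its age since publication and a bot score, bucket the shares by age and work out what fraction in each bucket came from likely bots. Every number here, including the 0.5 score threshold, is invented for illustration.

```python
# An illustrative sketch (not the study's code) of measuring bot share by
# story age: bucket each share by seconds since the article was published
# and compute the fraction of shares in each bucket made by likely bots.

# Hypothetical share log: (seconds_since_publication, bot_score)
shares = [
    (1, 0.92), (2, 0.88), (3, 0.75), (4, 0.91), (6, 0.40),
    (12, 0.20), (30, 0.15), (45, 0.85), (60, 0.10), (300, 0.05),
]

BOT_THRESHOLD = 0.5  # arbitrary cut-off: a score above this counts as a bot

buckets = {"first 10s": [], "after 10s": []}
for age, score in shares:
    key = "first 10s" if age < 10 else "after 10s"
    buckets[key].append(score > BOT_THRESHOLD)

for key, flags in buckets.items():
    print(f"{key}: {sum(flags) / len(flags):.0%} of shares from likely bots")
```

Run on this made-up log, the early bucket comes out dominated by bot-like accounts while the later one is mostly human, mirroring the pattern the study describes.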

“What these bots are doing is enabling low-credibility stories to gain enough momentum that they can later go viral. They’re giving that first big push,” says VS Subrahmanian, a computer scientist at Dartmouth College not involved in the work.

The bots’ second strategy involves targeting people with many followers, either by mentioning those people specifically or replying to their tweets with posts that include links to low-credibility content. If a single popular account retweets a bot’s story, “it becomes kind of mainstream, and it can get a lot of visibility,” Menczer says.

These findings suggest that shutting down bot accounts could help curb the circulation of low-credibility content. Indeed, in a simulated version of Twitter, Menczer’s team found that weeding out the 10,000 accounts judged most likely to be bots could cut the number of retweets linking to shoddy information by about 70 percent.
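To get a rough feel for how such an experiment works, here's a toy model, not the team's actual simulation, in which every number is invented: generate synthetic accounts with bot scores, let the high-scoring ones retweet low-credibility links far more heavily than everyone else, then prune the 10,000 most bot-like accounts and measure how many retweets vanish.

```python
# A toy sketch (not the researchers' model) of the pruning experiment.
import random

random.seed(1)

# 100,000 hypothetical accounts, each with a bot-likelihood score in [0, 1].
scores = {acct: random.random() for acct in range(100_000)}

# Hypothetical retweet log of low-credibility links: bot-like accounts
# (score > 0.9) retweet far more heavily than ordinary users.
retweet_log = []
for acct, score in scores.items():
    n_retweets = 15 if score > 0.9 else random.choice([0, 0, 0, 1])
    retweet_log.extend([acct] * n_retweets)

def fraction_cut(n_removed: int) -> float:
    """Fraction of retweets removed by pruning the n most bot-like accounts."""
    pruned = set(sorted(scores, key=scores.get, reverse=True)[:n_removed])
    remaining = sum(acct not in pruned for acct in retweet_log)
    return 1 - remaining / len(retweet_log)

print(f"{fraction_cut(10_000):.0%} of low-credibility retweets disappear")
```

Because the synthetic numbers are arbitrary, the exact percentage it prints means nothing; the point is the mechanics of the experiment, where removing a small, carefully chosen slice of accounts erases a disproportionate share of the dodgy retweets.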

Bot and human accounts are sometimes difficult to tell apart, so if social media platforms simply shut down suspicious accounts, “they’re going to get it wrong sometimes,” Subrahmanian says. Instead, Twitter could require accounts to complete a captcha test to prove they are not a robot before posting a message.

Suppressing duplicitous bot accounts may help, but people also play a critical role in making misinformation go viral, says Sinan Aral, an expert on information diffusion in social networks at MIT not involved in the work. “We’re part of this problem, and being more discerning, being able to not retweet false information, that’s our responsibility,” he says.

Bots have used similar methods to manipulate online political discussions beyond the 2016 U.S. election, as seen in another analysis of nearly 4 million Twitter messages posted in the weeks surrounding Catalonia's bid for independence from Spain in October 2017. In that case, bots bombarded influential human users – both for and against independence – with inflammatory content meant to exacerbate the political divide, researchers reported online on November 20, 2018, in the Proceedings of the National Academy of Sciences.

These surveys help highlight the role of bots in spreading certain messages, says computer scientist Emilio Ferrara of the University of Southern California in Los Angeles and a co-author of the PNAS study. But “more work is needed to understand whether such exposures may have affected individuals’ beliefs and political views, ultimately changing their voting preferences.”

It makes me wonder: if bots got Trump into his current position, is there any chance they could do the world a favour and get him out? Hmm, perhaps that's not the way forward... still, one can dream...

If you want further information on this or any other subject within the digital domain, then we'd love you to get in touch via steve@eonic.com or on 01892 534044.