Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.

A little less than a year remains until Election Day in 2020. A crowded stage of Democratic candidates and an impeachment inquiry into President Trump have further inflamed divisive political rhetoric. Many of these conversations occur online: in tweets, Facebook posts, Instagram stories and YouTube videos. Social media users, as well as politicians, are already starting to blame “Russian bots” for promoting trending political content.

The term “bots” often refers to automated accounts that publish lots of content and infiltrate online communities to try to sway online conversations.

Of course, election-related suspicion of bots makes sense. Robert S. Mueller III’s special counsel report documented how Russia tried to stoke societal divisions and undermine the integrity of the election process in 2016. Twitter identified 50,258 Russian-linked automated accounts tweeting “election-related content” at the time. While bots are an ongoing and serious threat, especially in how they might try to influence elections, a lot has changed since 2016. Automation in 2020 has become more nuanced.

The new reality is that when it comes to trending conversations, bot-like activity is all but inevitable. Nick Monaco, research director at the Institute for the Future’s Digital Intelligence Lab (DigIntel), claims there is “never something that’s trending that’s not in some way promoted by bots.” But not all bots are disinformation-spreading political bots. They range from spam bots to product-selling bots, and even random joke-making bots.

Automated accounts and misinformation exist on all the major tech platforms, but researchers often focus on Twitter because it is the most open and provides more data to track than other platforms such as Facebook and Instagram. Twitter has also traditionally not required accounts to use authentic identities the way Facebook has, so there is a perception that it is easier to automate on Twitter.

Monaco says the online atmosphere is low-risk, high-reward for bot builders. “There’s just really not a lot of consequences for the time being for spreading disinformation or for taking advantage of trending topics and using bots to effect some sort of goal online, whatever that may be.”

As we approach 2020, how much impact could these bots have on public opinion? Bots are still a problem, but how big a problem?

As Twitter and other tech companies create new policies and algorithms to identify automation, data analysts report that bots are having a less significant impact on the conversation.

After 2016, Twitter took significant steps to fight misinformation operations. The company says it continues to invest in countering bot activity, especially ahead of the 2020 elections.

A concerted effort is underway to build technology that is resistant to manipulation, said Yoel Roth, head of site integrity at Twitter. “And we’re always trying to introduce new systems, whether it’s trends, whether it’s search, whether it’s the conversation people are having or even how people sign up for Twitter in the first place to be able to detect whether somebody is trying to do something at scale with bots or with automation.”

Twitter’s more effective detection strategies are forcing bots to get better at hiding to remain on the platform. But as detection tactics evolve, so do bot networks. Researchers refer to this dilemma as an “arms race” and warn that more subtle manipulation threats will develop in the future.

“So the battle-space in 2020 is going to be a lot more complicated. And the hardest part of the response is going to be attributing any particular piece of activity to any particular actor,” Ben Nimmo, director of investigations at network analysis firm Graphika, told The Washington Post. “The most important thing is to isolate the behavior which is trying to distort the debate, is trying to interfere with the election, and make sure that that behavior doesn’t actually have an impact.”

Bots today have more believable online profiles, more advanced conversational skills and appear to be legitimate users embedded in human networks. Some automated accounts are also partially managed by humans — profiles known as “cyborgs” or “sock puppets.”

Data scientists also point to newer, more evolved tactics such as “inorganic coordinated activity” as a more nuanced online threat. “Inorganic coordinated activity” occurs when a group of humans, bots or a combination of both attempts to influence the online conversation by strategically releasing premeditated messaging at a specific time. The goal is for a small number of accounts — human or automated — to appear larger on Twitter than they are in reality.
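To make that concrete, here is a minimal sketch of how co-timed messaging might be surfaced. It is an illustration only, not any platform’s or lab’s actual method, and it assumes a hypothetical list of tweet records that each carry an author, text and a Python datetime timestamp; real detection systems weigh many more signals.

    # Illustrative sketch: flag bursts of near-identical messages posted by
    # multiple accounts inside the same short time window, a rough proxy for
    # premeditated, co-timed posting.
    from collections import defaultdict

    def flag_coordinated_bursts(tweets, window_minutes=10, min_accounts=20):
        """tweets: iterable of dicts with 'author', 'text' and 'timestamp' (datetime).
        Returns (message, set of accounts) pairs where many distinct accounts
        posted the same normalized text within one time window."""
        buckets = defaultdict(set)
        for t in tweets:
            text = " ".join(t["text"].lower().split())  # normalize case and whitespace
            window = (t["timestamp"].hour * 60 + t["timestamp"].minute) // window_minutes
            buckets[(t["timestamp"].date(), window, text)].add(t["author"])
        return [(text, accounts)
                for (_, _, text), accounts in buckets.items()
                if len(accounts) >= min_accounts]

Accounts that keep turning up in such bursts are candidates for closer review, though the signal by itself cannot distinguish bots from enthusiastic humans, which is part of why attribution is so hard.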

Analyzing data around trending conversations from major news moments such as the Democratic debates or the House impeachment inquiry could provide insights into what disinformation efforts will look like this time around. Here are a few examples we examined to see if and how bot or inorganic coordinated activity played a part.

Most recently, BuzzFeed reported that Twitter suspended accounts that tweeted, “I hired Donald Trump to fire people like Yovanovitch.” This was during the first week of public impeachment inquiry hearings, in which former U.S. ambassador to Ukraine Marie Yovanovitch testified to Congress. A Twitter spokesperson told The Post that initial investigations did not find any evidence of bot activity amplifying the phrase and that the conversation was believed to be driven by organic traffic.

Monaco, along with Nate Teblunthuis and Katie Joseff at DigIntel, analyzed almost 2.9 million tweets from 667,950 users during the fourth Democratic debate on Oct. 15. Their analysis found networks of coordinated users tried to jump on the virality of the #DemDebates hashtag to amplify unrelated causes and disinformation, a tactic they call “hashtag hijacking” or “hashtag surfing.”
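One rough way to spot hashtag surfing in a data set like that is to count which other hashtags most often appear alongside the trending tag and look for ones that have nothing to do with the debate. The sketch below is hypothetical, not DigIntel’s actual pipeline, and the default tag is simply the example from their analysis.

    # Illustrative: find hashtags that frequently ride alongside a trending tag.
    import re
    from collections import Counter

    def cooccurring_hashtags(tweet_texts, trending_tag="#demdebates", top_n=20):
        """tweet_texts: iterable of tweet strings. Returns the hashtags most often
        posted together with the trending tag; unrelated tags near the top can hint
        at accounts hijacking the trend to push other causes."""
        counts = Counter()
        for text in tweet_texts:
            tags = {tag.lower() for tag in re.findall(r"#\w+", text)}
            if trending_tag in tags:
                counts.update(tags - {trending_tag})
        return counts.most_common(top_n)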

They reported that one of the most interesting examples of this was a botnet promoting anti-vaccine content alongside the #DemDebates hashtag. Some 46 percent of the tweets pushing anti-vaccine disinformation came from bot accounts; 19 percent of these users average more than 100 tweets per day.

“These users are nearly all retweeting the antivax account @45HammerTime, and promoting his site maddadmaga.com, which sells antivax merchandise. They mostly retweet #VaccineIjury, a misspelled version of the known antivax hashtag #VaccineInjury,” Monaco said.
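Figures like these come down to straightforward counting once each account or tweet has been scored. The sketch below is hypothetical: it assumes every tweet record already carries a bot score from some upstream classifier (the genuinely hard part, which tools such as Botometer try to approximate), and it simply computes the share of tweets from high-scoring accounts and the share of users averaging more than 100 tweets a day.

    # Illustrative arithmetic behind figures like "46 percent of the tweets came
    # from bot accounts" and "19 percent of users average more than 100 tweets a day."
    from collections import Counter

    def bot_and_heavy_poster_shares(tweets, days_observed, bot_threshold=0.5):
        """tweets: iterable of dicts with 'author' and 'bot_score' (0-1, assumed to
        come from an external classifier). Returns (share of tweets from likely
        bots, share of users averaging more than 100 tweets per day)."""
        tweets = list(tweets)
        per_user = Counter(t["author"] for t in tweets)
        bot_tweets = sum(1 for t in tweets if t["bot_score"] >= bot_threshold)
        heavy_users = sum(1 for n in per_user.values() if n / days_observed > 100)
        return bot_tweets / len(tweets), heavy_users / len(per_user)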

During the third Democratic presidential debate on Sept. 12, one candidate was positively discussed on social media more than any other: tech entrepreneur Andrew Yang. But the online hype around Yang contrasted with his debate performance: He spoke the least of all 10 candidates onstage that night. So how did Yang gain so much positive traction on a night when he barely spoke to the audience?

DigIntel found some bot-like activity in the 3 million tweets analyzed within 48 hours of that same debate. The data set revealed that 11.6 percent of users posting the #yanggang hashtag showed signs of being bots. However, this is a typical amount of automation for any given political hashtag, according to DigIntel. Therefore, while some bot-like accounts promoted the #yanggang hashtag, it appears the majority of the conversation was pushed by real Yang supporters. This may have been coordinated in some way, but there are no major consequences for human coordination around political messaging.

Yang isn’t the only candidate to have inspired a viral hashtag after a debate. When Rep. Tulsi Gabbard (D-Hawaii) went after Sen. Kamala D. Harris (D-Calif.) during the July 31 debate, the #KamalaHarrisDestroyed hashtag prompted 150,000 tweets in the span of 24 hours, according to Graphika. Some media outlets and strategists suggested the hashtag could have been spread by bots. And Harris’s national press secretary shared a story about how Russia could be supporting Gabbard with propaganda.

Graphika dissected the first 50,000 tweets that used the #KamalaHarrisDestroyed hashtag. Despite online users accusing “Russian bots” of spreading the hashtag that night, Graphika found automated account activity was within normal range, meaning it was not enough to drive the conversation around the hashtag.

“In those first 50,000 tweets, 2,000 tweets came from just 50 accounts. Now that’s quite a high volume in terms of individual accounts posting, but that’s only about 4 percent of the traffic, and that’s a relatively low score. That’s something we’ve seen before in organic flows,” Nimmo said.
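That 4 percent figure is simple concentration arithmetic: 2,000 of the first 50,000 tweets came from the 50 busiest accounts. As a hypothetical sketch, given one author handle per tweet:

    # Illustrative: what share of a hashtag's traffic came from its most active accounts?
    from collections import Counter

    def top_account_share(authors, top_n=50):
        """authors: list of author handles, one entry per tweet.
        2,000 tweets from the 50 busiest accounts out of 50,000 total -> 0.04."""
        counts = Counter(authors)
        top_volume = sum(n for _, n in counts.most_common(top_n))
        return top_volume / len(authors)

As Nimmo notes, that level of concentration also shows up in organic conversations, so the number alone does not prove manipulation.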

The Twitter accounts that Graphika found actually drove the #KamalaHarrisDestroyed conversation were run by verified humans. Terrence K. Williams, a conservative actor and comedian with more than 610,000 followers, reportedly started the hashtag. Conservative video bloggers “Diamond and Silk” magnified the hashtag by sharing it with their 1.2 million followers.

Ben Decker, a lead analyst at Global Disinformation Index, said the most active accounts promoting the #KamalaHarrisDestroyed hashtag came from the fringes of both sides of the political spectrum. Although their politics differ, Decker said, the far left and far right often support the common goal of undermining a mainstream political candidate.

“It says something about how fringe communities with polar opposite ideologies have far more ideological adjacency,” Decker said. “There’s a lot of coming together in a non-coordinated way to tarnish someone who both feel is a threat to their own political beliefs.”

Experts say the 2020 election will remain a prime target for online interference. It is likely disinformation operations will continue to take advantage of the divisions within the United States, exacerbating political tension that already exists online. The methods for accomplishing this have not gone away; they have simply adapted.

“I suspect that we’ll see disinformation operations try and leverage existing communities to get their messages picked up. In the old days, someone could run an army of bots to try and get a hashtag to trend and have loads of people see it. These days it’s harder to run an army of bots,” Nimmo said. He suggested a new strategy would be to “try and get groups or particular influencers to amplify. . . . I suspect we will see more insidious attempts to embed content or plant phrases and hashtags and get real people to take them on.”