A new study conducted by researchers from the University of Southern California found that experts could have even more difficulty detecting artificial intelligence (AI)-driven bot accounts in upcoming elections because advances in technology have made these fake accounts much more human-like. 

For the study, the researchers compared and contrasted differences between bots during the 2016 elections versus the 2018 elections. They found that bots are copying human users’ actions on social media. 

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms,” said Ferrara. “As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content.” 

Bots becoming more human

The researchers sought to understand how bot technology evolved over time by analyzing nearly 250,000 social media users who used their accounts to discuss anything election-related in both 2016 and 2018.

They learned that 30,000 of the 250,000 users were bots posing as humans, and that the bots' tactics grew noticeably more sophisticated over time. In both election years, the bots adapted their communication styles to mimic human behavior and appear more human-like to other online users. 

During the 2016 elections, retweeting was popular among both bots and humans, since the goal was to draw widespread attention to one specific idea. By 2018, however, social media trends had shifted: users retweeted less and instead shared more of their own original content on their accounts -- and the bots followed suit. 

The researchers hypothesized that these fake accounts were working to appear more reputable to other social media users by doing everything possible not to raise suspicions about the legitimacy of their accounts. 

During the 2018 election, bots had started creating polls on Twitter and were more likely to engage with other users in replies/mentions to establish what appeared to be their own unique voices and opinions. 

Moving forward, the researchers hope that work can be done to better detect which accounts are fake and which are real to ensure that humans are only interacting with other humans on social media. 

“We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected,” said Ferrara. “With the upcoming 2020 U.S. elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influence.” 

Twitter doing its part

Earlier this year, Twitter unveiled its latest initiative that would help protect users against spam. 

In an effort to weed out spam accounts, the social media platform made it harder for users to create new accounts in 2018. More recently, it limited the number of accounts users can follow in a 24-hour period. Previously, users could follow up to 1,000 new accounts per day, but Twitter cut that number back to 400. 

“Follow, unfollow, follow, unfollow. Who does that? Spammers. So we’re changing the number of accounts you can follow each day from 1,000 to 400. Don’t worry, you’ll be just fine,” Twitter explained. 
