
Researchers are developing AI to stop online trolling

A team from Caltech wants to make it easier for companies to spot online harassment

Photo (c) porcorex - Getty Images
If you’ve been on the internet for more than five minutes, you’ve likely had some exposure to online trolling -- a practice in which a person deliberately tries to provoke frustration and anger in others.

While it might seem harmless at first glance, online trolling can be used by malicious individuals to target certain groups of people for harassment. Now, a team at Caltech wants to make it easier for online platforms to spot these individuals so that action can be taken. How do they plan to accomplish that feat? Machine learning and artificial intelligence.

“The field of AI research is becoming more inclusive, but there are always people who resist change,” said researcher Anima Anandkumar, who has been a victim of online harassment in the past. “It was an eye-opening experience about just how ugly trolling can get. Hopefully, the tools we’re developing now will help fight all kinds of harassment in the future.”

Spotting and analyzing keywords

Up to this point, preventing online harassment and trolling has been a difficult task. In most cases, automated systems look for negative posts or keywords and flag them to either be handled by human moderators or dealt with automatically.
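As a rough illustration, that baseline can be only a few lines of code. The sketch below is a generic example, not any platform's actual system; the keyword set and sample posts are placeholders.

```python
# A minimal sketch of keyword-based flagging: scan each post for
# watched terms and route matches to moderators or automated handling.
# The keyword list and posts are placeholders, not real moderation data.
FLAGGED_KEYWORDS = {"keyword1", "keyword2"}  # placeholder terms

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any watched keyword."""
    tokens = {word.strip(".,!?").lower() for word in post.split()}
    return not tokens.isdisjoint(FLAGGED_KEYWORDS)

posts = ["An ordinary comment", "A post containing keyword1"]
moderation_queue = [p for p in posts if is_flagged(p)]
print(moderation_queue)  # posts sent on for human or automated review
```

The obvious weakness, and the problem the Caltech work targets, is that a bare keyword match carries no sense of how the word was actually used.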

To improve that process, the Caltech team used a model called GloVe -- short for Global Vectors for Word Representation. The system also relies on keywords, but it takes the extra step of analyzing the context in which they are used.
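To make that concrete: GloVe represents each word as a vector learned from co-occurrence statistics, so words that appear in similar contexts end up near one another. The sketch below assumes the gensim library and one of its small pretrained GloVe downloads; the Caltech team trained embeddings on their own data rather than using an off-the-shelf set.

```python
# A hedged sketch of what GloVe provides: a table of word vectors in
# which words used in similar contexts sit close together. This loads
# a small pretrained GloVe set via gensim's downloader.
import gensim.downloader as api

glove = api.load("glove-twitter-25")  # 25-dimensional GloVe vectors

# Nearest neighbors approximate the contexts a word appears in.
print(glove.most_similar("female", topn=5))
```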

For example, the researchers say certain words like “female” often appear in posts describing the #MeToo movement, which focuses on sexual harassment and sexual assault. While the word can appear in positive posts, it can also be used in darker corners of the internet among misogynistic groups.

The researchers say that GloVe was able to distinguish cases in which “female” was used alongside other keywords on a Reddit forum dedicated to misogyny from positive Twitter posts discussing the movement.
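The core move, then, is to compare the same keyword's learned neighborhood across communities. The sketch below illustrates that idea only: it substitutes gensim's Word2Vec (a related embedding method that is easy to train locally, not the GloVe model the researchers used) and tiny toy corpora standing in for the real Reddit and Twitter data.

```python
# A sketch of the comparison idea: train embeddings separately on two
# communities' posts, then inspect which words sit nearest the same
# keyword in each. Word2Vec stands in for GloVe here, and the corpora
# are toy placeholders, not the researchers' data.
from gensim.models import Word2Vec

metoo_posts = [  # stand-in for positive #MeToo tweets
    ["female", "survivors", "deserve", "support"],
    ["proud", "of", "female", "colleagues", "speaking", "out"],
]
misogyny_posts = [  # stand-in for posts from a misogynistic forum
    ["female", "posters", "ruin", "every", "thread"],
    ["another", "female", "complaining", "again"],
]

def neighbors(corpus, word, topn=3):
    """Train a tiny embedding model on one corpus and return the
    words whose vectors are closest to `word`."""
    model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, seed=0)
    return model.wv.most_similar(word, topn=topn)

# The same keyword lands in different neighborhoods per community --
# the kind of signal a downstream classifier can learn from.
print("#MeToo corpus:  ", neighbors(metoo_posts, "female"))
print("Misogyny corpus:", neighbors(misogyny_posts, "female"))
```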

With time, the team hopes that this machine learning approach can evolve to help online platforms spot these negative cases more quickly.
