Twitter has announced that it will automatically demote replies that are likely to distort users’ conversations. The company says it will do so by ranking conversations based on thousands of “behavioral signals.”
Troll-like replies in “communal areas” of the platform will be pushed to the bottom, as will posts from users who repeatedly tweet at accounts that do not follow them. Users will have to click a “show more Tweets” button to see tweets that have been made less visible.
“The result is that people contributing to the healthy conversation will be more visible in conversations and search,” the company said in a blog post.
To weed out unhealthy contributions to Twitter conversations, the platform’s algorithm and human reviewers will look for certain signals, including how often a user is blocked by people they interact with, whether they have created multiple accounts from a single IP address, and whether the account is closely related to others that have violated the company’s terms of service.
“We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other,” the company added.
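Twitter has not published how these signals are weighed against one another. Purely as an illustration of how behavioral signals like the ones above might feed a demotion decision, here is a minimal sketch; the signal names, weights, and threshold are invented for this example and are not Twitter’s actual model.

```python
# Hypothetical sketch only -- Twitter has not disclosed its ranking model.
# All signal names, weights, and the threshold below are invented.
from dataclasses import dataclass


@dataclass
class ReplySignals:
    block_rate: float        # fraction of interactions that end in a block
    accounts_on_ip: int      # accounts created from the same IP address
    linked_violations: int   # ties to accounts that violated the rules


def demotion_score(s: ReplySignals) -> float:
    """Combine behavioral signals into a single score (made-up weights)."""
    score = 2.0 * s.block_rate
    score += 0.5 * max(0, s.accounts_on_ip - 1)   # extra accounts beyond the first
    score += 1.0 * s.linked_violations
    return score


def should_demote(s: ReplySignals, threshold: float = 1.5) -> bool:
    """In this toy model, replies scoring above the threshold are hidden
    behind the 'show more Tweets' button rather than removed."""
    return demotion_score(s) > threshold
```

Note that in this toy version, as in Twitter’s description, a high-scoring reply is only ranked lower and hidden behind a click, not deleted, which mirrors the company’s point that much of this behavior does not outright violate its rules.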
Improving the Twitter experience
The move is part of the company’s push to create healthier conversations on Twitter, a goal first announced by CEO Jack Dorsey in March. At the time, Dorsey admitted that the company hadn’t done enough to address hate speech and abuse on the platform.
“We didn’t fully predict or understand the real-world negative consequences,” he said.
Now the company is taking steps to demote posts that don’t facilitate healthy conversation. Although some of the tweets that Twitter plans to demote don’t outright violate the site’s policies, many of them have a negative impact on other users’ experience.
“What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search,” the company explained.
“Less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what’s reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large – and negative – impact on people’s experience on Twitter,” the company continued.
Twitter said it tested the changes in select markets and saw a noticeable drop in abuse reports: reports stemming from conversations fell by 8 percent, while reports stemming from search fell by 4 percent.