Yesterday, Twitter took another step in its campaign to crack down on threatening or abusive content on its platform by updating its policy on violent threats.

The original policy banned “direct, specific threats of violence against others.” The new policy drops the first two words and now prohibits simply “threats of violence against others.” This deliberately vaguer wording is intended to give Twitter's moderators more leeway in deciding what constitutes a “threat.” Under the old “direct, specific” standard, trolls and abusers could, for example, express the wish that violence befall someone; because such a statement is neither direct nor specific, it technically was not prohibited.

Last February, Twitter CEO Dick Costolo admitted in an internal email (later leaked to the press) that “We suck at dealing with abuse and trolls on the platform and we've sucked at it for years. ... We lose core user after core user by not addressing simple trolling issues that they face every day.”

In March, the company announced that it would finally crack down on “revenge porn,” the practice of publishing nude or sexually explicit photos of people (usually women) without their permission. At the time, Twitter updated its “Content boundaries” to say “You may not post intimate photos or videos that were taken or distributed without the subject's consent.”

Frozen out

But this time, Twitter has done more than change its posted policies; it is also changing how it responds to the writers of harassing tweets. When an account is reported for suspected abuse, Twitter now reserves the right to “freeze” it, requiring the offender both to delete the problematic tweets and to supply a valid phone number before the account is reinstated.

(As a Washington Post blogger put it, “Essentially, Twitter is putting users in time-out and making it easier to identify them down the line.”)
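That enforcement flow can be pictured as a simple state machine. The following is a minimal sketch of the process as described above; all of the names here (Account, freeze_for_abuse, and so on) are hypothetical illustrations, not Twitter's actual internals.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    handle: str
    frozen: bool = False
    flagged_tweet_ids: set = field(default_factory=set)
    phone_verified: bool = False

    def freeze_for_abuse(self, tweet_ids):
        """A reported account is frozen until both remediation steps are done."""
        self.frozen = True
        self.flagged_tweet_ids.update(tweet_ids)

    def delete_tweet(self, tweet_id):
        """Deleting a flagged tweet is one of the two conditions for reinstatement."""
        self.flagged_tweet_ids.discard(tweet_id)

    def verify_phone(self):
        """Supplying a valid phone number is the other condition; it also makes
        the user easier to identify if they offend again ("time-out")."""
        self.phone_verified = True

    def try_reinstate(self) -> bool:
        """Unfreeze only once all flagged tweets are gone and a phone is on file."""
        if not self.flagged_tweet_ids and self.phone_verified:
            self.frozen = False
        return not self.frozen
```

The point of the design, as the Post blogger notes, is that the phone number outlives the freeze: a troll who serves the time-out is now tied to a real-world identifier.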

In a company blog post discussing the new policies, Twitter's Director of Product Management, Shreyas Doshi, said that in addition to the policy changes,

[W]e have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content.

In other words, Twitter has a new algorithm intended to keep abusive tweets from being seen in the first place; the hope is that a troll who knows his intended victim will never see his threats will lose interest in sending them. The filter won't hide tweets from accounts you've chosen to follow, but it will prevent, or at least reduce the frequency of, a random troll's threatening comments appearing in your feed.
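To make the idea concrete, here is a toy sketch of the kind of signal-based filter Doshi describes: score an incoming tweet on a few heuristics and suppress it from the recipient's feed if it looks abusive, unless the recipient follows the sender. The specific signals, weights, threshold, and example corpus below are invented for illustration; Twitter has not disclosed how its feature actually works.

```python
from difflib import SequenceMatcher

# Placeholder stand-in for content the safety team has previously judged abusive.
KNOWN_ABUSIVE = ["nobody would miss you", "you deserve to be hurt"]
ABUSE_THRESHOLD = 0.6  # invented cutoff


def similarity_to_known_abuse(text: str) -> float:
    """Best fuzzy-match ratio against previously flagged content (one of the
    'signals' the blog post mentions)."""
    return max(SequenceMatcher(None, text.lower(), bad).ratio()
               for bad in KNOWN_ABUSIVE)


def abuse_score(text: str, account_age_days: int) -> float:
    """Combine two signals named in the post: the age of the account and the
    tweet's similarity to known-abusive content."""
    new_account_signal = 1.0 if account_age_days < 7 else 0.0
    return 0.4 * new_account_signal + 0.6 * similarity_to_known_abuse(text)


def should_show(text: str, account_age_days: int,
                sender: str, recipient_follows: set) -> bool:
    # Content you explicitly sought out (accounts you follow) is never filtered.
    if sender in recipient_follows:
        return True
    return abuse_score(text, account_age_days) < ABUSE_THRESHOLD
```

Note the asymmetry: the filter limits the tweet's *reach* rather than deleting it, which is why Doshi describes it as reducing potential harm rather than as a ban.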

Like most such efforts, Twitter's new policies and features are a work in progress; Doshi's blog post ended with the observation that “as the ultimate goal is to ensure that Twitter is a safe place for the widest possible range of perspectives, we will continue to evaluate and update our approach in this critical arena.”
