That’s significant because WhatsApp is encrypted and privacy is a major element of its branding. But the Facebook workers -- based in Austin, Dublin, and Singapore -- apparently don’t review all messages, only those that other users have flagged as abusive or illegal.
According to the report, private messages, images, and videos that other WhatsApp users have flagged as improper are first routed through Facebook’s artificial intelligence systems. The contractors then decide whether the complaints are valid. Should they find evidence of child pornography or terrorist activity, ProPublica says, the messages may be shared with law enforcement.
“WhatsApp is a lifeline for millions of people around the world,” Facebook said in a statement. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”
Facebook says it doesn’t ‘moderate’
Facebook also makes a point of saying it does not moderate WhatsApp content. “We actually don’t typically use the term for WhatsApp,” WhatsApp spokesman Carl Woog said, pointing out that the team’s mission is to “identify and remove the worst abusers.”
But ProPublica, a non-profit investigative journalism organization, maintains that the review “is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users.” It says the app is “far less” private than its users believe.
In 2016, two privacy groups, the Electronic Privacy Information Center and the Center for Digital Democracy, filed a complaint with the Federal Trade Commission (FTC) over claims that Facebook was mining data from WhatsApp subscribers for its digital advertising business.
WhatsApp is a free messaging app that was founded in 2009. It was acquired by Facebook in 2014 for $19 billion and now has more than 2 billion users worldwide.
Under Facebook’s ownership, WhatsApp partnered with Open Whisper Systems to add end-to-end encryption.