
How Facebook helps predators find each other

Programming note: To accommodate some news, the next edition of Platformer will arrive Wednesday morning, instead of at 5PM PT Tuesday as usual. 

A Facebook logo displayed on a smartphone with stock market percentages in the background. (Photo illustration by Omar Marques / SOPA Images / LightRocket via Getty Images)

In 2019, YouTube realized that it had a problem. Parents who had uploaded seemingly innocuous footage of their children playing in swimsuits were surprised to find that some of the videos were getting hundreds of thousands of views.

It turned out that the company’s recommendation algorithms had unwittingly created a catalog of videos of young children in various states of undress and were serving them up to an audience of pedophiles. “YouTube never set out to serve users with sexual interests in children,” Max Fisher and Amanda Taub wrote in the New York Times, “but in the end… its automated system managed to keep them watching with recommendations” that one researcher called “disturbingly on point.”

I thought about YouTube’s predatory playlists over the weekend while reading about how Meta’s systems have been found to operate in a similar way. In a new investigation, the Wall Street Journal examined how automated systems on Facebook and Instagram continue to recommend content related to pedophilia and child sexual abuse material.

Here are Jeff Horwitz and Katherine Blunt:

The company has taken down hashtags related to pedophilia, but its systems sometimes recommend new ones with minor variations. Even when Meta is alerted to problem accounts and user groups, it has been spotty in removing them.

During the past five months, for Journal test accounts that viewed public Facebook groups containing disturbing discussions about children, Facebook’s algorithms recommended other groups with names such as “Little Girls,” “Beautiful Boys” and “Young Teens Only.” Users in those groups discuss children in a sexual context, post links to content purported to be about abuse and organize private chats, often via Meta’s own Messenger and WhatsApp platforms. 

The Journal’s report follows an earlier investigation in June that documented how Instagram is used to connect buyers and sellers of CSAM. That report found that viewing even a single account in a criminal network was enough to get “suggested for you” recommendations for buyers and sellers on the service, and that “following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.”

A follow-up report in September by the Stanford Internet Observatory, which aided in the Journal’s investigations, found that Meta had

...