Meta seeks to hide harms from teens

Screenshots of what teens will see as Meta rolls out changes to Instagram, inviting them to see less sensitive content. (Meta)

Today, for a change of pace, let’s talk about something other than the fate of Substack and all those who dwell upon it. 

Instead, let’s look at the mounting pressure on social networks to make their apps safer for young people, and the degree to which both platforms and regulators continue to talk past one another.

Let’s start with the news. Today, Meta said it would take additional steps to prevent users under the age of 18 from seeing content involving self-harm, graphic violence and eating disorders, among other harms.

Here’s Julie Jargon at the Wall Street Journal:

Teen accounts — that is, accounts of under-18 users, based on the birth date entered during sign-up — will automatically be placed into the most restrictive content settings. Teens under 16 won’t be shown sexually explicit content. On Instagram, this is called Sensitive Content Control, while on Facebook, it is known as Reduce. Previously, teens could choose less stringent settings. Teen users can’t opt out of these new settings.

The new restricted status of teen accounts means teens won’t be able to see or search for harmful content, even if it is shared by a friend or someone they follow. For example, if a teen’s friend has been posting about dieting, those posts will no longer be visible to the teen. However, teens might still see content related to a friend’s recovery from an eating disorder.

The changes announced today fall broadly into a category of platform tweaks that could be labeled “They weren’t doing that already?” But the fact that Meta made these changes, which the Journal calls “the biggest change the tech giant has made to ensure younger users have a more age-appropriate experience on its social-media sites,” underscores the degree to which child safety has become the most important dimension along which social networks are being judged in 2024.

How did that become the case?

The most consequential shift in attempts to regulate social media over the past year has been in the theory of harm. In the period after the 2016 US presidential election, regulators focused heavily on issues related to speech. Democrats emphasized the way platforms can amplify lawful but harmful material, including hate speech and misinformation. Republicans took the opposite position, protesting platforms’ right to moderate content and calling for an

...