Meta expands safety features to protect children on Instagram
Meta has introduced a range of new safety tools on Instagram aimed at better protecting children (especially those under 13) and teen users:
- Adult-managed child accounts (e.g., those run by parents or talent managers posting kids' content) will now be placed under Instagram's stricter message settings, with Hidden Words filters enabled to block offensive comments. These accounts will no longer be recommended to suspicious adults, and their comments and posts will be hidden from potentially predatory users.
- For Teen Accounts, new DM safety features have rolled out: users now see the month and year the other account joined Instagram, get quick access to safety tips, and have a combined "block & report" button at the top of chat threads.
- Other updates include improved Location Notice alerts to flag potential sextortion (e.g., when the person on the other end of a chat may be in a different country). Data shows teens are engaging with these warnings: in June alone, they blocked ~1 million accounts and reported another ~1 million after seeing a safety notice.
- Meta's nudity protection, which automatically blurs suspected nude images in DMs and is enabled by default, remains widely adopted: 99% of teens keep it on, reducing exposure to unwanted content.
- Enforcement is ramping up: earlier this year, Meta removed nearly 135,000 Instagram accounts for posting sexualized comments on, or requesting images from, adult-managed child accounts, plus an additional ~500,000 linked accounts across Instagram and Facebook.
- These changes arrive as the broader safety features first launched with Teen Accounts are extended to adult-managed child profiles, closing gaps in protection.