Meta whistleblowers raise child safety alarm

Meta whistleblowers say child safety data was buried in VR. Senators and the FTC are probing whether Meta put kids at risk as parents seek answers.

Meta is under intense scrutiny after whistleblowers came forward with troubling claims about how the company handled child safety research in its virtual reality platforms. They allege that Meta’s legal team deleted or blocked data that showed children under 13 were at risk, even as the company promised to make child safety a priority. Meta denies the allegations, but the Federal Trade Commission (FTC) and Senate investigators are now digging into what really happened.


Mark Zuckerberg talking about Meta

Credit: Meta

 

Whistleblowers say Meta buried child safety research

Several current and former employees told Congress that Meta suppressed findings about children being exposed to predators in its VR world, Horizon Worlds. In one alarming case, a boy younger than 10 was allegedly propositioned by adults while using a headset. According to two researchers who documented the incident in Germany, their manager later ordered the recording and notes of the child’s claims to be deleted.

Whistleblowers also point to internal projects that were shut down before they could make an impact. “Project Salsa” was meant to create parent-managed accounts for tweens, but employees say it was quietly scaled back after regulators began pressing Meta on compliance with child protection laws. “Project Horton,” which had a $1 million budget to study age verification, was canceled in late 2022, just before the holiday break. Researchers were told the cancellation was for budget reasons, but whistleblowers allege legal pressure played a role.

They further claim Meta’s lawyers repeatedly intervened to shape or block research on youth safety. In some cases, legal teams allegedly instructed researchers to avoid collecting data that could reveal children under 13 were active in VR. Critics say this created “plausible deniability” for Meta while leaving parents and regulators in the dark about the true risks kids were facing online.

A young person using a Meta VR headset to play games

Credit: Meta

 

Meta denies whistleblower claims on child safety in VR

For tomorrow’s Senate Judiciary subcommittee hearing, titled “Hidden Harms,” a Meta spokesperson provided the following on-the-record statement:

“The claims at the heart of this hearing are nonsense; they’re based on selectively leaked internal documents that were picked specifically to craft a false narrative. The truth is there was never any blanket prohibition on conducting research with young people and, since the start of 2022, Meta approved nearly 180 Reality Labs-related studies on issues including youth safety and well-being.” – Meta Spokesperson

 

Background on young people in VR (Meta position)

Meta says it has a system of safeguards for pre-teens and teens using VR:

  • Account setup: Users must provide their age. Pre-teens (10-12) can only use parent-managed accounts, while those 13 and older can open their own.
  • Parental controls: Parents must approve apps for pre-teens. They can also manage settings like voice chat, personal boundaries, follower lists, app usage, and time spent in Quest and Horizon Worlds.
  • Age enforcement: If Meta finds that a child under 13 is using an account meant for older users, it requires ID verification, account conversion to a parent-managed account, or deletion.
  • Content management: Parents may give permission for pre-teens to access age-appropriate Worlds.

Meta emphasizes that its VR platforms include tools to block problematic users, limit unwanted contact, and support safer experiences.

Image of Meta Quest 2

Credit: Meta

 

Background on AI chatbots (Meta position)

Meta also addressed protections for under-18s interacting with its AI products:

  • Guardrails in place: Chatbots are designed not to engage teens on topics such as self-harm, suicide, eating disorders, or inappropriate romantic conversations. Instead, they guide users to expert resources.
  • Limited access: Teens currently have access only to a select group of AI characters focused on education, creativity, and expression.
  • Ongoing updates: Meta says it is continually refining systems to add new safeguards and ensure safe, age-appropriate experiences with AI.

 

Senate and FTC investigate Meta child safety risks

The Senate Judiciary Subcommittee is probing the whistleblower claims under the banner of “Hidden Harms.” FTC regulators are also investigating whether Meta violated federal child protection laws by allowing kids to use VR platforms without proper safeguards. These investigations could reshape how tech companies are held accountable for safety in immersive digital spaces.

How parents can keep kids safe in VR

If your child uses VR, you can take practical steps right now to reduce risks. Whistleblowers say Meta struggled with underage users slipping into Horizon Worlds, so these safeguards are more important than ever.

 

1) Check ages and passwords

Make sure VR headsets are set up only for age-appropriate users. Meta says Quest accounts require age verification, but children often bypass the rules by using a parent’s login. Keep your headset password-protected and store the password where kids can’t access it. If your child is under 13, they should not be using a standard account; for children between 10 and 12, Meta requires a parent-managed account.

 

2) Remove personal data online

Predators look for digital breadcrumbs to learn more about potential victims. Regularly audit your child’s online presence. Remove unnecessary personal details from social media, VR accounts, and gaming platforms. Meta says it will delete underage accounts once flagged, but parents should also take steps to scrub personal data from the wider internet using privacy tools or data removal services. Less exposure online means fewer entry points for those who want to exploit kids.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven the most effective way to scrub personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with details they find on the dark web, making it harder for them to target you.

 


3) Turn on parental controls

Meta provides parental supervision tools on Quest and Horizon Worlds, but they only work if you activate them. These controls let you approve the apps your child downloads, manage voice chat and personal boundaries, and see how much time they’re spending in VR. Turning these on creates a digital “seatbelt” that helps protect your child from strangers or inappropriate experiences.

 

4) Set rules about strangers

One of the most troubling whistleblower claims involved a child under 10 being propositioned by adults in Horizon Worlds. Kids may not recognize these dangers until it’s too late. Talk openly with your child about the risks of engaging with strangers online. Create clear rules: don’t accept friend requests from people you don’t know in real life, report any uncomfortable interaction, and tell a parent immediately if something feels wrong.

 

5) Limit usage time

VR can feel immersive and addictive, and longer sessions may increase the chance of your child encountering predators or harmful content. Experts recommend limiting VR play to short, supervised intervals. Build in regular breaks to reduce both health risks like eye strain and exposure risks tied to online strangers. A consistent time schedule also helps you stay in control of when and how your child is logging in.


What this means for you

This debate is not just about Meta. It’s about how safe our kids are in emerging technologies. Whether it’s VR, social media, or AI chatbots, parents can’t always rely on companies to put safety first. Whistleblower claims highlight the need for families to stay proactive with privacy tools and to have ongoing conversations with their kids about online dangers.

 


Kurt’s key takeaways

Meta’s whistleblower revelations raise serious questions about whether profits were placed above protection. While Meta insists it is making progress with safety features and parental tools, lawmakers are demanding clearer answers. For parents, this moment is a reminder to stay alert and to take practical steps that keep your children safe.

Do you believe big tech companies like Meta can truly prioritize child safety, or is stronger government action the only way forward? Let us know in the comments below. 

FOR MORE OF MY TECH TIPS & SECURITY ALERTS, SUBSCRIBE TO MY FREE CYBERGUY REPORT NEWSLETTER HERE


Copyright 2025 CyberGuy.com.  All rights reserved.  CyberGuy.com articles and content may contain affiliate links that earn a commission when purchases are made.
