Nationwide Social Media Ban for Under-16s Raises More Questions Than Answers
The Australian government is preparing to launch a nationwide social media ban for users under 16, set to begin in December 2025. Early findings from an independent trial, which describe the technology as “private, robust and effective,” are now being used to justify the plan. But beneath the polished language lies a series of unanswered questions and serious technological concerns.
Australia Tests High-Tech Age Assurance Tools
The trial was commissioned to assess the viability of age-assurance tools beyond government-issued IDs. Overseen by the UK-based Age Check Certification Scheme, it brought in 53 vendors using a range of technologies, from facial recognition to hand-movement and voice analysis, to estimate users’ ages. According to the project director, “there are no significant technological barriers,” and the tools can supposedly support both user privacy and child safety.
Public Backs Ban, But Privacy Fears Persist
Surveys suggest 90% of Australians support the idea of banning underage social media use, but nearly 80% express concerns about privacy and data security, and half worry about the accuracy of age estimation and the lack of clear oversight mechanisms. The gap between support in principle and doubt about the details suggests Australians back the goal but are not convinced it can be executed well.
Facial Recognition Fails to Prove Reliable
The trial claims that age checks “can be done in Australia,” but real-world tests tell another story. The ABC revealed that some face-scanning tools used in the trial misidentified 15-year-olds as people in their 20s or 30s, and could only estimate age to within an 18-month range in 85% of cases. On that margin, a 14-and-a-half-year-old could be read as 16 and gain access, while a 17-year-old could be read as 15 and a half and be wrongly blocked.
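To make that arithmetic concrete, here is a minimal sketch (a hypothetical Python illustration, not any trial vendor’s actual system) of how a ±18-month estimation band maps onto access decisions at a 16-year cutoff:

```python
# Hypothetical illustration: how a +/- 18-month estimation band can flip
# decisions at a 16-year cutoff. Not modelled on any trial vendor's system.

CUTOFF = 16.0  # legal threshold in years
BAND = 1.5     # reported accuracy band: +/- 18 months

def possible_outcome(true_age: float) -> tuple[str, str]:
    """Return the range an estimate could fall in and the worst-case decision,
    assuming the estimate lands anywhere within +/- BAND of the true age."""
    low, high = true_age - BAND, true_age + BAND
    if true_age < CUTOFF and high >= CUTOFF:
        worst = "wrongly allowed"
    elif true_age >= CUTOFF and low < CUTOFF:
        worst = "wrongly blocked"
    else:
        worst = "always correct"
    return f"estimate may read {low:.1f}-{high:.1f}", worst

for age in (14.5, 15.0, 16.0, 17.0):
    span, worst = possible_outcome(age)
    print(f"true age {age:4.1f}: {span} -> worst case: {worst}")
```

Even before counting the 15% of cases where the error exceeds 18 months, every age within a year and a half of the cutoff can land on the wrong side of it.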
Global Data Shows Systemic Bias in Age Estimation
These high error rates aren’t isolated. Global research from the U.S. National Institute of Standards and Technology confirms that even the best systems—like Yoti—struggle with accuracy, especially among women and people with darker skin. While Yoti’s average age estimation error is 1 year, other systems average errors as high as 3.1 years. That margin of error makes enforcement of the under-16 rule highly unreliable.
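As a rough illustration of why that average error matters near the cutoff, the toy simulation below (my own sketch, assuming zero-bias, normally distributed errors, which real systems do not guarantee) compares how often a 15-year-old would be waved through by an estimator with a 1-year mean absolute error versus a 3.1-year one:

```python
# Toy Monte Carlo sketch: pass rates at a 16-year cutoff for estimators with
# different mean absolute errors (MAE). Assumes zero-mean normal errors,
# an illustrative simplification rather than how NIST reports accuracy.
import math
import random

CUTOFF = 16.0
TRIALS = 100_000

def pass_rate(true_age: float, mae: float) -> float:
    """Fraction of simulated estimates that clear the cutoff.
    For a zero-mean normal error, sigma = MAE * sqrt(pi / 2)."""
    sigma = mae * math.sqrt(math.pi / 2)
    hits = sum(random.gauss(true_age, sigma) >= CUTOFF for _ in range(TRIALS))
    return hits / TRIALS

for mae in (1.0, 3.1):
    rate = pass_rate(15.0, mae)
    print(f"MAE {mae:.1f} years: a 15-year-old clears the check ~{rate:.0%} of the time")
```

Under these assumptions, a 1-year error lets the 15-year-old slip through roughly a fifth of the time; at 3.1 years it climbs to roughly two in five.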

New Methods, New Risks for Teen Privacy
The government has insisted on “alternative age assurance methods” to avoid sole dependence on ID, knowing many teenagers don’t have one. But even biometric and behavioral methods like voice recognition or hand-movement tracking introduce new risks. These approaches not only have technical flaws, but also raise serious privacy and consent issues for minors.
Debating a Higher Age Threshold
The idea of setting a higher age threshold, similar to how retail alcohol sellers check ID from anyone who appears under 25, is under discussion. A buffer like this would reduce the chance of underage access but increase the odds of wrongly excluding eligible users. The government now faces a dilemma: let some under-age users slip through, or risk over-policing the internet.
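Extending the same toy model (with a hypothetical buffer of 18 and the illustrative 3.1-year error from above) shows the shape of that trade-off: raising the decision threshold cuts the number of under-age users who get through, but sharply increases wrongful blocks of 16- and 17-year-olds.

```python
# Toy sketch of a threshold buffer: requiring the estimate to read 18+ (a
# hypothetical buffer) before granting access, versus a plain 16+ check.
# Same illustrative assumption of zero-mean normal errors as above.
import math
import random

TRIALS = 100_000
MAE = 3.1  # illustrative mean absolute error in years

def clears(true_age: float, threshold: float) -> float:
    """Fraction of simulated estimates at or above the decision threshold."""
    sigma = MAE * math.sqrt(math.pi / 2)
    hits = sum(random.gauss(true_age, sigma) >= threshold for _ in range(TRIALS))
    return hits / TRIALS

for threshold in (16.0, 18.0):
    slipped = clears(14.0, threshold)        # under-16 wrongly allowed
    blocked = 1 - clears(17.0, threshold)    # eligible 17-year-old wrongly refused
    print(f"threshold {threshold:.0f}+: 14-year-old slips through {slipped:.0%}, "
          f"17-year-old blocked {blocked:.0%}")
```

The numbers are illustrative, but the direction is not: any buffer that meaningfully reduces under-age access also multiplies the false rejections described above.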
Lack of Transparency Undermines Trust
A major issue remains the lack of transparency. The trial hasn’t clarified which tools platforms will actually use, how disputes will be handled, or what rights users have when they are wrongly flagged. Can parents appeal if their child slips through? What about adults falsely blocked? These are not minor concerns.
Loopholes Could Undermine the Entire Ban
What’s also unclear is how the government will prevent workarounds. Will platforms be required to re-verify users’ ages periodically? Can kids simply ask older siblings or friends to create accounts on their behalf? Without strong preventive measures, these loopholes could render the entire ban ineffective.
Legal Vagueness Leaves Platforms in a Grey Zone
The legislation requires companies to take “reasonable steps” to restrict access, but that term remains vague. Until a firm definition and enforcement model emerge, platforms may operate in grey areas. Whether “reasonable” includes using unreliable tech is yet to be tested legally or publicly.
A Nation Watching and Waiting
Despite these gaps, the government appears committed to moving forward. But Australians, especially parents, teens, and privacy advocates, will be watching closely. With less than six months before rollout, many feel they are still in the dark about how it will all work.
Ethics and Accuracy Must Lead the Way
The challenge ahead isn’t just technical—it’s ethical, social, and deeply human. If the goal is truly to protect kids online, then the tools used must be accurate, fair, and transparent. Until then, optimism should be tempered with caution.