Australia’s decision to ban under-16s from social media has been met with widespread praise globally, with countries including the United Kingdom, Ireland, Singapore, and Japan reportedly considering similar legislation. This groundbreaking move aims to protect young people from the potential harms of social media by restricting their access. The law, passed in November 2024, is scheduled to take effect in December 2025. It requires social media platforms to restrict access for users under 16, but it bans the use of official IDs such as passports for age verification and prohibits tracking Australians. However, it does not specify what alternative methods platforms should use.
To address this gap, the Australian government has commissioned a trial of currently available technologies designed to “assure” users’ age online. The trial is managed by the Age Check Certification Scheme, a UK-based organization that specializes in testing and certifying identity verification systems. It is now in its final stages, with results expected by the end of June 2025. The aim is to identify technologies that could enforce the under-16s social media ban without relying on government-issued ID documents.
Age Verification and Age Assurance: Understanding the Difference
Age verification traditionally involves confirming a user’s exact age through official documents such as government IDs. Age assurance, however, is a broader concept that includes estimating a user’s age based on other factors, such as facial recognition or metadata analysis. These techniques attempt to determine if a user meets the minimum age requirement without necessarily identifying their precise age.
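To make the distinction concrete, here is a minimal Python sketch; the helper names and the example 14–19 range are illustrative assumptions, not any vendor’s actual API. Verification computes an exact age from a documented date of birth, while assurance accepts an estimated range only if even its lower bound clears the threshold.

```python
from datetime import date

MIN_AGE = 16  # Australia's threshold for the social media ban

def verified_age(dob: date, today: date) -> int:
    """Age verification: exact age computed from a documented date of birth."""
    years = today.year - dob.year
    # Subtract one year if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def passes_age_assurance(estimated_low: int, estimated_high: int) -> bool:
    """Age assurance: an estimator (e.g. facial analysis) returns a range,
    not an identity; access is granted only when even the lowest plausible
    age clears the threshold."""
    return estimated_low >= MIN_AGE

# Verification knows the user is exactly 15; assurance only knows the
# model's range, say 14-19, whose lower bound fails the check.
print(verified_age(date(2010, 3, 1), date(2025, 6, 1)))  # 15
print(passes_age_assurance(14, 19))                      # False
```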
In 2023, the Australian federal government declined to mandate age verification technology for adult content sites, citing concerns about the immaturity and limitations of current systems. Database checks were found to be costly, and alternatives like credit card verification were easily bypassed by minors. Additionally, Digital Rights Watch, a nonprofit advocacy group, pointed out that many verification methods can be circumvented through simple tools like virtual private networks (VPNs), which mask a user’s location.
Age assurance technologies also face significant challenges. For example, a 2023 report by the US National Academies of Sciences found that facial recognition systems often misjudge children’s ages because their facial features are still developing. Improving these systems would require collecting large datasets of children’s facial images, which raises serious privacy and ethical concerns, given that children’s privacy is protected under international human rights law.
Innovative Tech Trial Under Scrutiny for Flaws
The current trial includes 53 vendors competing to provide innovative solutions for age assurance. These technologies range from facial recognition systems offering “selfie-based age checks” to hand movement recognition technology that claims to calculate age ranges. Some vendors are experimenting with blockchain to securely store sensitive data.
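As a rough illustration of how a selfie-based check might turn a noisy age estimate into a decision, the sketch below applies an assumed error margin around the 16-year threshold. The margin value and the fallback step are assumptions for illustration, not a description of any trial vendor’s system.

```python
MIN_AGE = 16
ERROR_MARGIN = 3.0  # assumed estimator error in years, not a published figure

def selfie_age_gate(estimated_age: float) -> str:
    """Decide access from a single facial age estimate.

    Because estimators can be off by several years, especially for
    adolescents, a cautious deployment only auto-passes users whose
    estimate clears the threshold by the full margin, and routes
    borderline cases to a fallback check instead of guessing.
    """
    if estimated_age >= MIN_AGE + ERROR_MARGIN:
        return "allow"     # clearly over 16 even if overestimated
    if estimated_age < MIN_AGE - ERROR_MARGIN:
        return "deny"      # clearly under 16 even if underestimated
    return "fallback"      # ambiguous: escalate to another method

for est in (21.4, 16.5, 11.8):
    print(est, "->", selfie_age_gate(est))
```

The wide “fallback” band is exactly where the trial’s open questions sit: the narrower the estimator’s error, the fewer users must be pushed to more invasive checks.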
Despite the variety of technologies, tensions have emerged over the trial’s design. Critics point to a lack of attention to how these systems might be bypassed, to the privacy risks involved, and to vendors’ largely unverified claims of effectiveness. While the innovation is encouraging, many of the companies involved, such as IDVerse, AgeCheck, and Yoti, are relatively small startups without the influence or infrastructure to compel major tech platforms to adopt their systems.
This disconnect exposes a key problem: the companies building verification tools are not the ones required to integrate them into dominant platforms like Meta, Google, and Snap. Without active participation from these tech giants, enforcing the social media ban effectively is unlikely.
Lack of Engagement from Major Tech Companies
Several leading technology companies have shown minimal interest in participating in the Australian trial. For example, minutes from the trial’s March 2025 advisory board meeting revealed that Apple has been “unresponsive, despite multiple outreach attempts.” Apple recently announced plans for an age declaration system, which would transmit an age range to developers upon request. On Apple devices, child accounts would default to an “under 13” age range, leaving parents responsible for adjusting the declared age, and developers and governments responsible for enforcing the appropriate settings.
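Based on that public description, the flow would be roughly: an app asks the operating system for an age range and receives a coarse bucket configured by a parent, never a date of birth. The Python sketch below is a hypothetical interface built to illustrate that pattern; it is not Apple’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

# Coarse buckets only: the platform never reveals a date of birth.
AGE_BUCKETS = ["under13", "13-15", "16-17", "18+"]

@dataclass
class AgeDeclaration:
    bucket: str        # e.g. "under13"
    parent_set: bool   # True if a parent configured the account

    def __post_init__(self):
        assert self.bucket in AGE_BUCKETS, "unknown age bucket"

class DeviceAgeService:
    """Hypothetical stand-in for an OS-level age declaration service."""

    def __init__(self, declaration: Optional[AgeDeclaration] = None):
        # Default for a child account: "under13" until a parent changes it.
        self.declaration = declaration or AgeDeclaration("under13", parent_set=False)

    def request_age_range(self) -> AgeDeclaration:
        # A real system would prompt for user or parental consent here.
        return self.declaration

# A social app checking Australia's under-16 rule sees only the bucket.
service = DeviceAgeService()
decl = service.request_age_range()
allowed = decl.bucket in ("16-17", "18+")
print(decl.bucket, "allowed:", allowed)  # under13 allowed: False
```

Note that in this model the accuracy of the bucket depends entirely on what the parent declared, which is why the approach shifts enforcement responsibility rather than solving it.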
Google’s recent proposal to use Google Wallet for age verification also raises privacy and practicality concerns. The plan would require users over 16 to upload government-issued IDs linked to their Google account and trust Google not to track their online activity—a privacy promise that remains unproven. Moreover, Meta’s platforms like Facebook and Instagram do not allow login with Google credentials, complicating any cross-platform age assurance efforts.
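For contrast, one design that demands less trust in the verifier is a minimal bearer attestation: after a one-time ID check, the wallet holds a signed token whose only claim is “over 16,” with no identity or browsing linkage. The sketch below uses a symmetric HMAC purely for brevity; a real deployment would need asymmetric signatures or zero-knowledge proofs so verifiers cannot mint tokens, and nothing here describes Google’s actual implementation.

```python
import hmac, hashlib, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the attestation issuer

def issue_over16_token() -> dict:
    """Issuer checks ID once, then hands out a minimal claim.

    The token says only 'over16: true' plus a random nonce; it carries
    no name, no date of birth, and no account identifier, so verifying
    it tells the platform nothing else about the user.
    """
    claim = {"over16": True, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_token(token: dict) -> bool:
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over16"]

token = issue_over16_token()
print(verify_token(token))  # True: age proven, identity not revealed
```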
Interestingly, Google is promoting AI chatbots as social platforms accessible to children under 13, effectively creating a “social network of one” that falls outside the scope of the social media ban.
Rather than fully engage with Australia’s age verification systems, companies like Apple and Google appear to be pushing their own solutions, seemingly prioritizing user retention and shifting responsibility away from themselves.
Meta’s stance fits this pattern of resistance. In January 2025, Mark Zuckerberg publicly vowed to oppose international regulations that threaten Meta’s business model, signaling likely pushback against enforcement of the under-16s ban.
A Paradigm Shift in Internet Regulation
Australia’s approach to banning under-16s from social media represents a significant shift in internet regulation. Instead of focusing on age-gating specific content types like pornography or gambling, Australia targets the very platforms serving as communication infrastructure for billions globally. This approach reframes the issue as one centered on protecting children as children, rather than merely regulating content or business practices.
However, this raises complex questions about children’s digital rights. By limiting access to social media, Australia risks restricting children’s ability to access information and express themselves online. Critics argue the legislation prioritizes protection over participation, potentially isolating vulnerable children from essential digital spaces that foster learning, social interaction, and success.
The use of experimental age assurance technologies to enforce this ban adds another layer of complexity. The balance between safeguarding privacy, respecting children’s rights, and protecting them from harm is delicate and fraught with challenges.
As the ban’s implementation date approaches, it will be critical to monitor how these technologies perform and whether this regulatory approach effectively shields young Australians while respecting their rights and realities.
Australia’s social media ban for under-16s may offer some respite to concerned parents by restricting access to platforms they fear. Yet, paradoxically, many adults continue to rely on the same platforms daily for basic communication, illustrating the complex role social media plays in modern life.