A comprehensive study has confirmed that verifying the ages of social media users is technically feasible, but that relying on platforms to choose their own methods could lead to inconsistent outcomes. With just over three months remaining until the Australian government implements its social media ban for children under 16, the findings from a British research firm have raised significant concerns about the viability and reliability of various age assurance technologies.
The report, published by the federal government, indicates that age can be assessed with reasonable accuracy through multiple methods. Nevertheless, it stops short of identifying any single approach as the most effective, highlighting the risks and limitations inherent in each. According to the report, “Implementation depends on the willingness of a small number of dominant tech companies to enable or share control of age assurance processes.” It emphasizes that coordination among major technology providers is crucial for any comprehensive age assurance model to succeed.
The survey, which was conducted prior to the formal announcement of the under-16 social media ban, aimed to explore age assurance more broadly rather than evaluate the policy itself. As the responsibility shifts to social media platforms to verify user ages, the government’s forthcoming regulations will outline the “reasonable steps” these platforms must undertake to comply with the ban.
While the exact standards for accuracy and privacy safeguards have yet to be detailed, the report suggests that platforms will not be mandated to use specific methods for age verification. The approaches examined include formal verification through government-issued documents, parental approval, and emerging technologies that estimate age from facial features, gestures, or behaviours.
Despite the potential for these technologies, the study identified significant concerns regarding their reliability and privacy implications. Age assessment technologies were found to be less accurate for girls and non-white individuals, with an average error margin of two to three years. The reliance on government documents, such as passports or licences, raised privacy risks, as some providers were noted to retain user data unnecessarily, although these methods generally demonstrated greater accuracy. Parental controls, which are currently implemented in various forms by companies like Apple and Google, also presented a mix of privacy and accuracy challenges.
Despite these challenges, the survey identified several third-party verification providers capable of delivering effective age assurance while minimizing data retention. “This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” stated Anika Wells, the Communications Minister. She emphasized the importance of safeguarding user privacy while exploring effective solutions.
The report also addressed the fact that large social media and technology companies, including Meta, Snap, TikTok, Google, and Apple, have developed their own age assurance methods. While these platforms participated in the study to varying extents, the authors were unable to provide detailed evaluations of their proprietary systems. The report noted, “Individual services implement their own systems for account creation, age gates, content filtering, and parental features. However, these solutions often operate in isolation and are not interoperable across platforms.”
The study examined methods to prevent circumvention of age assurance processes, such as the use of virtual private networks and manipulated government documents. Although many providers are actively combating these evasion tactics, no foolproof solutions were identified. Both Anika Wells and Julie Inman Grant, the eSafety Commissioner, acknowledged that no method would be entirely secure.
Concerns regarding the effectiveness of age verification methods have been echoed by experts in the field. Lisa Given, an information sciences professor at RMIT University, expressed skepticism about the viability of the ban, suggesting that parents may face unexpected challenges. She warned of a “messy situation” in which age verification tools produce false positives and false negatives, with young users frequently misidentified as being older.
The report indicated that both the false positive and false negative rates for age verification using official documents hover around three percent. For technologies that estimate age from facial features or other traits, a “grey zone” of two to three years was noted, with errors sometimes exceeding four years. During a National Press Club address in June, Julie Inman Grant stated that the ban would likely incorporate multiple technologies, reiterating that no specific technology mandates would be imposed.
“The technology exists right now for these platforms to identify under-16s on their services,” she asserted. Furthermore, companies will be required to measure and report on the effectiveness of their age verification efforts, allowing for ongoing evaluation and evidence gathering in the future.
