Australia has announced a world-first social media ban aimed at protecting children under 16 from online dangers. However, experts warn that the initiative will not serve as a comprehensive solution for child safety online. The age-assurance technologies recently evaluated in a government-commissioned trial have raised concerns about their effectiveness and potential shortcomings.
A trial conducted by the Age Check Certification Scheme evaluated over 60 tools designed to verify users’ ages. These methods included matching individuals with documentation and estimating age based on physical characteristics. The results indicated that while technology could potentially be utilized “privately, efficiently and effectively” to restrict access to explicit content, it is not foolproof.
Faith Gordon, an associate professor of law at the Australian National University, expressed skepticism about the reliability of age-assurance technology. “There’s going to be groups of young people that will still get around this,” she stated. “I don’t think it’s a watertight solution at all. Age-assurance technology is clearly not the ‘silver bullet’ to make the digital world safer for children.”
The social media ban mandates that platforms must take “reasonable steps” to enforce age restrictions, yet it does not prescribe specific methods for doing so. Experts from the Age Check Certification Scheme noted that the systems tested were “generally secure and consistent with information security standards.” Despite this, they cautioned that the rapidly evolving online threat landscape means these systems cannot be considered infallible.
Facial age-estimation technology, a common tool for age assurance, has also been criticized for bias. Gordon pointed out that it often misjudges the ages of individuals who do not fit certain demographic profiles. Accuracy also drops significantly for individuals close to the cut-off age, with nearly one in ten 16-year-olds wrongly denied access.
While the ban seeks to prevent children from creating accounts on platforms like Facebook, Instagram, TikTok, and others, it does not eliminate the risk of underage users accessing these services. Children could still be groomed through other online channels or gaming platforms such as Fortnite, where they might encounter predatory behavior.
The report from the Age Check Certification Scheme raised concerns about unnecessary data retention, suggesting that tech companies could be over-preparing for future regulations. This could lead to increased risks of privacy breaches due to the excessive collection and retention of personal data.
The Greens party has called on the government to reconsider its approach to age-verification technology. Senator David Shoebridge stated, “The age-assurance trial findings accidentally prove the social media age ban is unworkable and it is time to rethink this flawed approach.”
The impending social media ban, which will take effect in December 2025, was announced by the federal government as a measure to safeguard young Australians. Communications Minister Anika Wells highlighted that the findings from the trial demonstrate that effective methods exist for enforcing age limits on social media platforms. Companies that fail to comply could face fines of up to $49.5 million.
“This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” Wells remarked. “While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and importantly, that user privacy can be safeguarded.”
As Australia moves forward with its social media ban, the effectiveness of age-assurance technologies remains a topic of intense debate among experts, policymakers, and stakeholders. The potential risks and limitations of these systems underscore the ongoing challenge of ensuring a safe online environment for children.
