The Australian Commonwealth government’s push for transparency in the use of artificial intelligence (AI) by federal agencies is facing significant challenges. A policy established last year mandates that most agencies publish “AI transparency statements” on their websites by February 2025. These statements are intended to detail how agencies utilize AI, including domains of application and safety measures. The overarching goal is to foster public trust in AI’s governmental use, without resorting to formal legislation. However, recent research indicates that many agencies are not complying with this directive.
In a study examining 224 federal agencies, only 29 were found to have easily identifiable AI transparency statements on their websites. A more comprehensive search turned up 101 links to such statements, a compliance rate of roughly 45% (101 of 224). Notably, some agencies, such as those in defence and intelligence, are only recommended, rather than required, to publish these statements, suggesting actual compliance could be even lower. These findings raise critical questions about the efficacy of Australia’s approach to AI governance in the public sector.
Significance of AI Transparency
Public trust in AI applications within Australia is already low, and the government’s hesitation to enact comprehensive legislation, a gap identified by the Robodebt royal commission, makes transparency essential. Citizens expect their government to set a standard for responsible AI usage. Yet the very policy designed to promote transparency appears to be overlooked by many agencies.
The lack of enforceable AI regulations at the national level could affect private sector practices as well. A recent study revealed that while 78% of corporations are aware of responsible AI practices, only 29% have implemented them. This disconnect underscores the need for proactive measures from government agencies to lead by example in the realm of AI governance.
Challenges in Accessing Transparency Statements
The transparency statement requirement is a central obligation under the Digital Transformation Agency’s policy for responsible AI use. Agencies must also designate an “accountable AI official” responsible for overseeing AI operations. Ideally, the transparency statements should be clear, consistent, and readily accessible from each agency’s homepage.
In collaboration with the Office of the Australian Information Commissioner, researchers conducted a thorough investigation to identify the presence of these statements. The methodology included automated website scanning, targeted Google searches, and manual review of federal agency lists. Despite these efforts, many statements were difficult to locate, often buried within subdomains or requiring extensive navigation.
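The study’s exact tooling is not described here, but a minimal sketch of how such an automated scan might work is shown below. It is only an illustration under assumed conditions: a hypothetical agencies.csv file listing agency names and homepage URLs, the common requests and BeautifulSoup libraries, and a simple keyword match on homepage links. A real audit would also need to crawl subpages and follow the buried paths described above.

    # Hypothetical sketch of an automated scan for AI transparency statements.
    # Assumes agencies.csv with columns name,homepage; not the study's actual tooling.
    import csv
    import re

    import requests
    from bs4 import BeautifulSoup

    # Loose pattern for link text or URLs that look like transparency statements.
    PATTERN = re.compile(r"ai[\s-]*transparency|transparency[\s-]*statement", re.IGNORECASE)

    def find_statement_links(homepage_url: str) -> list[str]:
        """Return homepage links whose text or address suggests an AI transparency statement."""
        response = requests.get(homepage_url, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        matches = []
        for anchor in soup.find_all("a", href=True):
            text = anchor.get_text(" ", strip=True)
            if PATTERN.search(text) or PATTERN.search(anchor["href"]):
                matches.append(requests.compat.urljoin(homepage_url, anchor["href"]))
        return matches

    if __name__ == "__main__":
        with open("agencies.csv", newline="") as f:
            for row in csv.DictReader(f):
                try:
                    links = find_statement_links(row["homepage"])
                except requests.RequestException as exc:
                    print(f"{row['name']}: could not scan ({exc})")
                    continue
                status = "; ".join(links) if links else "no statement link found on homepage"
                print(f"{row['name']}: {status}")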
Particularly concerning was the absence of statements for several agencies where publication is mandated. While this could be attributed to technical issues, the extensive effort required to uncover these documents suggests a failure in policy implementation.
The transparency requirement, while theoretically binding, lacks practical enforcement measures. There are currently no penalties for agencies that do not comply, nor is there a central registry to track adherence to the policy. This results in a fragmented landscape that ultimately undermines the trust the policy was intended to cultivate, leaving the public without a clear understanding of how AI impacts decisions affecting their lives.
International Comparisons and Future Directions
Globally, countries are addressing AI transparency in varying ways. The United Kingdom, for instance, has implemented a mandatory AI register; however, as The Guardian highlighted in late 2024, many departments had failed to disclose their AI usage despite legal obligations. There have been slight improvements this year, but high-risk AI systems identified by civil society groups still do not appear on the UK government’s register.
In contrast, the United States has adopted a more stringent approach. Federal agencies are required to assess and publicly register their AI systems, and an agency that does not comply must stop using the AI system in question, reflecting a commitment to transparency and risk mitigation.
As researchers continue to delve into the content of the existing transparency statements, the focus will be on determining their effectiveness. Are these statements meaningful, or do they merely serve as formalities? Early observations suggest a wide variation in the quality of disclosures.
For governments to genuinely commit to responsible AI usage, they must enforce their policies rigorously. If researchers struggle to locate transparency statements, the concept of transparency itself becomes questionable.

The authors thank Shuxuan (Annie) Luo for her contributions to this research.
José-Miguel Bello y Villarino is supported by the Australian Research Council as an Early Career Industry Fellow. Alexandra Sinclair and Kimberlee Weatherall are affiliated with the ARC Centre of Excellence for Automated Decision-Making and Society, which is also funded by the Australian Research Council.