Australia launches voluntary AI safety standard, proposes mandatory guardrails
The Department of Industry, Science and Resources has released a new voluntary AI Safety Standard, accompanied by a proposal for ten mandatory guardrails targeting high-risk AI applications.
The standard has been designed following extensive consultations, incorporating input from a broad range of stakeholders, including public advocacy groups, academic institutions, industry representatives, law firms and government agencies. This collaborative process aims to ensure that the standard will evolve alongside the rapidly changing AI landscape.
Alongside the release of the voluntary standard, a consultation paper has been published proposing the introduction of mandatory guardrails in high-risk AI settings such as law enforcement, critical infrastructure and employment. These measures aim to provide additional protection and oversight where the deployment of AI could significantly affect public safety and welfare.
KPMG Australia, which has actively participated in previous public consultations on AI policy development, expressed strong support for the new standard. KPMG notably contributed to the Department of Industry, Science and Resources' Safe and Responsible AI in Australia discussion paper in August 2023, as well as several other policy discussions, underlining its longstanding commitment to AI regulation and ethical practices.
John Munnelly, KPMG Australia's Chief Digital Officer, commented on the importance of these new guidelines. "We welcome the Australian government's new voluntary AI Safety Standard as an important step in building safe and ethical AI practices," he stated. "The Standard will strengthen protections that promote safety in the deployment and use of AI whilst also promoting innovation. It is encouraging to see that the Standard is in alignment with international regulation and best practice."
Munnelly also praised the practicality of the new Standard, noting, "We appreciate the practical nature of the Standard which is grounded in examples of how to apply them to AI use cases. This is something we have implemented with KPMG's Trusted AI approach, one of three guiding principles we use to take a human-centred approach." He further welcomed the consultation on mandatory guardrails, underscoring the importance of establishing these measures in high-risk areas.
Additionally, Munnelly highlighted KPMG Australia's commitment to the newly established guidelines, stating, "KPMG Australia is committed to evaluating how we will implement the Standard and the extent to which our existing systems and processes are already aligned." He affirmed that such standards are crucial to Australia's progress as a high-tech, innovation-driven economy, facilitating the development of safe and reliable AI technology that benefits Australians and can be exported to global markets.