ChannelLife Australia - Industry insider news for technology resellers
Nam Lam

AI agents are joining the public service - who's governing them?

Tue, 10th Mar 2026

Australian government agencies are under real pressure to do more with less. AI agents promise exactly that: automated workflows, faster service delivery and decisions made at a speed and scale no human team can match. It is no surprise that departments across federal and state government are already running or piloting AI-driven tools to handle everything from citizen enquiries to procurement approvals.

The debate has moved on. It is no longer about whether government agencies should adopt AI, but how quickly they can scale it.

That urgency is reflected in the Digital Transformation Agency's updated Policy for the responsible use of AI in government, which took effect on 15 December last year and reinforces the expectation that AI must be deployed with accountability and oversight.

Yet policy intent and operational reality do not always align. Implementation depends on how agencies interpret those requirements and translate them into operational controls. Recent reporting has highlighted that some government entities have failed to report cyber incidents to the Australian Signals Directorate as required. This illustrates a broader challenge: governance frameworks can exist on paper while execution lags in practice.

In my conversations with Australian organisations, many are still relying heavily on people and process to govern AI, through review committees, risk assessments and AI governance boards. Those structures are important, but they are periodic and deliberative by design. They are not built to detect and respond to autonomous activity in real time.

AI agents operate continuously. They do not pause for governance cycles.

The scale of adoption reinforces why this matters. SailPoint's Horizons of Identity Security 2024-2025 report indicates that 82% of organisations are using AI agents in some capacity. More than half acknowledge these agents access sensitive information, often daily. At the same time, 80% report unintended actions, including accessing or sharing data in ways that were not expected.

In the private sector, this creates operational and reputational risk. In government, the stakes are higher still.

A different kind of workforce problem

Human employees work within defined boundaries. They log in, complete tasks within a defined role and log off.

AI agents, however, introduce a challenge that traditional identity management frameworks were not designed to handle. An AI agent is goal-oriented. It pursues its objective by traversing systems, calling APIs and retrieving data at speed. If it has been given excessive access, or if its scope is poorly defined, it will use that access.

The analogy I often use is a capable contractor handed a master key on day one. Their intentions may be sound, but the access is too broad, and no one is watching which doors are opened.

The governance gap

What makes this particularly acute in government is the nature of the data involved. Agencies manage citizen records, health information, law enforcement data, tax files and social services information. A single agent operating beyond its intended scope can expose records across systems and trigger obligations under the Privacy Act, the Notifiable Data Breaches scheme and Essential Eight requirements.

Despite this, identity security maturity is lagging. While 92% of organisations acknowledge AI agent governance is critical, and 72% believe AI agents pose a greater risk than traditional machine identities, fewer than half have formal policies in place.

Part of the reason is that AI agents are often not deployed by security teams. They are introduced by innovation teams, business units and product owners focused on delivering outcomes quickly. The pressure to demonstrate value is immediate, and identity security is often treated as a secondary step. By the time a formal identity review takes place, the agent is already integrated into core systems and connected to multiple data sources.

This should concern government CIOs and CISOs. The dynamic that once drove SaaS sprawl is now emerging with AI agents. The difference is that these agents are autonomous actors operating at scale.

Treat agents like identities, not tools

The most practical shift organisations can make is to stop treating AI agents as software deployments and start governing them as identities. Every agent should be discovered and catalogued. Every agent should have a named human owner accountable for its purpose and access. Permissions should default to least privilege and be reviewed regularly as roles evolve.

Agencies need clear visibility into every AI agent operating across their environment, including those embedded in platforms such as Microsoft 365 Copilot, Salesforce or ServiceNow. Without a complete inventory, governance is guesswork. From there, the fundamentals apply: formal onboarding, explicit approval for access, periodic reviews and the ability to audit precisely what each agent accessed and when. In government, that audit trail is not optional.
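To make the identity-first fundamentals concrete, here is a minimal sketch of what governing an agent as an identity might look like: a named human owner, least-privilege scopes and an audit trail of every access attempt. All names and fields here are hypothetical illustrations, not any agency's or vendor's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """An AI agent governed as an identity, not a software deployment."""
    agent_id: str
    owner: str                                        # named human accountable
    allowed_scopes: set = field(default_factory=set)  # least-privilege grants
    audit_log: list = field(default_factory=list)     # who accessed what, when

    def request_access(self, scope: str) -> bool:
        """Permit only explicitly granted scopes; record every attempt."""
        granted = scope in self.allowed_scopes
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "scope": scope,
            "granted": granted,
        })
        return granted

# Illustrative agent with a single narrow grant
agent = AgentIdentity(
    agent_id="procurement-bot-01",
    owner="jane.citizen@agency.gov.au",
    allowed_scopes={"read:invoices"},
)

assert agent.request_access("read:invoices") is True
assert agent.request_access("read:tax-records") is False  # denied, and logged
```

The point of the sketch is the shape, not the code: every grant is explicit, every denial is recorded, and a named person can be asked why the agent holds each scope.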

Australia's Essential Eight, the Privacy Act's notifiable data breach regime and emerging AI governance frameworks all reinforce the same principle: accountability requires traceability. An AI agent that cannot be audited cannot be defended.

The opportunity

This is not an argument against AI in government. The productivity gains are real. Automating high-volume tasks allows public servants to focus on work that requires judgement and empathy. Faster, more consistent service delivery can strengthen public confidence.

But those gains are sustainable only when governance keeps pace with deployment. Agencies that anchor AI adoption in identity-first security will move faster in the long term. They will avoid the reputational damage and operational disruption that follow preventable access failures.

Australia has an opportunity to lead not only in deploying AI, but in demonstrating responsible deployment at scale. It begins with a clear standard: every AI agent that touches government data should be governed with the same rigour as every human who does.