
Why runtime identity is emerging as the next cybersecurity imperative

Tue, 21st Apr 2026

Organisations are rapidly adopting artificial intelligence agents, drawn by the promise of new revenue streams, faster customer engagement, and significant productivity gains.

From customer service chatbots to internal copilots, these systems are fast becoming embedded in day-to-day operations. However, as organisations enter this agentic future, a critical question is surfacing: how do you control something that doesn't behave predictably?

Unlike traditional software, AI agents are non-deterministic. They can make decisions, adapt their behaviour, and pursue goals in ways that are not always foreseeable. This flexibility is what makes them powerful, but it is also what makes them risky.

Without the right controls, an agent may take shortcuts that bypass established safeguards, exposing organisations to operational and security failures. The challenge is not simply one of access, but of identity, because access grants permission; it does not enforce control.

From credentials to context

For decades, identity and access management has relied on relatively static concepts. Users are authenticated, granted permissions, and allowed to operate within defined boundaries. These controls are typically established at a single point in time, often when a user is onboarded or a role is assigned.

That model is not sufficient on its own in an agent-driven environment. If an AI agent is granted the same standing credentials as the human it represents, it inherits not only authority but also the potential to misuse it. Standing tokens create standing privilege, and that privilege often outlives the conditions that made it appropriate. 
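To make the contrast concrete, here is a minimal Python sketch of issuing a task-scoped, short-lived credential instead of a standing token. The names (ScopedToken, issue_for_task), the scope strings and the five-minute lifetime are illustrative assumptions, not any particular vendor's API.

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ScopedToken:
    """A short-lived credential bound to one agent, one delegator, one scope set."""
    agent_id: str
    delegator: str     # the human or system the agent acts on behalf of
    scopes: frozenset  # the only operations this token permits
    expires_at: float  # seconds since the epoch
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def is_valid(self, scope: str) -> bool:
        # Privilege lapses on its own; no revocation sweep is needed.
        return scope in self.scopes and time.time() < self.expires_at


def issue_for_task(agent_id: str, delegator: str, scopes: set, ttl_s: int = 300) -> ScopedToken:
    """Mint a credential scoped to one task, expiring shortly after it."""
    return ScopedToken(agent_id, delegator, frozenset(scopes), time.time() + ttl_s)


# The agent receives only what this task needs, only for as long as it needs it.
tok = issue_for_task("support-agent-7", "alice@example.com", {"customer:read", "customer:update"})
assert tok.is_valid("customer:update")
assert not tok.is_valid("customer:delete")  # deletion was never granted
```

Because the token expires on its own, privilege cannot quietly outlive the conditions that made it appropriate.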

A customer service agent, for instance, might determine that the fastest way to update a user's details is to delete and recreate an account. While this is technically effective, it could be operationally disastrous.

The issue is not malicious intent but rather a lack of contextual judgement. Static permissions alone cannot anticipate the fluid, real-time decisions agents might make.

Treating agents as first-class identities

Addressing this requires evolving how identity principles are applied: organisations must treat AI agents as identities in their own right.

Just as human users are registered, authenticated, and governed, agents must be explicitly identified within enterprise systems.

This ensures visibility over what they are, what they can access, and how they behave. Without this, agents become invisible actors operating with borrowed authority.

Organisations must move beyond the traditional model of extending human credentials to machines through impersonation. Instead, they must enforce explicit delegation, defining precisely what an agent can do, when it can do it, and under what constraints.
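One way to picture explicit delegation is as a small, inspectable record rather than an inherited role. In the sketch below, the business-hours window and the refund cap are hypothetical constraints standing in for whatever limits an organisation actually sets; the point is that the grant can answer all three questions (what, when, within what limits) at the point of use.

```python
from dataclasses import dataclass
from datetime import datetime, time


@dataclass(frozen=True)
class DelegationGrant:
    """An explicit record of what an agent may do, when, and within what limits."""
    agent_id: str
    on_behalf_of: str
    allowed_actions: frozenset        # an explicit allowlist, not an inherited role
    business_hours_only: bool = True  # example temporal constraint
    max_refund_aud: float = 100.0     # example value constraint

    def permits(self, action: str, amount: float = 0.0, now=None) -> bool:
        now = now or datetime.now()
        if action not in self.allowed_actions:
            return False
        if self.business_hours_only and not time(9) <= now.time() <= time(17):
            return False
        return amount <= self.max_refund_aud


grant = DelegationGrant(
    agent_id="support-agent-7",
    on_behalf_of="alice@example.com",
    allowed_actions=frozenset({"update_details", "issue_refund"}),
)

when = datetime(2026, 4, 21, 10, 30)                           # within business hours
print(grant.permits("issue_refund", amount=50.0, now=when))    # True: within all limits
print(grant.permits("issue_refund", amount=5000.0, now=when))  # False: exceeds the cap
print(grant.permits("delete_account", now=when))               # False: never delegated
```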

However, simply assigning identities is not enough. Agents do not act independently but rather operate on behalf of others. This introduces a second layer of complexity: relationships.

An agent is, by definition, a delegate. It carries out tasks for a human or another system, often with a subset of their permissions. Understanding and enforcing this "on behalf of" relationship is critical. It determines not only what an agent can do, but also when it must defer to human oversight.

In high-risk scenarios, this may require a human-in-the-loop approach, where certain actions trigger approvals before execution. This mirrors real-world delegation, where authority is rarely absolute.
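A minimal way to encode that deferral is a per-action decision with three outcomes, where high-risk operations pause for approval instead of executing. The action names and risk list below are invented for illustration; the point is the ESCALATE path, which interrupts the agent before a technically valid but dangerous step runs.

```python
from enum import Enum, auto


class Decision(Enum):
    ALLOW = auto()
    DENY = auto()
    ESCALATE = auto()  # pause and require human approval before execution


# Hypothetical examples of actions too consequential to run unattended.
HIGH_RISK_ACTIONS = {"delete_account", "bulk_export", "change_payout_details"}


def evaluate(action: str, delegated_actions: set) -> Decision:
    """Per-action runtime check that defers to a human when the stakes warrant it."""
    if action not in delegated_actions:
        return Decision.DENY
    if action in HIGH_RISK_ACTIONS:
        return Decision.ESCALATE  # trigger an approval workflow rather than proceed
    return Decision.ALLOW


delegated = {"update_details", "delete_account"}
print(evaluate("update_details", delegated))  # Decision.ALLOW
print(evaluate("delete_account", delegated))  # Decision.ESCALATE: a human signs off first
print(evaluate("bulk_export", delegated))     # Decision.DENY: never delegated at all
```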

A growing ecosystem of agents

Complicating matters further is the diversity of agents now entering enterprise environments.

Customer service agents, for example, are evolving beyond simple chat interfaces. To deliver meaningful outcomes, they need access to sensitive data such as customer profiles, transaction histories and account settings. This raises the stakes for ensuring they are not over-privileged.

Internally, employee digital assistants are becoming indispensable tools. They query systems, synthesise information, and automate workflows, effectively acting as digital coworkers. Yet their deep integration with enterprise systems makes them a potential point of vulnerability if not properly governed.

Beyond organisational boundaries, customer personal agents are also gaining traction. Consumers are increasingly using personal AI assistants to interact with businesses on their behalf. These external agents introduce a new layer of complexity, as organisations must accommodate systems they do not own or control.

Each of these agent types has distinct requirements, risk profiles, and trust boundaries. As a result, a one-size-fits-all approach to security is no longer viable.

Securing the agentic enterprise

As AI agents scale in both speed and volume, the need for robust identity frameworks becomes urgent. Organisations can no longer rely on legacy approaches alone and must extend them to address how agents operate.

They must adopt a model that recognises agents as dynamic participants in the enterprise ecosystem, one that builds on established identity and zero trust principles and extends them to meet the demands of continuously operating AI systems. In practice, this means establishing clear identities, defining delegation relationships and enforcing controls at runtime.
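Put together, a runtime gate might check all three at the moment of execution. The sketch below uses a simple in-memory registry, grant table and approval set purely for illustration; a production system would back these with an identity provider and an approval workflow.

```python
HIGH_RISK = {"delete_account"}


def execute(agent_id: str, action: str, registry: set, grants: dict, approvals: set) -> str:
    """Evaluate identity, delegation and risk per action, not per login."""
    if agent_id not in registry:                   # 1. a registered, first-class identity
        raise PermissionError(f"{agent_id} is not a known identity")
    if action not in grants.get(agent_id, set()):  # 2. an explicit delegation for this action
        raise PermissionError(f"{action} was never delegated to {agent_id}")
    if action in HIGH_RISK and (agent_id, action) not in approvals:
        raise PermissionError(f"{action} is awaiting human approval")  # 3. runtime control
    return f"{agent_id} performed {action}"


registry = {"support-agent-7"}
grants = {"support-agent-7": {"update_details", "delete_account"}}

print(execute("support-agent-7", "update_details", registry, grants, set()))
# The high-risk action proceeds only once a human has approved this specific call:
print(execute("support-agent-7", "delete_account", registry, grants,
              {("support-agent-7", "delete_account")}))
```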

Failure to do so risks more than isolated security incidents. It threatens the trust that underpins digital interactions, both internally and with customers.

At the same time, the opportunity is significant. Businesses that successfully navigate these challenges will be better positioned to unlock the full potential of AI agents, delivering faster services, more personalised experiences and new forms of value creation.

The agentic era is not a distant prospect but is already taking shape. The organisations that thrive will be those that understand a simple but profound shift.

In a world of autonomous systems, the login is no longer the security boundary; the decision itself becomes the control point. Identity must live where action happens, continuously evaluated and enforced at the exact moment of execution.