AI forces Australian firms to make privacy a priority
Technology and security leaders are urging organisations in Australia to treat data privacy as a continuous business priority as artificial intelligence becomes more deeply embedded in enterprise workflows.
Commentary from executives at collaboration software company Smartsheet and cybersecurity vendor Proofpoint highlights growing concern over how customer data is used to train AI systems, and how quickly autonomous tools may change the risk profile for sensitive information.
The warnings come as local and global regulators sharpen their focus on AI governance, data handling and vendor accountability under emerging privacy and security frameworks.
Year-round priority
Smartsheet Chief Information Officer and Chief Information Security Officer Ravi Soin said organisations should not treat privacy as a once-a-year compliance exercise or awareness event. He linked data protection directly with product design, vendor oversight and leadership behaviour.
"As we mark Data Privacy Week 2026, we must recognise that privacy isn't something we can check off our list once a year. It's a fundamental right that requires our constant attention and action. We need to go beyond awareness campaigns and make privacy a core part of everything we do: how we design products and systems, how we manage security and risk, how we choose and oversee vendors, and how we lead our teams and shape our culture," said Ravi Soin, Chief Information Officer and Chief Information Security Officer, Smartsheet.
Australia's National AI Plan sets out expectations for responsible and trustworthy AI deployment. It places privacy at the centre of that approach, including requirements for stronger transparency over data flows and model training.
In that context, Soin said organisations must scrutinise how third-party providers store and process customer data, particularly as AI use spreads across business functions.
"Australia's National AI Plan makes it clear that responsible and trustworthy AI depends on strong data privacy foundations. We therefore must hold vendors accountable for how they secure our data, especially as AI adoption accelerates. Vendors should be transparent about how customer data is accessed, protected, and retained, because customer data belongs to customers. Period. Organisations should have clear control over if and how their data is used to train or improve AI, and vendors should clearly disclose those practices. Australia's AI approach outlines that privacy is a prerequisite for innovation, not a barrier to it. And prioritising data privacy pays dividends: it helps reduce exposure to security threats and data leakage as AI scales, and it reinforces confidence in the organisation, strengthening customer trust," said Soin.
AI and new risks
Security specialists are flagging a parallel shift in threat models as organisations test and deploy generative AI assistants and autonomous agents. They warn that existing weaknesses in data classification, access control and monitoring are now feeding into AI systems that can act at scale.
Adrian Covich, Proofpoint's Vice President of Systems Engineering for Asia Pacific and Japan, said the spread of AI into everyday workflows has already contributed to privacy incidents when users do not understand how tools store and process submitted information.
"As we mark Data Privacy Day, it serves as a crucial reminder of our collective responsibility to safeguard personal information and champion best practices in data protection.
"We've already seen how the rapid integration of AI into workflows, often without a full understanding of how data needs to be handled, can contribute to data breaches. In 2026, agentic AI creates a whole new challenge.
"According to Proofpoint's own research, 60% of CISOs across the globe are deeply concerned about the potential for customer data loss via public GenAI platforms. This highlights the formidable challenge facing organisations today: how to harness the strategic advantages of AI whilst meticulously protecting privacy and adhering to regulatory frameworks.
"As we move towards a future where AI agents become indispensable digital colleagues, this challenge will only intensify. Autonomous AI agents may surpass humans as the primary source of data leaks. Enterprises are rushing to roll out AI assistants without realising they inherit the same data hygiene issues already present in their environments.
"Protecting data in this new paradigm therefore requires a proactive, human-centric approach. This means establishing clear guardrails: a policy defining what types of data may (and may not) be entered into GenAI tools; Data Loss Prevention (DLP) or similar controls to monitor, block or alert on attempts to submit sensitive data to unmanaged 'shadow' AI tools; and training for employees on safe AI usage.
"On Data Privacy Day, let's commit to embracing a fully integrated approach to data protection, one that understands data classification, user intent, and threat context across all channels, from email to cloud, endpoint, web, and critically, GenAI tools," said Adrian Covich, Vice President, Systems Engineering for Asia Pacific and Japan at Proofpoint.
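The kind of DLP-style screening Covich describes can be sketched in a few lines. The following is a minimal illustration only, assuming hand-rolled regex detectors and made-up pattern names; production DLP tools use vendor-maintained classifiers with far richer context.

```python
import re

# Hypothetical detectors for common sensitive data types. A real DLP
# deployment would use maintained, validated classifiers, not these regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "au_tfn": re.compile(r"\b\d{3}[ ]?\d{3}[ ]?\d{3}\b"),  # Australian Tax File Number shape
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_types) for text about to be sent to a GenAI tool."""
    matched = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    return (len(matched) == 0, matched)

# A submission containing an email address and a card number is flagged;
# the matched categories can then be blocked or alerted on.
allowed, hits = screen_prompt(
    "Summarise this: customer jane@example.com, card 4111 1111 1111 1111"
)
```

In practice a check like this would sit in a proxy or browser extension between users and unmanaged AI tools, pairing the block action with an audit event rather than silently discarding the request.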
Vendor scrutiny
The comments reflect a broader shift in enterprise procurement, where boards and security teams are asking for more granular detail about how vendors collect, store and use data, especially for AI training and product improvement.
Soin's emphasis on transparency aligns with rising expectations that contracts will spell out where customer data sits, who can access it, how long it is retained and whether it feeds machine learning models by default.
Covich's focus on "shadow AI" and unmanaged tools underscores a related concern inside organisations that adopt consumer-grade generative platforms without central controls. These tools often sit outside existing security stacks, which can make monitoring and enforcement harder.
Policy and practice
Security leaders say that written policies alone are not sufficient. They point to the need for technical controls such as data loss prevention, identity management and logging of AI interactions. They also highlight the importance of staff training on what types of information are suitable for AI tools.
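The "logging of AI interactions" control mentioned above can be sketched as a structured audit record. The field names below are illustrative assumptions, not any product's or standard's schema; note the prompt is hashed rather than stored, so sensitive text is not copied into the log itself.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass

# Hypothetical audit record for one GenAI interaction (illustrative fields).
@dataclass
class AIInteractionLog:
    user_id: str
    tool: str
    timestamp: float
    prompt_sha256: str  # hash of the prompt, so raw text never enters the log
    dlp_verdict: str    # e.g. "allowed" or "blocked"

def log_interaction(user_id: str, tool: str, prompt: str, verdict: str) -> str:
    """Serialise an interaction as a JSON line, ready for a SIEM or log pipeline."""
    record = AIInteractionLog(
        user_id=user_id,
        tool=tool,
        timestamp=time.time(),
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        dlp_verdict=verdict,
    )
    return json.dumps(asdict(record))

entry = log_interaction("u123", "public-genai-chat", "draft a press release", "allowed")
```

Records like this give auditors the evidence of control that regulated sectors require, without the log itself becoming a secondary store of sensitive data.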
Organisations in regulated sectors face additional demands around auditability and evidence of control. They must show how AI usage aligns with privacy law, industry rules and internal risk appetites.
As AI deployments move from pilots to production, CIOs and CISOs are expected to play a central role in aligning privacy, security and business objectives while holding external providers to stronger standards of disclosure and oversight.