ChannelLife Australia - Industry insider news for technology resellers

Experts warn AI era demands tougher data protection

Fri, 23rd Jan 2026

Security and privacy specialists have urged organisations and consumers to verify AI outputs, tighten identity controls and reduce data collection as new attack techniques and rising vulnerability volumes put customer data at greater risk.

The commentary comes from executives and practitioners at Testlio, Secureframe and FIRST, a global incident response organisation. Their remarks focus on how AI affects decision-making, social engineering, compliance workloads and incident response.

AI verification

Testlio Founder Kristel Kruustük said people should treat AI systems as unreliable unless they confirm outputs against primary sources.

"We've entered an era where verifying AI-generated outputs is table stakes now. I've reached a point where I fact-check nearly everything AI tells me: the sources, the quotes, the statistics. When I ask any AI chatbot like ChatGPT or Perplexity to give me sources, I'm checking if those sources are actually real. When I ask for quotes, I'm Googling to confirm they exist. Sometimes they don't. This matters for personal safety because AI models are also known to be a people-pleaser. That means if you feed them incorrect assumptions or leading questions, they'll reinforce misinformation rather than correct it. Double verification protects you from acting on fabricated information, whether that's a fake statistic you're about to share at work or a "source" that doesn't exist. Rule: if it affects money, reputation, health, or security, verify with a second, primary source.", said Kristel Kruustük, Founder, Testlio.

Kruustük also pointed to recent examples where AI-generated material has appeared in legal proceedings. She said users should treat AI output as an early input rather than a final result.

"The industry is moving so fast that it's exciting and scary at the same time. We've already seen AI-generated material show up in legal cases and filings, with real consequences when no one verifies it." Courts, employers, and everyday users are all grappling with a question that didn't exist a few years ago: Is what I'm seeing actually true?

"If you're using AI for anything involving your finances, health, career, or personal data, assume the output needs a human gut-check before you act on it. AI is a powerful tool, but it works best when curious, skeptical, and engaged humans stay in the driver's seat. The moment you stop questioning what AI tells you is the moment you put yourself at risk.", said Kruustük.

Identity attacks

Secureframe VP of Information Security Marc Rubbinaccio said attackers often target identities and access tokens instead of exploiting software flaws directly, particularly in cloud environments that rely on third-party services.

"Attackers have figured out that compromising identity is easier than directly hacking the software itself. Stolen credentials, hijacked sessions, and abused API tokens are becoming a reliable way to gain access to systems and exfiltrate data. For companies built on cloud infrastructure and third-party integrations, a single compromised service account or API key can give attackers direct access to sensitive data as if they were to compromise a user account.

The mindset organizations need to have in 2026 is treating every login, token, and OAuth grant as a potential attack vector. Short-lived credentials, least-privilege access, and continuous monitoring are required controls when protecting customer data when managing a modern application.", said Rubbinaccio.
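The short-lived, least-privilege pattern Rubbinaccio describes can be sketched in a few lines: every token carries an expiry and an explicit scope set, and authorisation fails closed on either. This is an illustrative model, not Secureframe's implementation; the 15-minute TTL and the scope names are assumptions:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessToken:
    subject: str
    scopes: frozenset   # least privilege: only the scopes this identity needs
    expires_at: float   # short-lived: absolute expiry timestamp

def issue_token(subject: str, scopes: set, ttl_seconds: int = 900) -> AccessToken:
    """Issue a token valid for a short window (15 minutes by default)."""
    return AccessToken(subject, frozenset(scopes), time.time() + ttl_seconds)

def authorize(token: AccessToken, required_scope: str) -> bool:
    """Reject expired tokens and tokens lacking the required scope."""
    if time.time() >= token.expires_at:
        return False
    return required_scope in token.scopes

token = issue_token("svc-billing", {"invoices:read"})
print(authorize(token, "invoices:read"))     # True
print(authorize(token, "customers:delete"))  # False: not in the grant
```

The short TTL is what limits the blast radius of a stolen token: an exfiltrated credential expires before it can be reused at leisure, which is the property long-lived API keys lack.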

Rubbinaccio also described a shift in phishing and impersonation tactics as generative AI improves the quality of fraudulent communications.

"Phishing is already becoming superpowered through the use of AI. In 2026, we'll see AI-powered social engineering attacks that are nearly indistinguishable from legitimate communications. With social engineering linked to almost every successful cyberattack, threat actors are already using AI to clone voices, copy writing styles, and generate deep fake videos representing people they are not. The next wave of defense will require specific training related to the new techniques attackers are using as well as technology improvements such as behavior-based detection and real-time identity verification.", said Rubbinaccio.

Compliance load

Secureframe CEO Shrav Mehta said organisations often assign limited staffing to compliance work, even as threats increase and reporting burdens rise.

"93% of companies say security is a top priority, yet 68% leave one or fewer full-time employees to handle compliance while AI-powered attacks surge. Teams are spending eight-plus hours a week on paperwork instead of protecting customer data, and manual compliance models are breaking down when the stakes are highest.

For lean teams facing AI-driven threats, the only sustainable path forward is continuous compliance and automation that generates evidence in the background, so your people can focus on actual privacy and security protocols,", said Mehta.

Mehta also argued that many large incidents still stem from basic security failures and from holding customer data that organisations do not need.

"The biggest breaches of 2025 came from preventable failures: reused passwords, unmonitored vendor access, and data that should never have been collected in the first place. When 16 billion credentials leak in a single event, it's a wake-up call that the fundamentals still matter most. Organisations need to ask themselves a hard question: if you don't need to store certain customer data, why are you collecting it? Data minimization isn't just good privacy hygiene, it's risk reduction,", said Mehta.

Vulnerability volume

FIRST Liaison and Lead Member of FIRST's Vulnerability Forecasting Team Éireann Leverett said defenders should plan for a higher rate of reported vulnerabilities and rethink prioritisation.

"We're forecasting nearly 60,000 new vulnerabilities in 2026, and it's entirely possible we will hit 70,000 to 100,000. Every one of those is a potential doorway to your organization's sensitive data, and no single security team can patch them all. The question organizations need to ask right now is: are my people and processes ready to handle this volume, and am I prioritizing the vulnerabilities that actually put my data at risk? Forecasting lets defenders stop reacting to every new CVE and start making strategic decisions about where to focus limited resources before attackers exploit the gaps.", said Éireann Leverett.

Resilience networks

FIRST CEO Chris Gibson said incident response should extend beyond restoring services. He pointed to persistent access risks and the need for coordinated response across organisations.

"Too many organizations treat a breach as 'resolved' the moment systems come back online, but failing to fully cleanse systems and validate what data was stolen leaves attackers with persistent access for months or years. The fundamentals of protecting sensitive data still matter most: segmenting networks, enforcing multi-factor authentication, and ruthlessly retiring old credentials before they become backdoors. But here's what most organizations miss: no company can solve data breaches and cybersecurity in isolation. The organizations that recover fastest are the ones with trusted networks already in place, sharing threat intelligence and coordinating response before a crisis hits.", said Gibson.

Privacy trade-offs

FIRST Transportation & Mobility SIG Chair and Cybersecurity Consultant at Diconium Ionut Mihai Chelalau said consumers and businesses face difficult choices as connected services expand.

"Privacy, as most people understand it, cannot truly exist in today's connected ecosystem. Every time you use an AI assistant, some of your data will 'leak' into training datasets, and despite claims of anonymization, device fingerprints and usage patterns leave identifiable traces. The uncomfortable truth is that customers worldwide are willingly trading privacy for convenience, and unless strong regulations force the issue, manufacturers won't voluntarily cut into profit margins to protect data they can monetize.", said Ionut Mihai Chelalau.

System complexity

FIRST Standards SIG and Time Security SIG Lead and Founder at Proper Tools Trey Darley said security teams should reduce complexity and design systems that degrade safely under failure conditions.

"AI in security has a fundamental thermodynamic problem: every tool we add increases system complexity faster than it increases our ability to coordinate that complexity. As foundation models scale past trillions of parameters, we're hitting Gödelian limits - verifying alignment across all possible states becomes formally undecidable, not merely NP-hard.

"In 2026, organizations will realize they've crossed a Rubicon of complexity. The answer isn't more training or more tools, it's simpler systems that fail safely. Reduce complexity, reduce attack surface, and reduce cognitive load on the human. Security that depends on human perfection is security destined to fail," said Darley.

Communicating breaches

FIRST Principal Communications Advisor Hadyn Green said organisations should plan communications channels in advance and prioritise clear information about customer data during incidents.

"When a breach hits, silence about what happened to customer data creates a vacuum that speculation and misinformation fill fast. Organisations should establish backup communication channels across multiple networks and consider letting trusted authorities speak on their behalf. Not to dodge accountability, but to ensure accurate information reaches affected users while your team focuses on containment. The hardest problem in cybersecurity isn't the technical response, it's getting people to trust and act on what you're telling them about their data.", said Hadyn Green.