Growing gap revealed between AI innovation & enterprise security

Cobalt has published the State of LLM Security Report 2025, highlighting a growing gap between generative AI adoption and the security measures needed to protect enterprises.

The report finds that 36% of security leaders and practitioners acknowledge that the evolution of generative AI (genAI) is outpacing their teams' capacity to secure it, as organisations increasingly embed AI into fundamental business processes.

Heightened concern amongst security professionals has prompted many to call for a temporary slowdown. The research indicates that 48% of respondents support a "strategic pause" to allow time for defensive measures against genAI-driven threats to be recalibrated. Despite this, there are no indications that such a pause will take place.

"Threat actors aren't waiting around, and neither can security teams," said Gunter Ollmann, Chief Technology Officer at Cobalt. "Our research shows that while genAI is reshaping how we work, it's also rewriting the rules of risk. The foundations of security must evolve in parallel, or we risk building tomorrow's innovation on today's outdated safeguards."

The State of LLM Security Report 2025 presents several statistics that illustrate both the state of readiness and the challenges facing organisations. According to the findings, genAI-related attacks are now the primary IT risk for 72% of professionals surveyed, yet 33% of those respondents are not conducting regular security assessments, including penetration testing, for their large language model (LLM) deployments.

The report also identifies growing unease about the AI supply chain, with 50% of respondents seeking greater transparency from software suppliers about how they identify and manage vulnerabilities. This reflects a broader trend: as AI becomes more deeply integrated into enterprise systems, trust and security assurances from suppliers carry increasing weight.

A distinction also emerges between security leaders and practitioners in their concerns about genAI. The report finds that 76% of security leaders (those at C-suite and Vice-President level) are concerned about long-term genAI threats such as adversarial attacks, compared with 68% of practitioners. Conversely, near-term operational risks such as inaccurate model outputs worry 45% of practitioners but only 36% of security leaders.

The most cited concerns about genAI deployment among all survey participants include the risk of sensitive information disclosure (46%), model poisoning or theft (42%), and training data leakage (37%). These risks highlight a broader need to ensure the integrity and security of AI-driven data pipelines.

The report also examines the outcomes of penetration testing across multiple organisations. While 69% of serious vulnerabilities discovered through testing overall are ultimately resolved, the rate drops to just 21% for serious vulnerabilities found in LLM-specific tests, the lowest resolution rate of any test category reviewed by Cobalt. Some 32% of findings in these LLM tests are classified as serious.

The disparities identified in remediating vulnerabilities, particularly in environments where AI plays a central role, highlight a significant gap in security practices. This is especially notable as organisations continue to accelerate the deployment of generative AI tools in daily operations.

"Much like the rush to cloud adoption, genAI has exposed a fundamental gap between innovation and security readiness," Ollmann added. "Mature controls were not built for a world of LLMs. Security teams must shift from reactive audits to programmatic, proactive AI testing—and fast."

The report is based on an analysis of data from Cobalt penetration tests across more than 2,700 organisations, supplemented by a third-party survey conducted by Emerald Research. The data was anonymised before being provided to the Cyentia Institute for independent analysis.

These findings suggest that, despite widespread awareness of genAI risks, a disconnect persists between the speed of AI adoption and the implementation of comprehensive security measures as organisations weigh the competing imperatives of innovation and protection.
