
From deepfakes to ransomware: The key trends that will shape IT security in 2025

Mon, 9th Dec 2024

As the coming year unfolds, Australia's cyber security landscape will continue to evolve in response to new technological advancements and emerging threats.

Organisations will face increasing challenges in safeguarding critical systems and data against both traditional and new forms of attack. Security professionals will be tasked with navigating a complex array of trends, from AI-powered bots to regulatory demands.

Some of the key issues in IT security to watch during 2025 include:

  • The battle of the bots:

During the coming year, automation will play a transformative role in IT security. Bots will prove essential in helping Australian IT security teams manage expansive workloads, detect anomalies, and respond to threats.

These automated tools will allow organisations to maintain strong security postures without requiring significant increases in staffing. Bots can execute tasks such as log analysis, intrusion detection, and real-time response to potential breaches, operating at a speed and scale that human teams alone cannot match.
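
To make this concrete, the short Python sketch below shows the kind of automated log analysis such a bot might perform: counting failed logins per source address and flagging likely brute-force attempts. The log format, threshold, and function names are illustrative assumptions only; real deployments would typically rely on a dedicated SIEM or SOAR platform rather than hand-rolled scripts.

    import re
    from collections import Counter

    # Hypothetical log pattern and threshold, chosen purely for illustration.
    FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 5  # flag a source address after this many failures

    def detect_brute_force(log_lines):
        """Count failed logins per source IP and return likely brute-force sources."""
        failures = Counter()
        for line in log_lines:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
        return [ip for ip, count in failures.items() if count >= THRESHOLD]

    # Example: scan a batch of auth log lines and act on the result.
    sample = ["Failed password for root from 203.0.113.7 port 22"] * 6
    for ip in detect_brute_force(sample):
        print(f"ALERT: possible brute-force from {ip} - consider blocking")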

However, this technological edge is not exclusive to defenders, as cybercriminals are also leveraging bots and AI to their advantage. Generative Pre-trained Transformers (GPTs) and other advanced AI tools are being sold on the Dark Web, enabling attackers to create malware, phishing campaigns, and social engineering schemes with remarkable efficiency. 

This dual-use nature of AI underscores the critical importance of vigilance, ethical AI practices, and the continuous development of defensive strategies. For security teams, the challenge lies not only in harnessing bots effectively but also in anticipating and countering the innovative ways attackers deploy the same technology.

Rather than AI bots replacing humans, however, it will be people skilled at prompting and supervising those bots who are most in demand. Australian workers will need to upskill to take advantage of this trend.

  • Deepfakes and trust:

The rise of deepfakes is a sobering reminder of the vulnerabilities inherent in today's digital communication channels. An incident involving a Hong Kong-based firm in early 2024 illustrates this threat's severity.

Fraudsters used deepfake technology to impersonate the firm's CFO and other executives during a video call, convincing an employee to transfer US$25 million. The attack highlights the devastating consequences this technology can have when deployed maliciously.

Unfortunately, deepfakes are becoming easier to create due to accessible AI tools and improved techniques for synthesising voice and video. A simple voicemail recording can provide enough material for attackers to craft convincing fake communications. As these attacks grow more sophisticated, industries beyond finance - such as healthcare, legal services, and government - are increasingly at risk.

Concerningly, according to a recent report from Deloitte, the generative AI technology used to create deepfakes could enable fraud losses of up to US$40 billion in the United States alone by 2027. That would represent a compound annual growth rate of 32% since 2023.
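
Those two figures can be sanity-checked with simple compounding arithmetic. The snippet below, using only the numbers quoted above, derives the 2023 baseline they imply:

    # Back-of-envelope check of the Deloitte projection quoted above.
    projected_2027 = 40e9   # US$40 billion in projected fraud losses by 2027
    cagr = 0.32             # 32% compound annual growth rate
    years = 2027 - 2023     # four years of compounding

    implied_2023 = projected_2027 / (1 + cagr) ** years
    print(f"Implied 2023 baseline: US${implied_2023 / 1e9:.1f} billion")
    # Prints roughly US$13.2 billion, i.e. losses tripling in four years.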

Efforts to combat deepfakes remain in their infancy. While AI tools for detection are under development, they lag behind the attackers' capabilities. In the interim, businesses are adopting innovative countermeasures. For example, some are requiring verbal passwords during video calls to ensure participants are genuine.
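
One way such a check could work in practice is for both parties to derive a short spoken code from a secret agreed in advance, so that the code changes every minute and cannot be replayed. The Python sketch below illustrates the idea; the secret, time window, and code format are assumptions for illustration, not a description of any vendor's mechanism.

    import hashlib
    import hmac
    import struct
    import time

    def verification_code(shared_secret: bytes, window: int = 60) -> str:
        """Derive a six-digit code from a pre-shared secret and the current time window."""
        counter = struct.pack(">Q", int(time.time()) // window)
        digest = hmac.new(shared_secret, counter, hashlib.sha256).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation, as in TOTP
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{value % 1_000_000:06d}"

    # Each participant computes the code independently and reads it aloud on the call.
    print(verification_code(b"secret-agreed-in-person"))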

This issue is not confined to corporate environments. Deepfakes have already featured in political disinformation campaigns, such as during the recent US elections, and are likely to resurface in next year's Australian federal election. The potential for these tools to disrupt trust in institutions underscores the urgency of developing robust detection and prevention mechanisms.

  • The continuing rise of ransomware:

Ransomware will continue to dominate the threat landscape, posing significant risks to organisations of all sizes. These attacks, in which cybercriminals encrypt an organisation's data and demand payment for its release, can result in substantial financial losses and operational downtime.

In Australia, the challenge is particularly acute in the healthcare sector, where organisations are grappling with ageing legacy IT infrastructure that is difficult to secure effectively. The problem is compounded by a lack of regulatory guidelines that clearly spell out the steps healthcare providers should be taking.

Despite growing awareness, human error remains the leading cause of ransomware breaches. Employees may inadvertently click on malicious links or fail to recognise phishing emails, opening the door for attackers.

As a result, organisations must prioritise education and training to reduce vulnerabilities. Simultaneously, investments in advanced endpoint protection, network segmentation, and robust backup solutions are essential to minimise the impact of successful attacks.
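
On the backup side, 'robust' means verified as well as stored: a backup that silently fails its integrity check is no defence against ransomware. As a minimal sketch, and assuming a hypothetical layout where each backup directory carries a JSON manifest of SHA-256 hashes recorded at backup time, a verification pass might look like this:

    import hashlib
    import json
    from pathlib import Path

    def verify_backup(backup_dir: str, manifest_path: str) -> list[str]:
        """Return the names of backed-up files that are missing or corrupted."""
        manifest = json.loads(Path(manifest_path).read_text())  # {filename: sha256}
        bad = []
        for name, expected in manifest.items():
            path = Path(backup_dir) / name
            if not path.exists() or hashlib.sha256(path.read_bytes()).hexdigest() != expected:
                bad.append(name)
        return bad

    # Run after every backup cycle; any result here should raise an alert.
    # damaged = verify_backup("/backups/latest", "/backups/latest/manifest.json")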

  • The evolving role of CISOs:

Chief Information Security Officers (CISOs) are undergoing a transformation in their roles and responsibilities. Historically viewed as the guardians of an organisation's digital assets, CISOs must now act as strategic leaders who drive innovation in cyber security. This shift is largely driven by the integration of AI and data analytics into security operations.

To remain effective in 2025, CISOs must embrace upskilling. Mastery of emerging technologies such as AI, machine learning, and predictive analytics is becoming a prerequisite. These tools allow CISOs to identify potential vulnerabilities, respond proactively to threats, and align security strategies with broader business objectives.

In Australia, the CISO title itself is likely to change to incorporate 'AI' and 'Data' to reflect the new responsibilities of this position. 

  • Regulatory challenges around AI and data integrity:

The proliferation of AI technologies and large language models (LLMs) will continue to introduce new regulatory challenges. These tools, while transformative, pose risks related to data integrity, algorithmic bias, and potential misuse.

To address these concerns, regulatory frameworks specific to AI are expected to emerge in 2025. Standards akin to those established by the National Institute of Standards and Technology (NIST) or the International Organization for Standardization (ISO) are likely to be developed to guide AI deployment. Such frameworks would provide organisations with practical tools for ensuring compliance and mitigating risks associated with AI-generated outputs.

Preparing for the future

The trends shaping IT security in 2025 reflect the rapid pace of technological change and the growing sophistication of cyber threats. From deepfakes to ransomware, organisations must remain agile and proactive in their defences.

Investing in education, embracing automation, and fostering a culture of innovation are essential steps for navigating this dynamic environment.
