#2025: A Critical Examination of the AI Arms Race between Cyber Defenders and Attackers

Malicious actors are leveraging artificial intelligence (AI) tools to enhance cyber-attacks, mirroring the aggressive push by governments to promote AI investment. National initiatives to strengthen AI skills and research and development are all the more important as hackers increasingly adopt AI-driven tools to craft malware and identify system vulnerabilities.

During a presentation at a recent security conference, Brett Taylor, UK sales engineering director at SentinelOne, highlighted an alarming trend: criminal organizations are using AI to refine their methods just as the public and private sectors seek to harness AI’s potential for productivity and economic growth.

Taylor noted that the UK ranks prominently in global AI investment, committing £14 billion to the technology as part of an industrial strategy projected to create over 13,000 jobs. That figure is dwarfed by the ambitions of the United States, which plans to invest over $500 billion in the Stargate AI project over the next four years. China’s investment figures are less transparent, but it leads with its DeepSeek model, which became the most downloaded free app in the US.

Threat Actors Investing in AI

Taylor cautioned that malicious actors are making significant investments of their own. “Threat actors are investing and innovating too,” he said, noting that, like legitimate businesses, they pursue new market opportunities. Platforms such as WormGPT, EvilGPT, and FraudGPT facilitate malware creation and cybercrime, enabling criminals to harness generative AI for more devastating attacks.

He explained that AI is already transforming security dynamics. Threat actors pre-test attacks using generative adversarial networks (GANs) so that only the most promising, high-probability attacks are ever launched. AI also amplifies traditional attack vectors, including phishing and credential theft, making breaches faster and more efficient.
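
Taylor’s description of GAN-based pre-testing boils down to a generate-and-filter loop. The minimal Python sketch below does not implement a GAN; it illustrates only the simpler underlying idea of scoring candidates against a stand-in detector before anything is launched. The keyword heuristic, terms, and threshold are all invented for illustration.

```python
# Minimal sketch of the "pre-test before launch" idea: candidate lures are
# scored by a stand-in discriminator, and only low-scoring (likely-to-evade)
# candidates survive. The keyword heuristic is a toy stand-in for a trained
# model; all terms and thresholds here are invented for illustration.

def detector_score(message: str) -> float:
    """Pretend probability that a defensive filter flags this message."""
    suspicious = ["urgent", "verify your account", "password", "wire transfer"]
    hits = sum(term in message.lower() for term in suspicious)
    return hits / len(suspicious)

def pre_test(candidates: list[str], threshold: float = 0.25) -> list[str]:
    """Keep only candidates the stand-in detector is unlikely to flag."""
    return [c for c in candidates if detector_score(c) < threshold]

samples = [
    "URGENT: verify your account password now",   # scores high, discarded
    "Quarterly report attached for your review",  # scores low, kept
]
print(pre_test(samples))
```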

“Previously, a brute-force attack would be the norm for password breaches,” Taylor noted. “Now, attackers analyze personal interests and hunt for leaked passwords to tailor their models to a user’s password structure. This drastically reduces the attempts needed to successfully compromise an account to just a few hundred.”
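
As a back-of-the-envelope illustration of why tailoring collapses the search space, the hedged sketch below derives a candidate list from a handful of personal tokens and common mutations. The tokens, suffixes, and mutation rules are all invented, and real tooling is far more elaborate; the point is only the candidate count.

```python
from itertools import product

# Toy illustration: password candidates derived from personal tokens plus
# common mutations. All tokens are invented; note the tiny search space.
tokens = ["rover", "arsenal", "1984"]          # pet, team, birth year
suffixes = ["", "!", "123", "2024"]
mutations = [str.lower, str.capitalize, lambda s: s.replace("a", "@")]

candidates = {
    mutate(token) + suffix
    for token, suffix, mutate in product(tokens, suffixes, mutations)
}

# A few dozen tailored guesses, versus the roughly 2 x 10^11 combinations
# a blind brute-force search over 8 lowercase letters would have to cover.
print(f"{len(candidates)} tailored candidates, e.g. {sorted(candidates)[:4]}")
```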

As mainstream AI vendors bolster their defenses against misuse, adversaries turn to specialized malicious AI tools, unconstrained by ethical considerations and driven solely by profit, intelligence gathering, or theft of intellectual property.

Step Change for Defense

In light of these evolving threats, organizations and governments must make a step change in their defensive strategies. Fortunately, defenders are also harnessing AI. Chief Information Security Officers (CISOs) are increasingly adopting automation to manage the ever-growing threat landscape and to compensate for skills shortages within their teams.

“Generative AI democratizes access to security analysis,” Taylor remarked, emphasizing how natural-language interfaces make advanced security tools easier to use. Such tools can autonomously conduct investigations and threat-hunting tasks: an analyst could simply ask whether a known threat actor is present in the network, prompting the system to search for the relevant indicators of compromise (IoCs).
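
Under the hood, such a natural-language request has to be mapped onto a structured hunt. The sketch below is a hypothetical reduction of that flow: the intent parsing is deliberately crude, and the actor name, indicators, and telemetry records are invented, not any vendor’s actual API.

```python
# Hypothetical sketch: map a natural-language question to an IoC hunt.
# Actor names, indicators, and telemetry records are all invented.

KNOWN_ACTOR_IOCS = {
    "exampleactor": {
        "domains": {"bad.example.com"},
        "hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
    }
}

def hunt(question: str, telemetry: list[dict]) -> list[dict]:
    """Crude intent parsing: find which known actor the question names,
    then scan telemetry events for that actor's indicators."""
    q = question.lower()
    for actor, iocs in KNOWN_ACTOR_IOCS.items():
        if actor in q:
            return [
                event for event in telemetry
                if event.get("domain") in iocs["domains"]
                or event.get("hash") in iocs["hashes"]
            ]
    return []

telemetry = [
    {"host": "ws-01", "domain": "bad.example.com"},
    {"host": "ws-02", "domain": "intranet.local"},
]
print(hunt("Is ExampleActor present in our network?", telemetry))
```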

As security operations centers (SOCs) become more automated, integrating AI is seen as a necessary evolution to cope with the increasing speed and volume of cyber-attacks. Over time, experts such as those at SentinelOne predict the emergence of fully autonomous SOCs, despite past skepticism about whether such capabilities were achievable.

The SOC processes most amenable to AI and automation include monitoring, evidence collection, threat investigation, incident triage, response, remediation, and reporting. “An autonomous SOC offers scalability and precision that human-led SOCs find challenging to achieve,” Taylor explained.
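
To make that division of labor concrete, here is a minimal sketch of how automated triage, response, and escalation might be wired together. The severity levels, confidence scores, and escalation threshold are assumptions made up for illustration, not a description of any product.

```python
from dataclasses import dataclass

# Toy model of automated triage within the SOC stages listed above.
# Severities, confidences, and the threshold are invented for illustration.

@dataclass
class Alert:
    source: str
    severity: int       # 1 (low) .. 5 (critical)
    confidence: float   # automated classifier's confidence, 0..1

def triage(alert: Alert, escalate_below: float = 0.8) -> str:
    """Act automatically on high-confidence alerts; escalate the rest."""
    if alert.confidence < escalate_below:
        return "escalate-to-analyst"
    return "auto-remediate" if alert.severity >= 3 else "auto-close"

alerts = [
    Alert("edr", severity=4, confidence=0.95),     # -> auto-remediate
    Alert("email", severity=1, confidence=0.90),   # -> auto-close
    Alert("network", severity=5, confidence=0.40), # -> escalate-to-analyst
]
print([triage(a) for a in alerts])
```

Routing low-confidence cases to a person while automation handles the routine bulk is what keeps analysts in the supervisory role described below.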

Collaboration between human analysts and AI could significantly enhance decision-making efficiency while preventing burnout among SOC personnel, who risk becoming overwhelmed by the volume of threats requiring attention. Analysts will likely shift toward supervisory roles as AI and automation take the front line in incident response and, where possible, remediation.

With well-funded, innovative adversaries targeting organizations at opportune moments with sophisticated tools, Taylor argued, defenses must improve in step to counter the escalating tide of cyber threats.