#2025: Increasing Concerns Surrounding Security Risks of Agentic AI

مقالات

Agentic AI and autonomous AI tools that facilitate communication without human oversight are raising significant security concerns, as highlighted by industry experts at a recent conference on information security.

Agentic AI operates with a high degree of autonomy, capable of selecting models, transferring data or outcomes to other AI systems, and making decisions independently of human approval. This represents a paradigm shift from previous AI generations reliant on human prompts. Agentic systems can continuously learn and adapt, raising the stakes in terms of security vulnerabilities.

The integration of various AI components, such as generative AI applications and chatbots—often without adequate oversight—can pose substantial risks. Many organizations are currently adopting these systems in operational areas like code development and system configuration, which may outpace their existing security measures.

Research indicates that only 31% of organizations report complete maturity in their AI implementations. Moreover, companies often lag in establishing adequate governance frameworks in line with AI innovations.

The risks associated with agentic AI are akin to those seen with earlier large language models, including prompt injection, data poisoning, bias, and inaccuracies. The impact of these risks can be amplified when a flawed agent communicates distorted or manipulated data to another system. Even small error rates can result in compounding mistakes across interconnected systems, exacerbating vulnerabilities.

The issue becomes more pronounced when AI tools interface with external data sources, which may be outside the organization’s control. As noted by industry experts, an essential layer of AI security is required, particularly when information is sourced externally.

Given the swift evolution of agentic AI, security teams must rapidly identify and mitigate potential vulnerabilities. According to recent findings, 76% of organizations are either utilizing or planning to implement agentic AI within the coming year; however, only 56% report being moderately or fully aware of the associated risks.

Expert commentary suggests that implementing AI is not a straightforward endeavor but a continuous process requiring governance and control measures to evolve alongside AI capabilities. Thus, organizational leaders must develop robust security strategies to manage the risks associated with agentic AI.

The emergence of agentic systems necessitates that organizations review their cybersecurity policies, particularly as the technology becomes more pervasive in applications like code generation and customer service automation. This push for adopting agentic AI has heightened the need for stringent security protocols to safeguard the integrity of AI-generated outputs.

To ensure the security of interconnected AI systems, businesses must focus on the integrity of the APIs that facilitate their interactions. These technical interfaces serve as critical conduits for data handling, task execution, and cross-platform integration. Without stringent API security, advanced AI systems risk transforming into potential vulnerabilities instead of assets.

The interplay between various AI components represents a new layer of complexity where security gaps could emerge. Hence, conducting thorough security assessments—including red teaming and implementing AI bills of materials—can help organizations maintain visibility over the AI models and datasets they utilize, enabling a better understanding of dependencies and vulnerabilities in their AI infrastructure.