IT Brief Canada - Technology news for CIOs & IT decision-makers

TrendAI flags agentic AI risks in enterprise deployment

Thu, 26th Mar 2026

TrendAI has published research on the security and governance risks linked to agentic AI in organisations, based on a survey of 3,700 business and IT decision-makers across 23 countries.

The study points to a widening gap between the pace of AI deployment and the controls organisations have in place to manage it. It found that 67% of respondents had felt pressure to approve AI despite security concerns, while one in seven said those concerns were "extreme" but were overridden to keep up with competitors and internal demand.

Governance structures are also struggling to keep pace. Some 57% said AI is advancing faster than they can secure it, and 55% reported only moderate confidence in their understanding of the legal frameworks governing AI.

Policy development appears uneven as well. Only 38% of organisations said they had comprehensive AI policies in place, while 41% cited unclear regulation or compliance standards as a barrier.

Agent Risks

The research also focused on agentic AI, or systems that can act with a degree of autonomy inside business environments. Fewer than half of respondents, 44%, said they believe agentic AI will significantly improve cyber defence in the short term.

Access to sensitive data emerged as the most commonly cited concern, with 42% of organisations naming AI agents' access to sensitive data as their biggest risk.

Other risks were also prominent. More than a third of respondents, 36%, said malicious prompts could compromise security, while 33% pointed to a growing attack surface for cyber criminals. Another 33% cited concerns about abuse of trusted AI status and risks linked to autonomous code deployment.

The report found that 31% of organisations lack observability or auditability over AI agents, suggesting many businesses may struggle to track how these systems operate or intervene once they are in use.

Control Questions

The survey highlighted uncertainty over how organisations should retain control of autonomous systems. Around 40% supported introducing AI "kill switch" mechanisms to shut down systems in the event of failure or misuse, while nearly half said they were unsure.

The split reflects a broader lack of consensus over how to govern increasingly autonomous systems inside enterprise networks. Deployment decisions are often being made before operating rules and lines of responsibility are fully established.

TrendAI also pointed to separate threat research showing attackers are using AI to automate reconnaissance, intensify phishing campaigns and lower the barrier to entry for cyber crime. This, it said, is increasing the speed and scale of attacks facing organisations.

Rachel Jin, Chief Platform & Business Officer and Head of TrendAI, said the issue is not a lack of awareness among companies.

"Organisations are not lacking awareness of risk, they're lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely. This research reinforces our focus on helping organizations drive solid business outcomes with AI while still managing business risk," Jin said.

She added that the survey shows concern around more autonomous AI systems is already established.

"Agentic AI is moving organizations into a new risk category. Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organizations are deploying systems they don't fully understand or govern, and that risk is only going to increase unless action is taken," she said.