
AI Agents - The struggle to balance automation, oversight, & security


Visa's recent announcement that it will harness agentic artificial intelligence (AI) to transact payments automatically on behalf of customers has attracted widespread interest and scrutiny within the technology and security communities. The move, reported by the Associated Press, signals a step change in how everyday purchases could be managed in the near future, promising to reduce both friction and manual intervention in digital commerce.

James Sherlow, Systems Engineering Director EMEA at Cequence Security, observes that Visa is "betting on AI agents to remove the friction and mundanity of regular purchases by using the technology to hunt for, select and pay for goods and services automatically." He notes that, amid the current climate of multi-level authentication processes, such innovation may prove both groundbreaking and beneficial in deterring fraudsters. However, Sherlow highlights significant hurdles regarding consumer acceptance: "The question remains whether the user will be comfortable giving AI that level of autonomy."

Sherlow elaborates on the technical aspects, explaining that Visa intends its AI agents initially to recommend purchases based on learned patterns and preferences, before moving towards more autonomous decision-making. Security remains paramount, with verification to be managed by Visa in a manner analogous to Apple Pay, yet now underpinned by AI agents and with Visa handling disputes. He cautions that using AI agents with sensitive personally identifiable information (PII) and payment card industry (PCI) data "could have far reaching ramifications", and stresses that clear visibility, accountability, and robust guardrails must be built in from the outset. The role of API security is evolving accordingly, especially as API endpoints become critical to both ecommerce and AI utilisation.
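To make the guardrail idea concrete, the sketch below shows one way an agent-initiated purchase could be gated before any payment API is ever called. It is purely illustrative: the spending limit, the PurchaseRequest fields and the requires_step_up rule are assumptions for the example, not Visa's actual design.

```python
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    merchant: str
    amount: float          # transaction value in the cardholder's currency
    category: str          # e.g. "groceries", "travel"

# Illustrative policy values; a real issuer would derive these per cardholder.
SPENDING_LIMIT = 150.00
ALLOWED_CATEGORIES = {"groceries", "household", "subscriptions"}

def requires_step_up(req: PurchaseRequest) -> bool:
    """Return True when the agent must hand control back to the human
    (or to the issuer's verification flow) instead of paying automatically."""
    if req.amount > SPENDING_LIMIT:
        return True        # value exceeds the delegated budget
    if req.category not in ALLOWED_CATEGORIES:
        return True        # outside the scope the user granted
    return False

def handle(req: PurchaseRequest) -> str:
    if requires_step_up(req):
        return f"HOLD: {req.merchant} purchase of {req.amount:.2f} needs explicit approval"
    return f"AUTO-PAY: {req.merchant} purchase of {req.amount:.2f} within delegated limits"

print(handle(PurchaseRequest("CornerGrocer", 42.10, "groceries")))
print(handle(PurchaseRequest("FlightCo", 890.00, "travel")))
```

The design choice is simply that the agent never holds unconditional payment authority; every request is checked against limits the user has explicitly delegated, and anything outside them falls back to human verification.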

Echoing these concerns, information security practitioners point to the risks inherent in delegating decision-making to semi-autonomous systems. Joshua Walsh, Information Security Practitioner at rradar, believes agentic AI offers dramatic gains in productivity and efficiency by automating complex tasks. Still, "this same autonomy also brings serious security and governance risks that must be addressed before deployment to the live environment," he states. Because AI agents operate across multiple platforms and often without direct human oversight, vulnerabilities such as prompt injection or misconfiguration carry disproportionately high risks, potentially leading to compromised data or even regulatory breaches.

Walsh underscores accountability as a core issue: "When an agent makes a bad call or acts in a way that could be seen as malicious, who takes responsibility?" He advocates for human-in-the-loop safeguards for high-risk actions, strict role-based access controls, rigorous audit logging, and continuous monitoring—especially where sensitive data is involved. Walsh argues that deploying such capabilities safely requires a foundation of transparency and meticulous, sustained testing before production rollout.
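Walsh's controls (human-in-the-loop approval for high-risk actions, role-based access and audit logging) can be combined in a single chokepoint that every agent action passes through. The following minimal sketch is hypothetical: the role names, risk tiers and audit_log sink are invented to illustrate the pattern rather than any specific product.

```python
import logging
from enum import Enum

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")   # stand-in for a tamper-evident audit sink

class Risk(Enum):
    LOW = 1
    HIGH = 2

# Hypothetical role-based permissions: which action types each agent role may request.
ROLE_PERMISSIONS = {
    "read_only_agent": {"summarise_document"},
    "payments_agent": {"summarise_document", "initiate_payment"},
}

ACTION_RISK = {
    "summarise_document": Risk.LOW,
    "initiate_payment": Risk.HIGH,
}

def execute_action(role: str, action: str, approved_by_human: bool = False) -> str:
    """Gate an agent action: enforce RBAC, require human sign-off for high-risk
    actions, and write an audit record either way."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        audit_log.warning("DENIED role=%s action=%s reason=rbac", role, action)
        return "denied: role lacks permission"
    if ACTION_RISK[action] is Risk.HIGH and not approved_by_human:
        audit_log.info("PENDING role=%s action=%s reason=needs_human", role, action)
        return "pending: waiting for human approval"
    audit_log.info("EXECUTED role=%s action=%s", role, action)
    return "executed"

print(execute_action("read_only_agent", "initiate_payment"))        # denied by RBAC
print(execute_action("payments_agent", "initiate_payment"))         # held for a human
print(execute_action("payments_agent", "initiate_payment", True))   # executed and logged
```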
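The point of the chokepoint is traceability as much as control: because every decision passes through one function, the audit trail answers Walsh's accountability question of who, or what, authorised a given action.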

Within the broader debate on agentic AI, there is also scepticism about overestimating its capabilities. Roberto Hortal, Chief Product and Technology Officer at Wall Street English, warns that "the promise of AI agents is tempting," but urges caution: "Agents aren't a silver bullet. They're only effective when built with clear goals and deployed with human oversight." Hortal points out that unsupervised use often results in "AI slop," an abundance of low-value output that increases rather than decreases human workload. He draws a parallel to onboarding untested staff, stating, "You wouldn't let a brand-new intern rewrite your strategy or email your customers unsupervised. AI agents should be treated the same." Hortal emphasises the value of keeping AI tightly scoped and always supportive, not substitutive, of human decision-making.

Gartner's latest research indicates that so-called "guardian agents" will account for up to 15% of the agentic AI market by 2030, reflecting the heightened importance of trust and security as AI agents proliferate. Guardian agents, according to Gartner, are designed for "trustworthy and secure interactions," acting both as assistants for content review and as autonomous overseers capable of redirecting or blocking AI actions to keep them aligned with predefined objectives. In a recent Gartner webinar, 24% of CIOs and IT leaders reported that they had already deployed multiple AI agents, while the majority were either experimenting with the technology or planning imminent adoption.

As agentic AI gains traction across internal administrative and customer-facing tasks, risks including data poisoning, credential hijacking, and agent deviation have come to the fore. Avivah Litan, VP Distinguished Analyst at Gartner, comments, "Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails." With the rapid evolution toward complex, multi-agent systems, traditional human oversight is becoming impractical, further accelerating the need for automated, intelligent checks and balances.

Gartner recommends organisations categorise guardian agents into three primary types: reviewers (verifying AI-generated content), monitors (tracking agentic actions for follow-up), and protectors (automatically intervening to adjust or block actions as needed). Integration of these roles is expected to become a central pillar of future AI systems, with Gartner predicting that 70% of AI applications will utilise multi-agent approaches by 2028.
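As a rough illustration of how those three roles could sit around a primary agent, the sketch below wires a reviewer, a monitor and a protector into one pipeline. The class names and the simple blocking and review rules are assumptions made for the example; they are not Gartner's specification of guardian agents.

```python
from typing import Callable

class Reviewer:
    """Checks AI-generated output before it is released."""
    def review(self, output: str) -> bool:
        return "unverified claim" not in output.lower()

class Monitor:
    """Records agent actions so humans or other agents can follow up."""
    def __init__(self) -> None:
        self.trail: list[str] = []
    def record(self, action: str) -> None:
        self.trail.append(action)

class Protector:
    """Intervenes automatically, blocking actions that break policy."""
    def allow(self, action: str) -> bool:
        return not action.startswith("delete")

def run_guarded(action: str, produce: Callable[[], str],
                reviewer: Reviewer, monitor: Monitor, protector: Protector) -> str:
    monitor.record(action)                 # monitor: track everything for follow-up
    if not protector.allow(action):
        return "blocked by protector"      # protector: intervene before execution
    output = produce()
    if not reviewer.review(output):
        return "held for human review"     # reviewer: verify generated content
    return output

monitor = Monitor()
print(run_guarded("summarise report", lambda: "Quarterly summary...",
                  Reviewer(), monitor, Protector()))
print(run_guarded("delete customer records", lambda: "",
                  Reviewer(), monitor, Protector()))
print(monitor.trail)
```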

The debate on agentic AI thus hinges on balancing automation, oversight, and security at unprecedented scale. Visa and other firms setting the pace in this new domain will need to combine technological innovation with careful risk management to achieve both user adoption and operational resilience.
