IT Brief Canada - Technology news for CIOs & IT decision-makers

Nudge Security adds AI agent discovery for workplace risk

Wed, 25th Mar 2026

Nudge Security has introduced AI agent discovery features for its security platform, aiming to give security teams visibility into AI agents created by employees.

The new features identify AI agents built across platforms including Microsoft Copilot Studio, Salesforce Agentforce and n8n. They show what those agents can access, who created them and which systems they connect to, while also highlighting issues such as publicly accessible agents, hardcoded credentials, unauthenticated MCP connections, high-risk integrations and orphaned agents.
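The checks described above amount to a set of boolean and set-membership tests over an inventory of discovered agents. The sketch below is purely illustrative: the record fields, risk labels and "high-risk integration" set are assumptions for the sake of the example, not Nudge Security's actual data model or API.

```python
from dataclasses import dataclass, field

# Hypothetical agent record; field names are illustrative assumptions,
# not Nudge Security's actual schema.
@dataclass
class AgentRecord:
    name: str
    creator: str
    platform: str                       # e.g. "Copilot Studio", "Agentforce", "n8n"
    publicly_accessible: bool = False
    hardcoded_credentials: bool = False
    mcp_connections_authenticated: bool = True
    integrations: list = field(default_factory=list)
    owner_active: bool = True           # False -> orphaned agent

# Which integrations count as high-risk is an assumption for this sketch.
HIGH_RISK_INTEGRATIONS = {"payroll", "crm", "source-control"}

def risk_findings(agent: AgentRecord) -> list:
    """Return the risk flags the article lists for a discovered agent."""
    findings = []
    if agent.publicly_accessible:
        findings.append("publicly accessible")
    if agent.hardcoded_credentials:
        findings.append("hardcoded credentials")
    if not agent.mcp_connections_authenticated:
        findings.append("unauthenticated MCP connection")
    if HIGH_RISK_INTEGRATIONS & set(agent.integrations):
        findings.append("high-risk integration")
    if not agent.owner_active:
        findings.append("orphaned agent")
    return findings
```

An orphaned, publicly reachable agent wired into a CRM would surface three of the five flags at once, which is the kind of compound exposure the feature is designed to catch early.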

The launch comes as companies grapple with rising employee-led use of AI tools and custom agents inside corporate systems. Security teams have been trying to track how these systems are deployed, what permissions they receive and whether they expose internal data or business applications to misuse.

Nudge Security's approach centres on discovering agents when they are created, then linking that information to existing signals on access, integrations, identity and user behaviour. This is intended to help security and IT teams assess risk and contact the employees responsible for each agent to gather more context about its purpose and use.

Growing concern

Agentic AI has become a major focus for corporate technology buyers, but it has also raised new concerns for security teams. Nudge Security cited figures showing that nearly half of security professionals view agentic AI as their top security concern, while 80% of organisations say they have already encountered risks tied to improper data exposure and unauthorised system access.

That concern reflects how AI agents are being adopted inside businesses. Staff can now build task-specific agents with relatively little friction through low-code and no-code tools, but those agents may be granted broad access to files, collaboration platforms, customer records or internal software without central review.

The software inventories an agent's permissions, resources and connections, then applies policy controls designed to prompt employees to explain the agent's purpose, justify its use and address identified risks. For existing customers already connected to software environments such as Salesforce and ServiceNow, the feature does not require additional deployments.
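The review loop described here can be thought of as two steps: gather the agent's inventory, then send its creator a structured request for justification. The following is a minimal sketch of that workflow under assumed names; every function, field and message shape is hypothetical and is not Nudge Security's implementation.

```python
# Illustrative sketch of the inventory-then-prompt loop: collect an agent's
# permissions and connections, then ask its creator to explain the agent's
# purpose and address each flagged risk. All names here are assumptions.

def review_agent(agent: dict, policy_flags: list) -> dict:
    """Build the review request a platform might send to an agent's creator."""
    questions = ["What business purpose does this agent serve?"]
    for flag in policy_flags:
        questions.append(f"Please address the flagged risk: {flag}.")
    return {
        "to": agent["creator"],
        "subject": f"Review requested for AI agent '{agent['name']}'",
        "inventory": {
            "permissions": agent.get("permissions", []),
            "connections": agent.get("connections", []),
        },
        "questions": questions,
    }
```

Routing the prompt to the creator rather than a central queue mirrors the article's point: the employee who built the agent is usually the only person who can explain what it is for.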

Broader platform

The launch adds to the company's wider set of AI governance tools, which already includes discovery of shadow AI apps, users and integrations across AI providers, as well as MCP server connection discovery, AI data flow visualisation and sensitive data sharing detection.

Nudge Security has positioned the new feature as an extension of its focus on what it calls the "workforce edge", meaning the point at which employees choose and use software and AI services in day-to-day work. It argues that visibility into employee behaviour gives security teams a clearer picture of where AI-related risks begin, especially when systems are adopted outside formal procurement or IT processes.

That stance reflects a broader shift in cyber security markets. Vendors are increasingly trying to address not only technical weaknesses but also the governance gaps that emerge when staff adopt new cloud and AI services faster than policies and controls can be updated. In that context, tools that identify ownership, access and usage patterns are becoming more central to risk management.

Founded in 2021 by Russell Spitler and Jaime Blasco, Nudge Security is backed by Cerberus Ventures, Ballistic Ventures, Forgepoint Capital and Squadra Ventures. The company focuses on software-as-a-service and AI security governance for businesses dealing with decentralised technology adoption.

Russ Spitler, Chief Executive Officer and Co-Founder of Nudge Security, said early visibility is a strategic advantage for security teams dealing with AI agents.

"The security teams that build a real inventory of their AI agents now, with actual risk visibility and clear accountability, will put their organizations in a fundamentally advantaged position," said Spitler.

"Our AI agent discovery lets teams embrace AI innovation while also addressing the new risks these agents introduce," he added.