IT Brief Canada - Technology news for CIOs & IT decision-makers
AI agents expose major API security gap, Salt warns

Fri, 10th Apr 2026
Mark Tarre, News Chief

Salt Security has published research on AI and API security, warning that most organisations lack mature protections as AI agents spread.

The study surveyed 327 security leaders across technology, financial services, healthcare and manufacturing. It found that 92% of organisations have not reached what Salt classifies as advanced security maturity in environments where AI agents rely on application programming interfaces, or APIs, to carry out tasks.

Almost half of respondents, 47%, said they had delayed production releases because of API security concerns. Nearly a third, 32%, reported an API security incident in the past year, while only 8% said they had reached an advanced level of API security maturity.

API use is rising quickly alongside automation and AI deployment. Two-thirds of respondents, 66%, said the number of APIs used by their organisations had grown by more than 50% over the past year.

That growth is putting pressure on security teams to track and defend a larger, more complex set of connections. Fewer than one in four organisations, 24%, said they had a fully automated API inventory, leaving most to rely on partial or manual methods to identify exposed systems.

Security gap

The findings point to a widening gap between the pace of AI adoption and the security controls used to manage it. Salt describes this as an "Agentic Security Gap", arguing that security teams need visibility not only into APIs but also into the broader set of systems AI agents use, including large language models and Model Context Protocol servers.

Board-level attention to the issue appears to be rising. The research found that 79% of boards and executive teams had increased scrutiny of AI security risks, yet only 18% of respondents said they were extremely confident in their ability to detect attacks using generative AI.

Software development is another area of concern. Nearly 90% of organisations said they already use or plan to use generative AI in API development, a shift that introduces new risks into the development lifecycle if code, access controls and testing are not tightly managed.

Threat shift

The report also highlighted a shift in the source of attacks. Analysis from Salt Labs found that 99% of attack attempts now come from authenticated sources, suggesting malicious activity is increasingly taking place through valid accounts or approved access paths rather than through external intrusion alone.

This trend includes rogue agents operating with legitimate credentials but without human oversight or effective controls. Salt also said that 65% of attacks exploit security misconfigurations, linking this to over-permissioned APIs that can be queried in sequence and used to extract data quickly.

"You cannot secure AI agents without securing every layer they touch, including the APIs they call, the MCP servers they route through, and the data they access," said Roey Eliyahu, Co-Founder and Chief Executive Officer of Salt Security.

"Risk in the agentic era doesn't sit in one place. It lives in how all of those pieces interact in real time," Eliyahu said.

Broader role

Salt argues that API security should now be treated as a separate discipline rather than a subset of application or cloud security. That position reflects the growing role of APIs as the operational layer for AI systems, as well as their long-established role in linking mobile apps, websites, internal systems and third-party services.

The company is promoting what it calls an Agentic Security Graph, a model intended to map the relationships between large language models, MCP servers and APIs. In practice, it is an attempt to understand how AI systems reason, execute and take action across enterprise systems, and where security controls may break down between those stages.

For security leaders, the figures suggest the main issue is not simply AI use, but whether governance and monitoring are keeping pace with deployment. Delayed releases, low confidence in attack detection and incomplete inventories all point to a core visibility challenge in environments where machine-driven interactions are increasing faster than oversight.

"Salt Security was founded on the belief that APIs are the most critical and most overlooked attack surface in the enterprise. As AI agents have emerged, it has become clear that APIs are just one pillar in a much larger, deeply connected system," Eliyahu said.

"Today, we secure the entire agentic environment, the LLM, agents, MCP servers, APIs, and the data they access. Our 1H 2026 research confirms that this isn't a future problem, it's happening now, and most organizations are not ready," he said.