IT Brief Canada - Technology news for CIOs & IT decision-makers

Cloud Security Alliance launches CSAI for agentic AI

Wed, 25th Mar 2026

Cloud Security Alliance has launched CSAI, a non-profit foundation focused on AI security and safety. The new body is intended to govern security and trust in autonomous AI agent systems.

CSAI will focus on what the organisation calls the "agentic control plane", covering identity, authorisation, orchestration, runtime behaviour and trust assurance for AI agents. It creates a separate entity for work that grew out of the group's existing AI Safety Initiative, which has produced research papers, open source projects and certification schemes.

The foundation starts with six programme areas: an AI Risk Observatory, guidance on agentic best practice, education and credentialing, an executive trust programme, assurance work based on existing control frameworks, and research focused on emerging risks in advanced AI systems.

One strand is designed to monitor threats and vulnerabilities in agentic AI environments. It will include observing activity across the OpenClaw and MCP server ecosystems, operating a CVE Numbering Authority focused on agentic AI, and collecting telemetry linked to structured risk identifiers.

Another programme provides lifecycle guidance for organisations deploying AI agents. It covers identity controls for non-human actors, runtime authorisation, privilege governance, standards for classifying agents, and security practices for agentic transactions and payments.
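The identity and runtime-authorisation controls described in that guidance can be pictured as deny-by-default permission checks against a scoped, non-human identity. The sketch below is purely illustrative; the agent identity fields, scope names and policy table are hypothetical examples, not part of CSAI's published guidance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    # A non-human actor with its own credential, distinct from any user account.
    agent_id: str
    owner: str                 # human or team accountable for the agent
    scopes: frozenset          # privileges granted at deployment time

# Hypothetical policy: the scopes each action requires at runtime.
REQUIRED_SCOPES = {
    "read_invoice": {"billing:read"},
    "issue_payment": {"billing:read", "payments:write"},
}

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Runtime check: deny by default, allow only if every required scope is held."""
    required = REQUIRED_SCOPES.get(action)
    if required is None:
        return False  # unknown actions are denied, never allowed through
    return required <= agent.scopes

agent = AgentIdentity("inv-bot-01", owner="finance-team",
                      scopes=frozenset({"billing:read"}))
print(authorize(agent, "read_invoice"))   # True
print(authorize(agent, "issue_payment"))  # False: payments:write was never granted
```

The point of the deny-by-default shape is that an agent operating with only the privileges it was provisioned cannot quietly acquire new ones at runtime, which is the over-privilege problem the guidance targets.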

The foundation is also extending training and certifications. It will add three tracks to the Trusted AI Safety Expert certification programme, aimed respectively at senior executives, security practitioners and school students. It also plans to support workforce development through surveys and chapter activity.

Executive Focus

A separate initiative, called CxOtrust for Agentic AI, is aimed at senior technology and security leaders. The programme will offer monthly briefings, private roundtables for CISOs, CIOs and CAIOs, board-level risk narratives, and guidance for enterprise adoption.

Assurance is another core part of the structure. CSAI plans to expand STAR for AI, a programme built on the AI Controls Matrix and mapped to standards including ISO 42001, ISO 27001 and SOC 2. That work will be supported by Valid-AI-ted, an audit engine that automates governance, risk and compliance work and continuously evaluates agent behaviour.

Its research arm will also look further ahead. Planned work includes CSA Pod, described as a live environment for agent interaction and telemetry, as well as TAISE-Agent Certification, which would apply certification concepts to autonomous agents through behavioural evaluation and trust profiles. Another project, the Catastrophic Risk Annex, will study threats from more advanced future AI systems.

Jim Reavis, chief executive and co-founder of Cloud Security Alliance, outlined the rationale for the launch in a statement accompanying the announcement. "The agentic era demands a new kind of security infrastructure - one that governs not just what AI models can do, but how autonomous agents identify themselves, what they're authorized to do, and how we can trust their behavior at scale. CSAI is purpose-built to deliver that infrastructure through six integrated programs spanning risk intelligence, best practices, education, executive trust, global assurance, and forward-looking research," he said.

The launch reflects a wider shift in cyber security discussions as companies move from using standalone AI models to deploying software agents that can act across internal systems and third-party services. In that model, identity, permissions, runtime controls and auditability become central governance issues rather than secondary concerns.

Cloudflare is among the organisations backing the effort. Stephanie Cohen, chief strategy officer at Cloudflare, said the rise of AI use inside organisations has increased concerns around unmanaged tool use and excessive access rights for agents. "Cloudflare is honored to take part in the launch of the CSAI Foundation to help combat one of the greatest challenges in the AI era: the trade-off between the speed of innovation and security control. As AI adoption accelerates, organizations are struggling with unmanaged employee use of AI tools, and agents operating with over-privileged access. Today, we are reinforcing our commitment to plugging this safety gap, and enabling secure, scalable AI for the modern enterprise," she said.

Standards Link

Alongside the launch, the group also announced a collaboration with the Coalition for Secure AI. The arrangement includes a seat for Cloud Security Alliance on CoSAI's Technical Steering Committee, giving it a role in technical workstreams and standards discussions around agentic AI security.

That link with a broader industry body points to one of the main challenges in this area: avoiding fragmented approaches as different suppliers and users build agent systems. Common taxonomies, trust models and assurance methods are likely to be important if AI agents are to interact across platforms in regulated sectors and large enterprises.

Phil Venables, venture partner at Ballistic Ventures and former CISO of Google Cloud, backed the creation of the new body. "The launch of the CSAI Foundation is an important expansion of the CSA mission to have a dedicated focus on AI safety and security. Their agile delivery of research solutions is a strong complement to the work of standards bodies and trusted technology partners," he said.

Omar Santos of Cisco, project governing board co-chair of CoSAI, framed the partnership as part of a broader effort to align security work around autonomous systems. "This collaboration marks a great step toward a unified approach to securing agentic AI systems. By working with the Cloud Security Alliance, we're aligning efforts to ensure that security and trust remain at the core of the AI ecosystem. Together, CoSAI and CSA are driving the frameworks that will safeguard the next generation of autonomous intelligence," he said.