Conifers expands AI cyber tools with clear oversight
Conifers has expanded its CognitiveSOC platform to introduce new governance and transparency capabilities for AI-driven security investigations, as organisations seek greater oversight of automated decision-making in cyber defence.
The update focuses on making AI-led investigations more auditable and aligned with internal security practices. It comes as enterprises and managed security service providers increase their use of automation to handle growing volumes of security alerts.
Platform update
The revised platform is designed to move beyond basic alert triage. Instead, it carries out multi-stage investigations informed by each organisation's historical investigations, analyst workflows, and risk tolerance.
Conifers said the system adapts to how individual security teams operate. It analyses past investigations and incorporates analyst behaviour into its processes. This allows it to replicate established investigative approaches rather than relying on predefined automation rules.
"AI in the SOC can't be a black box," said Tom Findling, Chief Executive Officer and Co-Founder, Conifers.ai. "Security teams need investigations that reflect how their organization operates and clearly explain the reasoning behind every conclusion. With this expansion of CognitiveSOC, we're bringing transparent, governed AI investigations to the SOC so teams can confidently scale investigations without sacrificing control or accountability."
Transparency focus
A central feature of the update is the introduction of detailed evidence chains and reasoning traces. Each investigation step is recorded, allowing analysts to review how conclusions were reached.
This approach is intended to address concerns about opaque AI decision-making. Security teams can examine the logic behind automated findings and validate them before taking action.
The platform also provides auditable decision records. These records are designed to support compliance requirements and internal governance standards. Organisations can use them to demonstrate how incidents were investigated and resolved.
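Conifers has not published the format of these records, but conceptually an evidence chain pairs each investigation step with the data it relied on and the reasoning behind it. A minimal illustrative sketch of the idea (all names, fields, and example data are hypothetical, not Conifers' actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvidenceStep:
    """One recorded step in an AI-led investigation (hypothetical schema)."""
    action: str          # what the system did, e.g. "correlated alerts"
    evidence: list       # data sources the step relied on
    reasoning: str       # why the step led to its conclusion
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def render_audit_trail(steps):
    """Flatten an evidence chain into an ordered, reviewable record."""
    lines = []
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step.action} (evidence: {', '.join(step.evidence)})")
        lines.append(f"   reasoning: {step.reasoning}")
    return "\n".join(lines)

chain = [
    EvidenceStep("Correlated failed logins with VPN alert",
                 ["auth.log", "vpn-gateway alert"],
                 "Same source IP within a five-minute window"),
    EvidenceStep("Classified as credential-stuffing attempt",
                 ["threat-intel IP reputation feed"],
                 "IP appears on a known botnet list"),
]
print(render_audit_trail(chain))
```

A record like this is what lets an analyst retrace each conclusion and what gives compliance teams a durable account of how an incident was handled.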
Human oversight
The system retains a strong emphasis on human involvement. Analysts remain responsible for reviewing and validating AI outputs rather than delegating decisions entirely to automation.
Recommendations generated by the platform are accompanied by explanations. Analysts can challenge or refine these outcomes, ensuring that the final decision aligns with internal policies.
Users can also provide feedback on investigation results. This feedback is incorporated into the system's learning process, enabling it to adjust to evolving operational needs.
Adoption model
Conifers has introduced a staged implementation approach to support gradual adoption of AI within security operations centres.
Teams can run AI-driven investigations alongside traditional human-led processes. This allows organisations to compare outcomes and assess accuracy before increasing reliance on automation.
The approach is intended to reduce resistance to AI adoption. Security teams can expand usage incrementally as confidence in the system grows.
The model also enables organisations to tailor the system to their specific requirements. Analysts can influence how the platform evolves by providing input on its performance and decision-making.
Additional tools
The update includes an interactive feature that allows analysts to query investigation results. This tool is designed to help users explore findings in greater depth and accelerate response times.
Governance controls have also been added. These controls allow organisations to set boundaries on how investigations are conducted, ensuring alignment with internal policies and regulatory requirements.
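Conifers has not detailed how these controls are expressed, but guardrails of this kind typically amount to a policy check applied before an automated action is allowed to run. A hypothetical sketch under that assumption (policy fields, action names, and severity levels are illustrative):

```python
# Hypothetical policy gate: permit automated actions only when they stay
# within organisation-defined boundaries (all names are illustrative).
POLICY = {
    "max_auto_severity": "medium",   # above this, a human must sign off
    "allowed_actions": {"enrich", "correlate", "notify"},
}

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def action_permitted(action, severity, policy=POLICY):
    """Return True only if the action is whitelisted and the incident
    severity is within the automation ceiling; otherwise the case
    falls back to analyst review."""
    within_ceiling = (SEVERITY_ORDER.index(severity)
                      <= SEVERITY_ORDER.index(policy["max_auto_severity"]))
    return action in policy["allowed_actions"] and within_ceiling

print(action_permitted("notify", "low"))         # → True
print(action_permitted("isolate_host", "high"))  # → False, needs review
```

The design point is that the boundary lives in configuration rather than in the AI itself, so organisations can tighten or relax automation without retraining or re-engineering the system.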
The platform's emphasis on explainability and auditability reflects broader industry trends. As AI becomes more embedded in security operations, organisations are seeking systems that provide both efficiency and accountability.
Industry context
Security operations centres face increasing pressure from the volume and complexity of cyber threats. Many organisations have turned to AI to improve response times and reduce manual workloads.
However, concerns remain about the reliability and transparency of automated systems. Security teams must balance efficiency gains with the need for oversight and control.
Conifers' latest update reflects this shift. By focusing on transparency and governance, the company is positioning its platform as a tool that supports, rather than replaces, human analysts.
The company has previously been recognised in industry analysis of AI-driven security tools, highlighting growing competition in this segment.