IT Brief Canada - Technology news for CIOs & IT decision-makers

ZEDEDA, Submer unveil liquid-cooled modular edge AI

Wed, 18th Mar 2026

ZEDEDA and Submer have partnered on modular, liquid-cooled edge infrastructure designed to run GPU-based AI inference in remote and industrial locations where conventional data centre builds are impractical.

The joint offering combines Submer's liquid-cooled compute infrastructure with ZEDEDA's edge software platform. It targets deployments in manufacturing, energy, telecommunications and other operational environments that generate large volumes of data outside central facilities.

Edge AI has drawn growing attention as organisations look to process data closer to where it is created. In many settings, sending data back to cloud regions can add latency and network costs, and can create constraints where connectivity is limited or data policies restrict movement.

The partnership focuses on integrated systems deployed in modular configurations. The companies describe three tiers ("pods", "packs" and "containers"), each designed for different site requirements and compute-density needs.

Three form factors

The smallest configuration, pods, targets compact edge deployments and supports up to eight GPUs or edge inference cards per server. Typical environments include industrial sites and 5G telecom locations.

The mid-sized option, packs, is a ruggedised micro data centre that supports up to 168 GPUs. It is aimed at energy, mining, ports and manufacturing settings, where equipment may need to operate in harsher conditions than a controlled server room.

The largest configuration, containers, scales to megawatt-class installations supporting up to 800 GPUs. Container options come in 10-foot, 20-foot and 40-foot formats. Target users include sovereign AI deployments and GPU-as-a-service operators, as well as sites with limited cloud or network access.

Across all three footprints, the systems are designed for AI workloads such as real-time computer vision, predictive maintenance, industrial automation and agentic AI applications. The focus is on running inference locally and making decisions closer to operations.

Cooling and density

Submer provides the infrastructure stack for the modular systems, including liquid cooling approaches such as immersion and direct-to-chip cooling. Its designs support high-density racks exceeding 100 kW, suited to GPU-heavy installations where traditional air cooling can become a limiting factor.

Liquid cooling has gained momentum across the data centre sector as power densities rise. It is often positioned as a way to improve thermal efficiency and fit more compute into smaller footprints, though engineering and maintenance requirements can differ from air-cooled designs.

Submer says its liquid cooling reduces cooling energy requirements compared with air-cooled infrastructure and eliminates direct water consumption. It also cites lower carbon emissions than traditional air-cooled facilities.

Software orchestration

ZEDEDA contributes its Edge Intelligence Platform, described as an orchestration layer for provisioning, securing and operating distributed edge deployments. In the combined architecture, it manages the edge AI lifecycle across fleets of systems.

A central design principle is "software-defined resilience", which shifts availability from hardware redundancy toward orchestration and workload placement. Instead of relying solely on spare hardware at each site, the platform detects node failures and redistributes workloads at the cluster level.

The approach aims to reduce excess hardware capacity and simplify deployments in locations where duplicate equipment increases cost and logistical overhead. It also aligns with a broader trend in edge computing: standardising operations across dispersed sites that lack on-site IT staff.
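The failover behaviour described above can be illustrated with a minimal sketch. This is not ZEDEDA's actual platform code; the class and method names here are hypothetical, and the round-robin reassignment is just one simple placement policy. It shows the core idea of software-defined resilience: when a node fails, the orchestrator reassigns its workloads across the remaining healthy nodes rather than failing over to dedicated spare hardware.

```python
# Illustrative sketch only: names and logic are assumptions, not
# ZEDEDA's real API. Demonstrates cluster-level workload
# redistribution on node failure ("software-defined resilience").

from dataclasses import dataclass, field


@dataclass
class Cluster:
    """Tracks which workloads run on which edge nodes."""
    placements: dict = field(default_factory=dict)   # workload -> node
    healthy_nodes: set = field(default_factory=set)

    def place(self, workload: str, node: str) -> None:
        self.healthy_nodes.add(node)
        self.placements[workload] = node

    def mark_failed(self, node: str) -> None:
        """Redistribute a failed node's workloads across the surviving
        nodes, instead of relying on duplicate hardware at the site."""
        self.healthy_nodes.discard(node)
        survivors = sorted(self.healthy_nodes)
        if not survivors:
            return  # nothing left to reschedule onto
        orphaned = [w for w, n in self.placements.items() if n == node]
        for i, workload in enumerate(orphaned):
            # Simple round-robin placement over surviving nodes.
            self.placements[workload] = survivors[i % len(survivors)]


cluster = Cluster()
cluster.place("vision-inference", "node-a")
cluster.place("predictive-maint", "node-b")
cluster.mark_failed("node-a")
print(cluster.placements["vision-inference"])  # reassigned to node-b
```

In a real deployment the placement decision would also weigh GPU capacity, data locality and site constraints, but the principle is the same: availability comes from the orchestration layer, not from redundant boxes at each location.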

The joint systems will include pre-selected, validated hardware and GPU partners. Customers will also have the option to bring their own hardware, which may appeal to organisations with established supplier relationships or standardised server builds.

Industry focus

The partnership targets industrial and operational settings that generate data away from central data centres, including factory floors, offshore energy installations and telecom aggregation points. These sites often impose tight constraints on space, power availability and environmental conditions.

In such environments, teams often prioritise ruggedisation, predictable maintenance cycles and remote management. Compute platforms must also align with site safety requirements and local regulations, particularly in critical infrastructure sectors.

"As intelligence moves from the cloud into the physical world, the ability to run AI anywhere - in a remote factory, an offshore platform, or telecommunications networks - is a fundamental requirement. The world's most critical operations generate enormous volumes of data far from any data center, and until now, the infrastructure to act on that data intelligently simply couldn't follow. Our collaboration with Submer makes that possible now," said Said Ouissal, Chief Executive Officer and Founder of ZEDEDA.

"AI is rapidly moving from centralised cloud environments into real-world operations, from industrial sites to telecom networks and remote energy infrastructure," said Patrick Smets, Chief Executive Officer of Submer.

"Delivering that intelligence requires purpose-built AI infrastructure that operates efficiently in environments where traditional data centers simply cannot exist. By combining Submer's liquid-cooled high-density AI infrastructure with ZEDEDA's edge intelligence platform, we're enabling organisations to deploy scalable, resilient AI infrastructure anywhere it is needed," Smets said.

The companies are engaging with initial industrial and telecommunications customers, with pilot deployments expected later this year.