IT Brief Canada - Technology news for CIOs & IT decision-makers

NetApp unveils EF50 & EF80 flash arrays for AI, HPC

Wed, 18th Mar 2026

NetApp has launched two new all-flash storage systems designed for AI, high-performance computing (HPC), and database workloads where storage throughput and latency can limit GPU utilisation.

The EF50 and EF80 are the latest additions to the vendor's EF-Series block storage lineup. Target use cases include AI training and inference, scratch space for HPC, and transactional database environments.

Storage is becoming a more visible constraint in AI infrastructure as organisations scale datasets and GPU clusters. AI training pipelines place heavy demands on sequential reads and writes, and shared environments require predictable performance under contention.

Performance claims

NetApp says the EF50 and EF80 deliver more than 110GBps of read throughput and 55GBps of write throughput, a 250% improvement over the previous generation.

The company also highlighted density and power efficiency, citing 63.7GBps per kW and up to 1.5PB in 2U.
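Taken together, the throughput and efficiency figures imply a rough power envelope for the array. The sketch below is purely illustrative arithmetic from the numbers quoted above; it assumes the 63.7GBps-per-kW figure applies to the peak read throughput, which NetApp's announcement does not spell out.

```python
# Illustrative check of NetApp's quoted figures (assumption: the
# efficiency figure is measured against peak read throughput).
read_gbps = 110.0    # claimed read throughput, GBps
gbps_per_kw = 63.7   # claimed power efficiency, GBps per kW

# Implied power draw at full read bandwidth.
implied_kw = read_gbps / gbps_per_kw
print(f"Implied power draw: {implied_kw:.2f} kW")  # ~1.73 kW
```

On these assumptions, sustaining the full 110GBps read rate would correspond to roughly 1.7kW, which gives a sense of scale for a 2U system holding up to 1.5PB.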

The EF50 and EF80 target deployments where storage is dedicated to a specific workload and must sustain high bandwidth, rather than serving as a general-purpose shared platform. These environments often include GPU servers that depend on fast data feeds and consistent response times.

AI and HPC focus

NetApp pointed to enterprise customers building AI infrastructure and "neocloud" providers. It also cited emerging deployment models such as sovereign AI clouds and AI-powered manufacturing.

The company also referenced parallel file systems commonly used in HPC, saying the EF50 and EF80 can be paired with platforms such as Lustre and BeeGFS. These file systems often sit on high-bandwidth storage for scratch workloads and temporary datasets used during model training or simulation runs.

NetApp said the aim is to reduce the time GPUs spend idle while waiting for data, a cost that can grow as organisations add accelerators faster than they expand storage or networking.

"Data is the key component to delivering business value for enterprises, underpinning performance-hungry workloads like AI or databases," said Sandeep Singh, Senior Vice President and General Manager of Enterprise Storage at NetApp.

"As businesses contend with ever-increasing data volumes and performance-intensive applications such as AI model training, AI inferencing and high-performance computing, they need infrastructure that delivers speed, scalability and efficiency without added complexity," Singh said.

"NetApp delivers a comprehensive portfolio that addresses every stage of the AI data pipeline from collecting and preparing data to feeding it to GenAI models that produce business insights. With the new EF-Series systems, purpose-built for extreme performance, we're enabling customers to deploy and scale high-throughput, low-latency workloads quickly and efficiently, while reducing data center footprint and operational overhead."

Customer and analyst views

Partners and customers used the announcement to highlight demand for throughput and capacity in AI-era infrastructure. CDW said organisations want to increase "raw performance" as they work with larger datasets and more compute-intensive workloads.

"As we navigate the AI era, many enterprises are finding that they need to maximize their raw performance to extract the most value from their data," said Clayton Vipond, Senior Solution Architect at CDW.

"The refreshed NetApp EF-Series deliver the throughput and capacity businesses need to scale high-powered workloads that transform data into insights and outcomes," Vipond said.

Teradata, which sells analytics and data platform software, said storage performance remains important for its most demanding deployments and linked this to modernisation efforts.

"NetApp's EF-Series systems give Teradata the storage performance needed to support our most demanding workloads," said Sumeet Arora, Chief Product Officer at Teradata.

"We appreciate that NetApp continues to invest in this technology, and with the enhanced performance of the new models, we look forward to exploring opportunities to reduce infrastructure complexity and support the AI and data modernization initiatives our customers care about," Arora said.

Industry analyst Omdia tied the refresh to the distinct performance characteristics of AI workloads compared with mainstream enterprise applications.

"By delivering a high-performance storage system that supports parallel file systems like Lustre and BeeGFS, NetApp is making its mark as emerging industries, such as neocloud, emerge to support the AI-Era," said Simon Robinson, Principal Analyst at Omdia.

"Our research validates that AI workloads require a level of raw performance unmatched by any mainstream business workload to date. With the new EF-series systems, NetApp is delivering a solution that addresses the performance needs of large-scale AI projects, whether model training or inference," Robinson said.

Installed base

The EF-Series builds on the E-Series heritage in NetApp's portfolio. NetApp said the platform has more than 1 million installations, which it cited as evidence of durability and reliability.

NetApp said the EF50 and EF80 are intended for organisations that want dedicated, high-bandwidth storage for AI, HPC, and transactional database deployments, including environments built around parallel file systems such as Lustre and BeeGFS.