IT Brief Canada - Technology news for CIOs & IT decision-makers

PyTorch Foundation adds Helion for portable AI kernels

Tue, 7th Apr 2026

The PyTorch Foundation has added Helion as a foundation-hosted project, bringing a kernel authoring tool created at Meta into its open source AI portfolio.

Helion joins projects including DeepSpeed, PyTorch, Ray and vLLM. Its addition makes kernel authoring a more central part of the PyTorch ecosystem as developers work across a wider range of chips, software layers and model designs.

Kernel development has become more prominent as AI work shifts beyond model training into large-scale inference. Engineering teams are increasingly trying to run models across different hardware targets without rewriting low-level code for each platform.

Helion is a Python-embedded domain-specific language for writing machine learning kernels. It is designed to compile to multiple backends, including Triton and TileIR, with the aim of giving developers a higher-level way to write kernels while automating more of the tuning needed to run them efficiently.

The project is intended to reduce manual low-level coding and support portability across hardware environments. The Foundation also highlighted ahead-of-time autotuning, which tests candidate kernel implementations and selects an efficient one for a given hardware target and workload.
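The autotuning idea described above can be illustrated with a minimal, framework-agnostic sketch. This is not Helion's actual API; the candidate functions and timing loop below are purely illustrative: benchmark each interchangeable implementation of a kernel on representative inputs ahead of time, then keep the fastest.

```python
import time

def autotune(candidates, args, repeats=5):
    """Return the candidate callable with the lowest measured runtime.

    `candidates` is a list of interchangeable implementations of the
    same operation; `args` are representative inputs to benchmark with.
    """
    best_fn, best_time = None, float("inf")
    for fn in candidates:
        fn(*args)  # warm-up call (e.g. to trigger any one-time compilation)
        # Take the minimum over several timed runs to reduce noise.
        elapsed = min(_timed(fn, args) for _ in range(repeats))
        if elapsed < best_time:
            best_fn, best_time = fn, elapsed
    return best_fn

def _timed(fn, args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Two toy "kernel" candidates computing the same elementwise sum.
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

# Pick whichever implementation is faster on a representative input.
best = autotune([sum_loop, sum_builtin], (list(range(100_000)),))
```

A real kernel autotuner searches a much larger space (tile sizes, memory layouts, launch parameters) and caches the winning configuration per hardware target, but the selection principle is the same.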

Matt White, Global CTO of AI at the Linux Foundation and CTO of the PyTorch Foundation, said Helion reflects a broader need in open source AI development.

"Helion joining the PyTorch Foundation as its newest project reflects where the open AI ecosystem needs to go next: higher-level performance portability for kernel authors," White said.

He added that the project could simplify kernel writing in the PyTorch community.

"Helion gives engineers a much more productive path to writing high-performance kernels, including autotuning across hundreds of candidate implementations for a single kernel. As part of the PyTorch Foundation community, this project strengthens the foundation for an open AI stack that is more portable and significantly easier for the community to build on," he said.

Wider changes

Alongside the Helion announcement, ExecuTorch is becoming part of PyTorch Core. ExecuTorch also started at Meta and focuses on extending PyTorch model functionality for edge and on-device environments.

The change places another Meta-originated project deeper within the PyTorch structure under the Foundation's open governance model. Ecosystem and technical decisions for ExecuTorch will continue to be made in a community-guided way.

These changes highlight how the PyTorch Foundation is expanding beyond the core framework into more specialised layers of the AI software stack. Those layers now span model training, inference and deployment, as well as lower-level tooling intended to ease work across different hardware platforms.

Jana van Greunen, Director of PyTorch Engineering at Meta, said Helion is meant to lower the barrier for developers writing kernels.

"Helion brings kernel authoring into PyTorch, making it simpler, portable and accessible to every developer. Joining the PyTorch Foundation opens Helion to the broader hardware ecosystem, so developers can write one kernel and have it run fast everywhere," van Greunen said.

Mark Collier, Executive Director of the PyTorch Foundation, linked the move to the technical challenges created by differing architectures.

"By bringing Helion into the PyTorch Foundation community, we are meeting the technical frontier of AI head on. The project provides a vital layer of abstraction that makes it easier for developers to target different architectures and accelerate AI adoption. This addition is integral to shaping and fueling production-grade AI across industries," Collier said.

The PyTorch Foundation operates under the Linux Foundation as a vendor-neutral home for the PyTorch framework and related open source AI projects. Its portfolio now includes DeepSpeed, Helion, PyTorch, Ray and vLLM, reflecting a push to support more of the tooling used to build, tune and deploy AI models.