IT Brief Canada - Technology news for CIOs & IT decision-makers

Red Hat & Meta unite to drive open source AI for business

Yesterday

Red Hat and Meta have announced a collaboration aimed at advancing open source generative artificial intelligence (AI) for enterprise use.

The collaboration began with Red Hat enabling the Llama 4 model family from Meta on Red Hat AI and the vLLM inference server. This initial integration simplifies how businesses deploy generative AI applications and agents. Both companies plan to continue this effort by promoting alignment between the Llama Stack and vLLM community projects, with the goal of creating unified frameworks for open generative AI workloads.

Red Hat and Meta indicated that they are championing open standards to ensure that generative AI applications operate efficiently across hybrid cloud environments, independent of specific hardware accelerators or computing environments. This direction is aimed at creating consistency and reducing costs in enterprise AI deployments.

Mike Ferris, Senior Vice President and Chief Strategy Officer at Red Hat, stated: "Red Hat and Meta both recognize that AI's future success demands not only model advancements but also inference capabilities that let users maximize the breakthrough capabilities of next-generation models. Our joint commitment to Llama Stack and vLLM are intended to help realize a vision of faster, more consistent and more cost-effective gen AI applications running wherever needed across the hybrid cloud, regardless of accelerator or environment. This is the open future of AI, and one that Red Hat and Meta are ready to meet."

According to Gartner, by 2026 over 80% of independent software vendors are expected to have embedded generative AI capabilities in their enterprise applications, up from less than 1% today. Red Hat and Meta's collaboration addresses the need for open and interoperable foundations, particularly at the application programming interface (API) layer and within inference serving, which handles real-time operational AI workloads.

Llama Stack, developed and released as open source by Meta, provides standardized building blocks and APIs for the full lifecycle of generative AI applications. Red Hat is actively contributing to the Llama Stack project, which the company expects will improve options for developers who are building agentic AI applications on Red Hat AI. Red Hat has committed to supporting a range of agentic frameworks, including Llama Stack, in order to offer customers flexibility in their tooling and development approaches.

With these developments, Red Hat aims to create an environment that accelerates the development and deployment of next-generation AI solutions aligned with emerging technologies and methods in the sector.

On the inference side, the vLLM project acts as an open source platform supporting efficient inference for large language models such as the Llama series. Red Hat has made leading contributions to vLLM, ensuring immediate support for Llama 4 models. Meta has pledged to increase its engagement with the vLLM community project, aiming to enhance its capabilities for cost-effective and scalable AI inference. The project is also part of the PyTorch ecosystem, which Meta and others support, contributing to an inclusive AI tools environment.

Ash Jhaveri, Vice President of AI and Reality Labs Partnerships at Meta, said: "We are excited to partner with Red Hat as we work towards establishing Llama Stack as the industry standard for seamlessly building and deploying generative AI applications. This collaboration underscores our commitment to open innovation and the development of robust, scalable AI solutions that empower businesses to harness the full potential of AI technology. Together with Red Hat, we are paving the way for a future where Llama models and tools become the backbone of enterprise AI, driving efficiency and innovation across industries."

The collaboration formalises the intent of both companies to bolster open source AI foundations, facilitate interoperability, and expand choice for enterprise customers in building and deploying generative AI solutions across various computing environments.
