Ion Video unveils virtual structure for AI-driven clips
Ion Video has outlined a plan to change how video is stored and reused by separating a file's structure from its underlying audio and visual data. It says this would let AI systems assemble new video sequences without re-editing and re-rendering the source material.
The Melbourne-based company described a process in which a video's internal structure becomes a small virtual representation while the original media data remains unchanged. That virtual file would guide software to reconstruct sequences of frames on demand, rather than producing a newly rendered file each time a user or system requests changes.
The approach targets a longstanding constraint in digital video. AI tools can already analyse footage and identify objects, scenes, or timestamps. Ion Video argues they still struggle to recombine existing clips into new viewing experiences without the conventional workflow of editing, exporting, and storing new versions.
Virtual structure
The method leaves a video's raw data untouched as a single fixed asset while separating out its internal structure. Ion Video says its patents cover this separation and the creation of a virtual structure that can reference underlying video and audio components without altering them.
In practical terms, a system could assemble a sequence of frames when requested, then discard it once playback ends. This differs from standard workflows, where each modification produces a fresh file, increasing storage and compute demands.
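The idea described above resembles an edit decision list: a lightweight object that references frame ranges in unchanged source files and is resolved only at playback. The following is a minimal sketch of that concept; the class and field names are invented for illustration and do not come from Ion Video's actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    source_id: str    # identifier of the untouched source asset
    start_frame: int  # first frame to reference (inclusive)
    end_frame: int    # last frame to reference (exclusive)

@dataclass
class VirtualSequence:
    segments: list

    def resolve(self):
        """Yield (source_id, frame) pairs for a player to fetch on demand.

        No frames are copied or re-rendered; the sequence exists only
        as references and can be discarded after playback ends.
        """
        for seg in self.segments:
            for frame in range(seg.start_frame, seg.end_frame):
                yield (seg.source_id, frame)

# Recombine two clips without producing a new rendered file.
seq = VirtualSequence([
    Segment("clip_a.mp4", 120, 150),
    Segment("clip_b.mp4", 0, 30),
])
frames = list(seq.resolve())
print(len(frames))  # 60 frame references, zero new media data written
```

Each new "version" of a video is then just another small `VirtualSequence`, while the heavy media data stays as-is on storage.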
Finbar O'Hanlon, Ion Video's lead innovator, traced the problem to the way video files were designed. He argued that advances in streaming and cloud services did not change the core format of video as a finished asset.
"Traditional video files are designed as completed, rendered assets. Once produced, they become static objects intended purely for playback and distribution. Platforms can compress, stream and analyse them, but they cannot easily manipulate or recombine their internal components without creating entirely new files," O'Hanlon said.
From Linius
O'Hanlon's work on video virtualisation dates back more than a decade. In 2009 he filed a series of patents that later fed into the creation of Linius Technologies, which went on to list on the ASX. Ion Video now presents its work as a re-engineered and expanded version of that earlier concept, with additional patents reflecting how AI is being used to query and generate media experiences.
He argued that AI systems work best with data that can be recomposed dynamically. In his view, text, images and code fit that model, while video largely does not.
"Once video is rendered, it becomes a sealed object. It was designed for playback and distribution, not for intelligent systems to reorganise, recombine or compose with," O'Hanlon said.
Prompted video
Ion Video positioned the technology as a way for AI to assemble content in response to instructions. One example described a user request for "five Asian recipes under $15". The company said an AI system could scan multiple cooking videos, identify relevant scenes and assemble a sequence containing the required steps. A follow-up request could remove commentary and retain only cooking actions and finished dishes.
The company says the key point is that the system would not create a new rendered file for each variant. Instead, it would reference existing data and arrange it through the virtual structure.
"Instead of editing or generating new files, an AI model can dynamically assemble content based on instructions or prompts," O'Hanlon said.
Infrastructure focus
Ion Video sought to distinguish its business model from consumer platforms. It does not plan to build a direct-to-user video destination. Instead, it described its technology as infrastructure that sits beneath existing video ecosystems.
"Our IP and technology is an infrastructure layer that sits beneath existing video ecosystems. The aim is to help enable hyperscale cloud providers, AI developers and streaming platforms to innovate and integrate intelligent programmable video capabilities," O'Hanlon said.
The company linked its pitch to the scale of global video consumption. It cited an estimate that video accounts for 82% of all internet traffic and pointed to the volume of new uploads to platforms such as YouTube. These numbers are commonly referenced in industry discussions about bandwidth and storage, though underlying estimates vary by methodology and time period.
Cost claims
Ion Video said the economics would depend on reducing repeated transcoding, storage and compute work across large video libraries. O'Hanlon said the company believes it can reduce such processing costs significantly in some scenarios.
"We believe we can save upwards of 70% in transcoding, storage and compute costs for processing video," O'Hanlon said.
He also outlined a licensing approach aimed at large cloud providers, with fees linked to what he described as "enablement value". O'Hanlon referenced projected infrastructure spending at Alphabet and argued that even small percentage reductions in video-related expenditure could be meaningful at hyperscale.
Ion Video said it has filed additional patents to extend its intellectual property around video virtualisation. O'Hanlon said the timing matters as AI systems increasingly interpret user intent and generate personalised digital experiences. "For the first time, AI can work with video the way it works with text," he said.