AWS Elemental Inference Brings Real-Time AI to Live Video Production
Live sports broadcasts can now output a traditional horizontal signal for TVs while the same signal is cropped and reframed into a vertical format for mobile viewers on platforms such as TikTok. The capability comes from AWS Elemental Inference, unveiled at NAB 2025, which performs AI processing inside live video encoding workflows.
Elemental Inference runs AI inference alongside standard video encoding, identifying key moments and cropping the feed for a vertical mobile format. According to The Broadcast Bridge, the service produces vertically formatted outputs within seconds of the associated live encoded output.
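AWS has not disclosed how key moments are identified. Purely as an illustrative sketch, a common baseline for this kind of detection flags frames where a per-frame excitement signal (crowd audio energy, for example) spikes well above its recent rolling average; the function below is a generic example, not AWS's method:

```python
def flag_key_moments(signal, window=30, k=3.0):
    """Flag indices where a per-frame excitement signal spikes more than
    k standard deviations above its rolling mean over the last `window`
    frames. Generic baseline for illustration only; AWS has not disclosed
    how Elemental Inference detects key moments."""
    flagged = []
    for i in range(window, len(signal)):
        hist = signal[i - window:i]
        mean = sum(hist) / window
        var = sum((v - mean) ** 2 for v in hist) / window
        if signal[i] > mean + k * var ** 0.5:  # spike above rolling baseline
            flagged.append(i)
    return flagged
```

A production system would use learned models rather than a fixed threshold, but the pattern of scoring the live signal and flagging outliers in real time is the same.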
Elemental Inference uses existing video encoding infrastructure, but unlike many services today it does not require a secondary post-production pass; the AI processing runs alongside live encoding, according to AWS's NAB demo materials. Its two main functions are intelligent cropping, which preserves the composition of tracked subjects when converting horizontal video into vertical outputs, and event detection, which flags key moments in the video stream.
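The geometry behind the cropping step is straightforward to sketch: given a subject's horizontal position in a widescreen frame, take a full-height 9:16 window centered on the subject and clamp it to the frame edges. This is a minimal illustration of the idea, not AWS's published implementation:

```python
def vertical_crop_window(frame_w, frame_h, subject_x, aspect=(9, 16)):
    """Return the (left, right) x-range of a full-height vertical crop
    centered on the tracked subject, clamped to the frame bounds.
    Illustrative geometry only; AWS has not published how Elemental
    Inference computes its crops."""
    crop_w = frame_h * aspect[0] // aspect[1]   # width of a 9:16 slice
    left = subject_x - crop_w // 2              # center the crop on the subject
    left = max(0, min(left, frame_w - crop_w))  # keep the crop inside the frame
    return left, left + crop_w
```

For a 1920x1080 frame, the vertical slice is 607 pixels wide, so a subject near either sideline pins the crop against the frame edge rather than cutting off-screen.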
The automation extends far beyond simply center-cropping horizontal frames. As stated by theCUBE Research, the technology turns live video into real-time, AI-driven engagement by making compositional decisions about which areas of a horizontal frame to preserve when generating vertical outputs. The system also tracks moving subjects and adjusts framing accordingly, a capability that is particularly important for sports and other live events where the action moves laterally across the frame.
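One practical challenge in reframing on a moving subject is jitter: if the crop snaps to every raw detection, the vertical output shakes. A standard remedy, shown here as a generic sketch rather than anything AWS has described, is to exponentially smooth the subject's position so the crop pans steadily:

```python
class SmoothTracker:
    """Exponentially smooth a tracked subject's x-position so a crop
    window pans steadily instead of jittering with every detection.
    Generic smoothing technique; not AWS's published method."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # higher alpha follows the subject more aggressively
        self.x = None

    def update(self, detected_x):
        if self.x is None:
            self.x = float(detected_x)          # initialize on first detection
        else:
            self.x += self.alpha * (detected_x - self.x)  # blend toward new position
        return self.x
```

The smoothed position would then feed whatever crop computation the pipeline uses, trading a small lag for a stable virtual camera move.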
The timing of the release also aligns with growing industry pressure to reach mobile-first audiences without corresponding increases in production cost. Traditional broadcasters compete with natively vertical platforms while running production workflows built around horizontal formats, and delays caused by manual conversion leave content looking stale by the time it reaches social platforms.
AWS framed Elemental Inference as embeddable infrastructure rather than a standalone product. Based on NAB Show documentation, partners are building the capability into their own products to decrease production costs and reach new audiences efficiently. This partnership model mirrors AWS's typical go-to-market strategy: provide the base technology and let third-party developers build the customer-facing applications.
AWS also demonstrated Elemental Inference across multiple venues at NAB, indicating a concerted push to integrate it into standard broadcast workflows. Per SiliconANGLE, AWS claimed the service decreases manual effort and expands audience reach, though no specific data points on efficiency gains or viewership increases were included in the available information.

Several production realities emerged from the NAB demonstrations. First, live events can now generate platform-specific content without dedicated mobile production teams. Second, vertical video generation is no longer confined to post-production; it can happen in real time. Third, clip detection is automatic, though human editors are still expected to oversee and approve final publication decisions. Fourth, integration happens at the infrastructure level rather than requiring new production equipment. Fifth, cost models may shift from being driven primarily by editing personnel to being driven by the compute resources required for processing.
To date, there has been little public discussion of Elemental Inference's limitations. Questions about how the technology performs in multi-subject environments, how graphics overlays are handled, and what the false-positive rate is in clip detection were left unanswered in the publicly available documentation. AWS declined to provide technical specifications beyond those demonstrated at NAB.
Traditional broadcasters will soon face a decision: scale their human production teams to meet growing demand for multi-platform distribution, or entrust AI systems with real-time editorial decisions about both shot framing and key-moment identification. The most pragmatic path may lie between the two, using AI for initial processing followed by human review, although such a hybrid model would sacrifice many of the efficiency benefits of fully automated systems.
Next year will bring the first test of whether this kind of automated content delivery is reliable under high-pressure conditions such as major live events, where production errors carry serious consequences. Major sports leagues preparing for their summer seasons will likely run dual workflows until they can determine whether AI systems alone can be trusted to select the best vertical crops for fans watching on mobile devices. If Elemental Inference holds up under those conditions, the first fully AI-generated vertical broadcasts could appear by the early fall sports seasons, a shift that would dramatically affect the economic model supporting multi-platform content distribution.
