Amazon Web Services demonstrated new AI tools that automatically reframe horizontal broadcasts for mobile screens in real time, targeting broadcasters struggling to serve TikTok-native audiences.
On the NAB Show floor yesterday, AWS engineers showed their Elemental Inference service tracking a basketball game and delivering a vertical feed with just 200 milliseconds of delay. The fully managed service runs AI inference in parallel with live encoding, automatically cropping and reframing horizontal footage to follow the action. No manual camera operator required.
The timing reflects a real problem. Broadcasters have watched their audiences fragment across platforms, with younger viewers consuming sports clips vertically on TikTok and Instagram rather than horizontally on traditional screens. Creating separate vertical feeds has meant doubling production crews or accepting inferior automated crops that miss critical moments. AWS's demonstration suggests a middle path: AI that understands what matters in frame and reframes accordingly.
According to TVTechnology's coverage from the show floor, the Elemental Inference service represents part of a broader push into AI-assisted production workflows. The same NAB Show featured automated clip generation for sports broadcasters and AI-powered talent tracking systems, all targeting the same core problem of producing more content formats with existing resources.
The technical approach appears straightforward: the AI analyzes the live video stream, identifies main subjects and action, then generates optimal crop coordinates that maintain visual coherence while maximizing vertical screen space. AWS hasn't disclosed the model architecture or training data sources.
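As a rough illustration of that idea, here is a minimal Python sketch of the geometry involved: deriving a 9:16 crop window from a detected subject position inside a 16:9 frame. AWS hasn't published how Elemental Inference computes its crops, so the function name, signature, and centering heuristic below are assumptions, not the service's actual logic.

```python
# Hypothetical sketch: derive a 9:16 crop window from a detected
# subject center inside a 16:9 source frame. Illustrative only;
# AWS has not disclosed Elemental Inference's actual method.

def vertical_crop(frame_w: int, frame_h: int,
                  subject_x: float) -> tuple[int, int, int, int]:
    """Return (x, y, w, h) of a 9:16 crop centered on the subject.

    The crop keeps the full source height, derives its width from
    the 9:16 target ratio, and clamps so it never leaves the frame.
    """
    crop_h = frame_h                      # use full vertical resolution
    crop_w = round(crop_h * 9 / 16)       # 9:16 aspect ratio
    x = round(subject_x - crop_w / 2)     # center the window on the subject
    x = max(0, min(x, frame_w - crop_w))  # clamp to frame bounds
    return x, 0, crop_w, crop_h

# Example: a 1920x1080 feed with the ball detected at x=1400
# yields a 608x1080 window starting at x=1096.
print(vertical_crop(1920, 1080, 1400))
```

In practice a production system would weigh multiple subjects, ball position, and shot composition rather than a single center point; the clamp at the end simply keeps the window inside the source frame.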
How well the system handles edge cases remains unclear. Sports broadcasts follow predictable patterns: the ball goes somewhere, the players follow. But live events also include crowd reactions, sideline drama, and instant replays with multiple angles. The 200-millisecond processing time suggests there is minimal buffering available for error correction.
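For intuition on why that buffering matters, consider the jitter problem: per-frame detections bounce around, and a virtual camera that follows them raw looks shaky. A causal filter such as an exponential moving average can steady the crop without any lookahead, which is roughly what a 200-millisecond budget forces. The sketch below is purely illustrative under that assumption; AWS hasn't disclosed how, or whether, Elemental Inference smooths its crops.

```python
# Hypothetical sketch: steady noisy per-frame subject detections
# with an exponential moving average, a causal filter that needs
# no future frames. Illustrative only, not AWS's disclosed approach.

class CropSmoother:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # lower alpha = steadier, slower camera
        self.center = None   # smoothed horizontal crop center

    def update(self, detected_x: float) -> float:
        """Blend the newest detection into the running center."""
        if self.center is None:
            self.center = detected_x
        else:
            self.center += self.alpha * (detected_x - self.center)
        return self.center

smoother = CropSmoother(alpha=0.2)
for x in [900, 960, 1400, 1380, 1390]:    # noisy per-frame detections
    print(round(smoother.update(x)))      # crop center drifts, not jumps
```

The trade-off lives in the alpha parameter: a steadier virtual camera also reacts more slowly to a fast break, which is exactly the kind of editorial judgment the system has to encode.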
The service arrives as broadcasters face mounting pressure to serve what one NAB panel called "the vertical generation," viewers who hold their phones naturally and rarely rotate them for video. NBC Sports recently reported that 73% of its Olympic clips were viewed vertically on social platforms, despite being shot and edited horizontally.

AWS declined to share pricing details or a general-availability timeline at the show. The company demonstrated the technology as part of its broader Elemental Media Services suite, which already handles video processing for major broadcasters including Fox Sports and Discovery.
The demonstration also exposed the infrastructure requirements. Real-time AI inference at broadcast scale demands heavy compute resources, particularly for 4K streams. AWS's advantage here is obvious: it owns the servers. Competitors would need to either build similar infrastructure or accept higher latency.
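Some back-of-envelope arithmetic (illustrative figures, not AWS's) suggests the scale involved:

```python
# Back-of-envelope arithmetic: the raw pixel throughput an
# inference stage must keep pace with for a single UHD stream.
# Illustrative numbers, not AWS benchmarks.

width, height, fps = 3840, 2160, 60          # one 4K broadcast feed
pixels_per_second = width * height * fps     # ~498 million pixels/s

# Detectors typically run on a downscaled copy; even at 1/16 the
# area, that is still ~31M pixels/s per stream, sustained, live.
downscaled = pixels_per_second / 16

print(f"{pixels_per_second / 1e6:.0f}M px/s raw, "
      f"{downscaled / 1e6:.0f}M px/s downscaled")
```

Multiply that by every concurrent feed a broadcaster carries, and the appeal of running inference next to the encoder, on hardware the provider already owns, becomes clear.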
Live broadcasters can potentially deliver platform-native content without separate production crews. The 200ms delay enables near-real-time vertical feeds suitable for live streaming. Sports leagues could automatically generate vertical clips during games, not hours later. Local news stations might finally crack mobile-first distribution without redesigning their entire workflow. The infrastructure requirements likely limit this to well-resourced broadcasters initially.
Several other vendors showed similar AI-powered production tools at NAB, suggesting this represents an industry-wide shift rather than a single vendor's innovation. The broader question isn't whether AI will reshape video production. That's already happening. The question is whether automated reframing can match the editorial judgment of human operators who understand not just where the action is, but what story they're trying to tell.
