Product Announcements
Lightricks Drops LTX-2: An Open Model With Native Sound and 4K Video
Lightricks’ LTX-2 brings native sound and 4K video generation as an open model, pushing creative tools beyond text-to-video incumbents.

The Israeli photo app giant just released what appears to be the new leading open-source AI model, capable of generating synchronized audio and video at cinema-quality levels on a high-end gaming rig.
You can download LTX-2 now and create a 4K video of a "cyberpunk cat playing synthesizer in neon-lit Tokyo" with synchronized music on high-end consumer GPUs.
While OpenAI's Sora 2 and Google's Veo model suite sit behind paywalls and APIs, Lightricks joins the makers of models such as Hunyuan and WAN in releasing open weights for local inference.
"It delivers the kind of quality and performance teams usually associate with closed systems, without giving up control, transparency, or the ability to customize," Zeev Farbman, Lightricks' co-founder and CEO, said in the release announcement. The timing is deliberate: NVIDIA showcased the model at CES 2026, releasing optimization guides that confirm it can generate 50 fps video at native 4K resolution—specs that match or exceed those of most proprietary systems.
NVIDIA's tests show LTX-2 can generate short, synchronized video clips locally on powerful consumer GPUs such as the RTX 5090.
The license allows companies with less than $10 million in annual recurring revenue to use LTX-2 for free. That significantly lowers the barrier for indie game studios, content creators, and AI video startups to access high-quality generative video tools, and it makes closed systems far less economically competitive.
"Open releases of multimodal models are rare," Farbman noted during a Reddit AMA this week. "We built LTX-2 to be something you can actually use: it runs locally on consumer GPUs."
The model is based on Lightricks' original LTX Video, released in December 2024, which could generate only silent clips. Adding synchronized audio meant a complete redesign, according to the company. It has not said where the audio training data came from, a notable omission given ongoing lawsuits over music-generation models.
Lightricks' open-source move challenges the current AI video paradigm, in which OpenAI and Google keep their models locked behind paid APIs. With independent labs like Black Forest Labs releasing open image models, the arrival of freely accessible, production-grade video tools represents a new chapter.
The next test will come from the community. Within hours of release, developers started sharing modified versions optimized for different hardware setups.
Can open weights undercut their closed competitors? Lightricks seems to think so.


