After DeepSeek, another Chinese AI model STUNS everyone, can generate deepfake videos from a single photo; it's called…

OmniHuman-1, an advanced AI framework by ByteDance -- the parent company of TikTok-- can purportedly generate deepfake videos using a single image.

india.com | Published: February 10, 2025 12:13 AM IST

Barely weeks after China unleashed its revolutionary DeepSeek R1 AI chatbot, another powerful Chinese AI model is creating waves in the digital world. OmniHuman-1, an advanced AI framework by ByteDance — the parent company of TikTok — can purportedly generate deepfake videos using a single image.

“OmniHuman-1 is an advanced AI framework by ByteDance that generates realistic human videos from a single image and motion signals, such as audio or video,” according to the description on the OmniHuman website.

Why is OmniHuman-1's deepfake generation revolutionary?

AI-generated deepfake videos are not exactly a novel idea. However, unlike conventional AI video tools, OmniHuman does not need dozens of pictures and audio samples of the subject to generate a deepfake video. Instead, ByteDance claims, its AI tool can generate an entire realistic video from a single image.

Additionally, ByteDance’s OmniHuman-1 has the ability to generate a full-body video of a person, unlike other generative AI tools that can only generate the face of the subject. Using a single image, OmniHuman can generate a full-body AI video, complete with hand gestures, body movements, and realistic voice patterns, unlike the robotic voices used by other models.


How was OmniHuman-1 trained?

According to reports, ByteDance researchers used 18,700 hours of human video data to train the OmniHuman-1 AI model, apart from text, audio and body movement samples. OmniHuman-1 was trained with an omni-conditions approach, which means that, like a 3D model, every movement has been captured, making the generated video look remarkably real.

Developers also revealed that they used multiple conditioning signals to train the OmniHuman model — including text, audio, and poses — to minimise data wastage during training.

The OmniHuman AI model is undoubtedly a breakthrough in AI video generation, boasting high realism, versatile inputs, multimodal functionality, and the ability to work with a limited amount of data. However, despite all these pros, the model currently has various limitations, such as its limited availability and resource-intensive computational requirements.


