Tencent Launches HunyuanPortrait, an Open-Source AI Model for Animating Portraits
Imagine turning a dusty old photograph into a moving, living portrait. That is now possible thanks to Tencent's HunyuanPortrait. Unveiled on Tuesday, this model is unlike other deepfake generators. Built on a diffusion architecture, HunyuanPortrait animates still images with eerily lifelike motion. Give it a picture and a reference video and call it a day; the AI does the rest, attending to the tiny details of the face along with the fluidity of head and body movements. The trick lies in its ability to accurately capture facial cues and spatial motion and map them onto the original image. The best part? Tencent has open-sourced HunyuanPortrait, so anyone can download it and explore. Get ready to see the past reborn.
Tencent’s HunyuanPortrait Can Bring Still Portraits to Life
Tencent Hunyuan has just dropped a bombshell into the open-source world! The new HunyuanPortrait is available for academic and research projects: the code can be found on Tencent's GitHub pages and Hugging Face, and a pre-print paper describing the methodology is up on arXiv. The only caveat is that the model cannot yet be used commercially.
Imagine your favorite portrait coming to life. This is what HunyuanPortrait does, morphing static images into eerily realistic animated videos. Given a reference picture and a driving video, it maps the facial movements and head positions from the video onto the still image. The slightest movement from the real actor is seamlessly transferred to the portrait, with even the subtlest expression cloned onto the digitally alive image.

HunyuanPortrait architecture. Photo Credit: Tencent
Tencent's talented AI engineers have breathed life into stillness through HunyuanPortrait, as discussed on its model page. Think of it as an enhancement over Stable Diffusion models! By decoupling motion from identity using pretrained encoders, HunyuanPortrait captures motion cues as control signals. These control signals are then injected into the portrait via the denoising UNet, producing not only spatial correctness but also temporal smoothness, transforming static pictures into living moments.
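To make the idea concrete, here is a minimal conceptual sketch of that decoupled pipeline. This is not Tencent's actual code; every function name and data structure below is a hypothetical placeholder standing in for the pretrained encoders and the denoising UNet described above.

```python
# Conceptual sketch only: identity comes from the portrait, motion comes
# from each driving frame, and the two are combined per frame.
# All names here are illustrative placeholders, not HunyuanPortrait's API.

def encode_identity(portrait):
    # Stand-in for a pretrained appearance encoder: keeps who the
    # person is (appearance), independent of any motion.
    return {"appearance": portrait["pixels"]}

def encode_motion(frame):
    # Stand-in for a pretrained motion encoder: keeps expression and
    # head pose while discarding the driving actor's identity.
    return {"expression": frame["expression"], "pose": frame["pose"]}

def denoise(identity, motion):
    # Stand-in for the denoising UNet: renders one output frame from
    # the portrait's appearance plus the injected motion control signals.
    return {"pixels": identity["appearance"], **motion}

def animate(portrait, driving_frames):
    identity = encode_identity(portrait)  # computed once per portrait
    # One output frame per driving frame. In the real model, temporal
    # smoothness comes from the architecture itself; omitted here.
    return [denoise(identity, encode_motion(f)) for f in driving_frames]

portrait = {"pixels": "still-image"}
driving = [
    {"expression": "smile", "pose": "turn-left"},
    {"expression": "blink", "pose": "front"},
]
video = animate(portrait, driving)
```

The key design point this toy loop illustrates is that the portrait's appearance is encoded once and reused, while only the motion signals change from frame to frame, which is what lets the output keep the original person's identity.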
Tencent AI has thrown down the gauntlet on temporal consistency and controllability compared to open-source competitors, but do these claims really hold water? The tech realm is watching, with independent testers waiting to check whether this AI indeed overturns the established rules.
Gone are the days of painstaking keyframing and expensive motion capture! Provide a character design and a driving performance, and the character gets animated. Tools like HunyuanPortrait could change the landscape of filmmaking and animation, giving smaller studios and indie filmmakers access to high-grade animation and great visuals that do not empty their pockets.