You can retarget lip sync on a video using a reference video, animate a face, or retarget the view of a photo.
I ran my experiments on this face-donor man some time ago. In a previous thread I made a music video of him; now, with this technology, he sings more naturally.
This is the old video of him. I animated him with CrazyTalk back then.
Now, with this tool, I have reanimated the old footage of him.
Using this reference video.
With the AI tools that have come out recently, I reworked his face, because in the first test the tools were still in their early stages.
I opened his eyes.
Moved the face to a frontal view.
And with the recent AI tools (Fooocus) I gave him a nice hairstyle, a nice shirt, and a nice ear plug piercing so he looks a little more like a rocker when he sings.
And this is the new video of him. I like the result very much. What do you think?
The previous post was made using a photo as the source, but you can also use videos as the source of your new deepfake videos. There is also retargeting of the lips in the source video, to make the person speak or sing more naturally.
This is my test with Elon Musk singing the Numa Numa song. This is the original video used as the source. I added extra footage as a rewound copy of the video to fit the song length.