Say you’re a movie studio director making the next big movie with some big-name celebs. Filming is in progress, and one of the actors dies in the most on-brand way possible. Everyone decides that the film must be finished to honor the actor’s legacy, but how can you film someone who is dead? This technology would let you create footage the VFX team can lay over a stand-in actor’s face and provide a better experience for your audience.
I’m sure there are other uses, but this one comes to mind as a very legitimate use case that could’ve benefited from the technology.
Hot take: don’t? They’re dead, leave them dead. Rewrite and reshoot if you really have to.
Sure, that’s an entirely valid option, but not the one the producing team and the deceased’s family opted for… and they had a much larger say in it than you and me combined.
We’ve already recreated dead actors and older actors whole cloth with VFX. Plus, it still seems like a niche use case for something that can be done by VFX artists who can also do way more.
Having done something before doesn’t mean they shouldn’t find ways to do it better, though. The “deepfake”-esque techniques can produce much higher-quality replicas. Not to mention, as resolution demands increase, it will get harder to stretch older assets and techniques to meet them.
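For anyone curious what “laying it over the stand-in’s face” actually involves, here’s a rough sketch of just the compositing step in Python with OpenCV. The deepfake model that produces the replacement face is assumed; generated_face stands in for its output, and a real pipeline would align by facial landmarks rather than a bounding box:

```python
# Rough sketch of the compositing step: paste a generated face onto
# the stand-in's frame. The deepfake model that makes generated_face
# is out of scope here; this only shows the overlay-and-blend part.
import cv2
import numpy as np

def composite_face(frame, generated_face):
    # Find the stand-in's face with OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frame  # no face found; leave the frame untouched

    x, y, w, h = faces[0]
    src = cv2.resize(generated_face, (w, h))

    # Poisson blending hides the seam between the replica and the plate.
    mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(src, frame, mask, center, cv2.NORMAL_CLONE)
```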
Another similar area is what LLMs are doing to/for developers. We already have developers, so why do we need AI to code? Well, LLMs can help synthesize the simpler code, freeing up devs to focus on more complicated problems. They can also democratize the ability to develop solutions for non-developers, just like deepfake tools could democratize content creation for less-skilled (or non-) VFX specialists, helping the industry create better content for everyone.
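To make the “synthesizing simpler code” bit concrete, here’s a toy sketch assuming the openai Python package and an OPENAI_API_KEY in your environment; the model name is just a placeholder for whatever you have access to:

```python
# Toy example of "synthesizing simpler code": ask an LLM for the
# boilerplate so a dev can spend their time elsewhere. Assumes the
# openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have
    messages=[{
        "role": "user",
        "content": "Write a Python function that validates an email "
                   "address with a regex, plus a couple of unit tests.",
    }],
)
print(resp.choices[0].message.content)  # review before shipping, obviously
```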
They can also democratize the ability to develop solutions for non-developers…
This is insane. If you don’t understand everything a piece of code is doing, publishing it is wildly reckless. You absolutely must know how to code to publish acceptable software.
This is so dystopian. Imagine spending your career honing your skill as an actor, dying, and then having a computer replace you with just a photograph as the source. How is that honoring an actor??
An actual, practical example is generating video for VR chats, like Apple has somewhat tried to do with its headset. Rather than using the cameras/sensors to generate and animate a 3D model based on you, it could do something more like this, albeit in 2D.
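As a rough sketch of that idea: grab face landmarks from a single webcam frame with MediaPipe, which is the kind of per-frame driving signal you’d hand to a 2D talking-head generator. The generator call itself (animate_portrait) is hypothetical here:

```python
# Sketch of the driving-signal half of a 2D VR-chat avatar: extract
# face landmarks from one webcam frame. A real system would do this
# per frame and feed the landmarks to a 2D talking-head generator.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise SystemExit("couldn't read a frame from the webcam")

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
    result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if result.multi_face_landmarks:
    landmarks = result.multi_face_landmarks[0].landmark
    # animated = animate_portrait(reference_photo, landmarks)  # hypothetical generator
    print(f"got {len(landmarks)} landmarks to drive the avatar with")
```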
Gotta crank up that dystopia meter.
This is slowly moving toward Content on Demand. Imagine being able to prompt your content app for a movie/series you want to watch, and it just generates it and streams it to you.