Deepfake of Ukraine’s Zelenskyy shows that IP laws governing such tech are urgently needed
Recently, a video featuring a deepfake of Ukrainian president Volodymyr Zelenskyy appeared on social media, in which he appeared to ask his troops to surrender in the ongoing war with Russia.
The video was quickly identified as a fake, partly due to its poor quality. Zelenskyy also swiftly posted a video of himself exposing the deepfake, and Facebook, YouTube and Twitter announced they had removed the video in question from their platforms.
As Wired puts it: “That short-lived saga could be the first weaponized use of deepfakes during an armed conflict.”
It likely won’t be the last, though. Deepfake technology, whereby artificial intelligence is used to create synthetic versions of real people from existing images and audio of them, is growing ever more sophisticated.
Meanwhile, the law isn’t keeping pace with the technology’s development, or with its disturbing implications.
According to lawyers Carlton Daniel and Ailin O’Flaherty, the UK, for instance, has no laws that govern deepfakes specifically, and “there is no ‘deepfake intellectual property right’ that could be invoked in a dispute”.
That means that if someone’s likeness is used to create a deepfake without their permission, they would have to rely on “a hotchpotch of rights that are neither sufficient nor adequate to protect the individual in this situation”, they write.
These could include laws related to defamation, data protection, harassment, and trademark infringement. But there are significant constraints involved. For instance, the person whose likeness was replicated with deepfake technology may not own the copyright to the images used to create it.
Even if a patchwork of existing legal protections does manage to help someone in this situation, the two lawyers note that “once a deepfake is on the internet, it is likely to be difficult to successfully find and eradicate all copies of the deepfake”.
So, what’s the way forward? Legal protections specific to deepfakes are urgently needed, as are public education efforts to make people more aware of the telltale signs of deepfakes and of their destabilising social impact when used for nefarious purposes.
Psychologist Sophie Nightingale also proposes “developing ethical guidelines for using powerful AI technology or building watermarks or embedded fingerprints into the deepfake algorithms so they’re easier to identify”. Navigating this new frontier in IP protection is critical to functional social equilibrium in an already destabilised world.
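To make the watermarking idea concrete, here is a minimal sketch in Python of one of the simplest possible schemes: embedding a short generator fingerprint in the least significant bits of an image’s pixels. Everything here, including the WATERMARK string and the function names, is an illustrative assumption rather than any real deepfake tool’s mechanism.

```python
import numpy as np

WATERMARK = "SYNTH-GEN-v1"  # hypothetical fingerprint a generator might embed

def embed_watermark(image: np.ndarray, mark: str = WATERMARK) -> np.ndarray:
    """Hide a text fingerprint in the least significant bits of an 8-bit image."""
    bits = np.unpackbits(np.frombuffer(mark.encode("ascii"), dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the watermark")
    # Clear the lowest bit of the first len(bits) pixels, then write our bits.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, length: int = len(WATERMARK)) -> str:
    """Read the fingerprint back out of the least significant bits."""
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii", errors="replace")

# Usage: mark a synthetic 64x64 grayscale "generated" image, then verify it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img)
assert extract_watermark(marked) == WATERMARK
```

A scheme this naive is fragile: re-compressing or resizing the image destroys the least significant bits, and anyone who knows the scheme can strip it. That fragility is precisely why proposals like Nightingale’s call for robust watermarks built into the generation algorithms themselves, rather than bolted on afterwards.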
PitchMark helps innovators deter idea theft, so that clients get the idea but don’t take it. Visit PitchMark.net and register for free as a PitchMark member today.