ARTIFICIAL INTELLIGENCE: Synthesia proves that not all deep fakes are malicious, but for those that are, is blockchain the answer to spotting them?

Last week we touched on how convolutional neural networks can be easily duped using nothing more than a computer-generated "patch" applied to a piece of cardboard (here). This week we want to keep the theme of neural networks alive, this time addressing the fascinating topic of deep fakes. We have discussed this before (here), touching on how hyper-realistic media, such as images and videos, can be faked by a model in which one algorithm generates images and another accepts or rejects them as sufficiently realistic, iterating over repeated evolutionary turns at the problem. These models are known as generative adversarial networks (GANs).

Initially GANs were used in jest to make celebrities and politicians say and do things they never did (here); over time, however, their sophistication has prompted more malicious use cases. One reported example comes from China, where GANs have been used to manipulate satellite images of earth, offering strategic insight into how the Chinese landscape could be altered to confuse the image-processing capabilities of adversarial governments' GANs. Think about it: much as in our cardboard-patch example, a GAN can be fooled into believing that a bridge crosses an important river at a specific point. From a military perspective, this could expose human lives to unforeseen risk; the same holds for open-source mapping data used by software to navigate autonomous vehicles across a landscape. Such malicious use cases have drawn the concern of government entities such as the US Office of the Director of National Intelligence, which explicitly noted deep fakes in its latest Threat Assessment Report (here). China has gone one step further, recently announcing a draft amendment to its Civil Code Personality Rights that would outright ban deep fake AI face-swapping techniques.
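The generate-then-judge loop described above can be sketched with a toy one-dimensional "GAN" in plain Python. This is a hypothetical illustration of the adversarial training dynamic only, not any production deep fake model: a generator maps noise to samples, a discriminator learns to tell them apart from "real" data, and each round of gradient updates nudges the generator's output toward the real distribution.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "real" data the generator must learn to mimic: samples from N(4, 1.25).
def real_batch(n):
    return [random.gauss(4.0, 1.25) for _ in range(n)]

a, b = 1.0, 0.0   # generator: noise z -> a*z + b
w, c = 0.0, 0.0   # discriminator: logistic regression on w*x + c
lr, n = 0.05, 64

for step in range(2000):
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    fake = [a * zi + b for zi in z]
    real = real_batch(n)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (one gradient step on binary cross-entropy).
    gw = gc = 0.0
    for x in real:                        # label 1
        g = sigmoid(w * x + c) - 1.0
        gw += g * x; gc += g
    for x in fake:                        # label 0
        g = sigmoid(w * x + c) - 0.0
        gw += g * x; gc += g
    w -= lr * gw / (2 * n)
    c -= lr * gc / (2 * n)

    # Generator update: push D(fake) toward 1, backpropagating through D.
    ga = gb = 0.0
    for zi in z:
        f = a * zi + b
        g = (sigmoid(w * f + c) - 1.0) * w
        ga += g * zi; gb += g
    a -= lr * ga / n
    b -= lr * gb / n

# After training, generated samples drift toward the real data mean (~4).
mean_fake = sum(a * random.gauss(0.0, 1.0) + b for _ in range(10000)) / 10000
print(f"generated mean ~ {mean_fake:.1f}")
```

In a real deep fake system the generator and discriminator are deep convolutional networks and the "samples" are images or video frames, but the adversarial loop is the same shape as this sketch.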
Currently, GANs dedicated to counteracting deep fakes are purely reactive to those dedicated to creating them, but we are seeing novel solutions harnessing blockchain technology from the likes of Amber, which protects the integrity of image and video data via "fingerprinting": a sequenced cryptographic technique applied to the bits of data associated with each frame or image, flagging any manipulation of the original file.
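The idea of sequenced per-frame fingerprinting can be illustrated with a generic hash chain in a few lines of Python. This is a hypothetical sketch of the general technique, not Amber's actual implementation: each frame's bytes are hashed together with the previous fingerprint, so editing (or reordering) any single frame invalidates that frame's fingerprint and every one after it.

```python
import hashlib

def fingerprint_frames(frames):
    """Chain SHA-256 digests across frames: each fingerprint depends on
    the current frame's bytes AND all preceding fingerprints."""
    chained = b""
    fingerprints = []
    for frame in frames:
        chained = hashlib.sha256(chained + frame).digest()
        fingerprints.append(chained.hex())
    return fingerprints

# Illustrative stand-ins for raw frame data.
original = [b"frame-0", b"frame-1", b"frame-2"]
tampered = [b"frame-0", b"frame-1-edited", b"frame-2"]

orig_fp = fingerprint_frames(original)
tamp_fp = fingerprint_frames(tampered)

# Frame 0 is untouched, so its fingerprint matches; the edit at frame 1
# breaks that fingerprint and, via chaining, every later one too.
print([o == t for o, t in zip(orig_fp, tamp_fp)])  # [True, False, False]
```

Anchoring such fingerprints to a blockchain is what would make the record tamper-evident after the fact: a verifier can re-hash the file and compare against the immutably published chain of digests.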

But let's end on a good note, shall we? An AI-driven video production company called Synthesia used GANs to "internationalize" a message delivered by football icon David Beckham to raise awareness for the Malaria Must Die initiative. Synthesia's GANs were trained on Beckham's face so that nine different malaria survivors could deliver their message through his avatar, each in their mother tongue. The resulting campaign has garnered over 400 million impressions globally and offers a glimpse into the evolution of digital video marketing, corporate communications, and advertising that leverages GANs to reduce production costs and improve engagement.


Source: (Deep Video Portraits Report), AmberVideo, Malaria Must Die (via YouTube)