ARTIFICIAL INTELLIGENCE: Follow-up -- Humanity fights deepfake AI algorithms with AI algorithms

Last week, we noted the terrifying reality of how artificial intelligence (AI) can now be used by malicious actors to conduct espionage, using sophisticated AI algorithms to trick their targets into perceiving the false as real.

Don't head for the hills just yet. What should be comforting is that the same degree of sophistication used to create deepfakes is being used to counter them. Take Adobe -- the company behind Photoshop, the advanced toolkit so often used to edit and manipulate images -- which is collaborating with researchers from UC Berkeley to develop a method for detecting edits to images in general. The work initially focuses on detecting when a face has been subtly manipulated using Photoshop's own Face Aware Liquify tool, which makes it relatively easy to modify the characteristics of someone's eyes, nose, mouth, or entire face. As with any neural network, training it to move beyond this initial use case will take time.
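For intuition only, here is a minimal sketch (in PyTorch) of the kind of binary classifier such an effort might train: a small convolutional network that takes a face crop and outputs the probability that it has been warped. The architecture, sizes, and names below are hypothetical illustrations, not the actual Adobe/UC Berkeley model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a binary "real vs. manipulated" face classifier.
# The real Adobe/UC Berkeley work is more sophisticated (it also localizes
# where the warp occurred); this only illustrates the basic idea.
class ManipulationDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.AdaptiveAvgPool2d(1),              # global average pool
        )
        self.classifier = nn.Linear(32, 1)        # one logit: manipulated?

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Training pairs would be (original, Liquify-warped) versions of the same
# face, so the network learns warping artefacts rather than identity.
model = ManipulationDetector()
face_crops = torch.randn(4, 3, 224, 224)          # stand-in for face crops
prob_manipulated = torch.sigmoid(model(face_crops))
print(prob_manipulated.shape)                     # torch.Size([4, 1])
```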

Decentralized public network Hedera Hashgraph has been a prominent promoter of how Distributed Ledger Technology (DLT) can play a vital role in establishing the origins of a piece of media (images, video, and sound). DLTs are very good at providing an immutable, distributed timestamping service, in which any material action (such as an edit) performed on a piece of media secured by the DLT is recorded via a timestamp. Such a timestamp could reveal an edit made by a malicious actor.
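As a rough illustration of the underlying idea, the sketch below chains timestamped edit records together by hash, so tampering with any past record breaks the chain. A real DLT such as Hedera replicates this log across many independent nodes; this single-process version, with hypothetical helper names, only demonstrates the data structure.

```python
import hashlib
import json
import time

# Illustrative sketch of DLT-style media provenance: each action on a media
# file becomes a timestamped entry, chained to the previous entry by hash.
def media_fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_entry(chain: list, media_bytes: bytes, action: str) -> None:
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "timestamp": time.time(),
        "action": action,                         # e.g. "created", "edited"
        "media_hash": media_fingerprint(media_bytes),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = media_fingerprint(
        json.dumps(entry, sort_keys=True).encode()
    )
    chain.append(entry)

chain: list = []
append_entry(chain, b"original pixels", "created")
append_entry(chain, b"edited pixels", "edited")   # the edit leaves a trace
print(chain[-1]["prev_hash"] == chain[0]["entry_hash"])  # True: chain intact
```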

Earlier this month, Microsoft removed its MS Celeb database of more than 10 million images of roughly 100,000 faces, reportedly used primarily by Chinese surveillance companies. Although initially intended for academic purposes, the database raised concerns that it was being used to train sophisticated AI algorithms for government surveillance, as well as for deepfake applications.

The U.S. House is currently developing the DEEPFAKES Accountability Act -- are you ready for the acronym: the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act -- which seeks to criminalize synthetic media that is not labelled as such. The Act would require creators of synthetic media imitating a real person to disclose that the media has been altered or generated, using "irremovable digital watermarks, as well as textual descriptions" embedded in the metadata.
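Purely to make the "textual descriptions" half of that requirement concrete, here is a hedged sketch using the Pillow imaging library to write and read a disclosure tag in a PNG's metadata. A plain metadata tag like this is trivially removable, so it does not address the irremovable-watermark half, and the tag name is made up for illustration.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative only: embed a machine-readable "this media is synthetic"
# disclosure as a PNG text chunk. The tag name is hypothetical, and a
# metadata tag is easily stripped -- nothing here is irremovable.
img = Image.new("RGB", (256, 256))        # stand-in for generated media

info = PngInfo()
info.add_text("SyntheticMediaDisclosure",
              "This image has been digitally generated or altered.")
img.save("synthetic.png", pnginfo=info)

# Reading the disclosure back out of the saved file:
reloaded = Image.open("synthetic.png")
print(reloaded.text["SyntheticMediaDisclosure"])
```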

Within a financial context, there is no doubt that cyber crime takes the lion's share of most financial institutions' security budgets -- in 2016, JPMorgan doubled its cybersecurity budget to $500 million, and Bank of America said it has an unlimited budget for combating cyber crime. As deepfake threats become more prominent for financial institutions, they should ensure not only that defences against such attacks form part of these budgets, but also that they actively invest in solutions like those above, accelerating the development of the neural networks needed to form an effective defence against deepfake attacks.


Sources: Adobe deepfake detection tool (via Engadget), deepfake detection (via CBS News), DEEPFAKES Accountability Act