ARTIFICIAL INTELLIGENCE: Proof that we have been training AI fakes to stab us in the back

In the 1933 film Duck Soup, Chico Marx famously asks, "who ya gonna believe, me or your own eyes?" Fairly meaningless in the 30s, but today it's more relevant than ever. Let us explain. The ever-expanding capacity of computing power and algorithmic efficiency is producing some pretty wacky technology in the realm of computer vision. Deepfakes are one of the more terrifying outcomes. A deepfake is a fraudulent copy of an authentic image, video, or sound clip, manipulated to create an erroneous interpretation of the events captured by the authentic media. The 'deep' refers to the 'deep learning' capability of the artificially intelligent algorithm trained to manifest the most realistic version of the faked media. Real-world examples include former US president Barack Obama saying some outlandish things, Facebook founder Mark Zuckerberg admitting to the privacy failings of the social media platform while promoting an art installation, and Speaker of the US House of Representatives Nancy Pelosi made to look incompetent and unfit for office.

Videos like these aren't proof, of course, that deepfakes are going to destroy our notion of truth and evidence. But they do show that these concerns are not just theoretical, and that this technology, like any other, is slowly going to be adopted by malicious actors. Put another way: we usually assume that perception, the evidence of our own senses (sight, smell, taste, etc.), provides pretty strong justification of reality. If we see something with our own eyes, a photograph for instance, we tend to believe it. By comparison, second-hand reports of what others have sensed, which philosophers call "testimony", provide some justification, but often not as much as perception: a painting of a scene, say. In reality, we know our senses can deceive us, but that's less likely than other people (malicious actors) deceiving us.

What we saw last week took this to a whole new level. A suspected spy infiltrated several significant Washington-based political networks on the social network LinkedIn, using an AI-generated profile picture to fool existing members of those networks. "Katie Jones" was the alias used to connect with a number of policy experts, including a US senator's aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve. Although there's evidence to suggest that LinkedIn has been a hotbed for large-scale, low-risk espionage by the Chinese government, this instance is unique because a generative adversarial network (GAN), an AI method popularized by websites like ThisPersonDoesNotExist.com, was used to create the account's fake profile picture. A GAN pits two neural networks against each other: a generator that produces fake samples and a discriminator that tries to tell them apart from real ones, each side improving until the fakes become convincing.
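To make the "adversarial" part concrete, here is a deliberately tiny, hypothetical sketch of a GAN training loop, shrunk from images down to one-dimensional numbers (this is an illustration of the technique, not the model behind the "Katie Jones" photo): a one-parameter generator learns to mimic a "real" distribution while a logistic-regression discriminator tries to distinguish real samples from generated ones.

```python
import numpy as np

# Toy GAN on 1-D data: the "real" distribution is N(4, 1).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = w*z + b turns Gaussian noise into a fake sample.
g_w, g_b = 1.0, 0.0
# Discriminator d(x) = sigmoid(a*x + c) scores how "real" a sample looks.
d_a, d_c = 0.1, 0.0
lr = 0.01

for _ in range(3000):
    real = rng.normal(4.0, 1.0)   # a sample from the true distribution
    z = rng.normal()              # noise fed to the generator
    fake = g_w * z + g_b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_a * x + d_c)
        d_a += lr * (label - p) * x
        d_c += lr * (label - p)

    # Generator step: nudge the fake sample in whatever direction the
    # discriminator currently finds "more real".
    p = sigmoid(d_a * fake + d_c)
    grad_fake = (1.0 - p) * d_a
    g_w += lr * grad_fake * z
    g_b += lr * grad_fake

fakes = g_w * rng.normal(size=1000) + g_b
print(f"generated mean: {fakes.mean():.2f}")  # should drift toward 4
```

The same tug-of-war, scaled up to deep convolutional networks and millions of face photos, is what lets a GAN dream up a headshot of a person who has never existed.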

Here's the kicker: models like these are trained on the back of the mundane administrative tasks we all perform when using the internet day to day. Don't believe us? Take Google's human verification service, reCAPTCHA; more often than not you've completed one of these at some point. Its purpose goes beyond proving you are not a piece of software that can't recognise all the shopfronts in 9 images. For instance: typing out a blurry word helps Google Books' search function match real text in scanned books; rewriting skewed numbers helps Street View read house numbers for Google Maps; and selecting all the images containing a car helps Google's self-driving car company Waymo improve its algorithms to prevent accidents.
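The mechanism is simple to sketch. In this hypothetical example (the names and thresholds are ours, not Google's), several users transcribe the same blurry house number, and the majority answer becomes a training label for a digit-recognition model:

```python
from collections import Counter

def consensus_label(answers, min_agreement=0.5):
    """Return the majority transcription, or None if agreement is too low."""
    if not answers:
        return None
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes / len(answers) > min_agreement else None

# Each image has been shown as a challenge to several users.
captcha_batches = {
    "house_127.jpg": ["127", "127", "121", "127"],  # 3/4 agree -> label "127"
    "house_88.jpg": ["88", "83", "89"],             # no majority -> discarded
}

labelled_data = []
for image, answers in captcha_batches.items():
    label = consensus_label(answers)
    if label is not None:
        labelled_data.append((image, label))

print(labelled_data)  # [('house_127.jpg', '127')]
```

Every solved challenge quietly grows the labelled dataset, which is exactly the fuel computer-vision models need.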

The buck doesn't stop with Google either. Human-assisted AI is explicitly the modus operandi at Amazon's Mechanical Turk (MTurk) platform, which pays humans to complete tasks beyond the capability of current AI algorithms, such as highlighting key words in an email or transcribing difficult-to-read numbers from photographs. The name stems from an 18th-century "automaton" that appeared to play master-level chess by itself; in fact, it was a mechanical illusion, with a human operator hidden inside the cabinet working the arms. Clever, huh?!

Ever since the financial crisis of 2008, all activity within a regulated financial institution has had to meet the strict compliance and ethics standards enforced by the regulator of its jurisdiction. That a tool like LinkedIn, with over 500 million members, can be used by malicious actors to solicit insider information or to conduct corporate espionage should be of grave concern to all financial institutions, big and small. What's worse, neither the actors nor the AI behind these LinkedIn profiles can easily be traced and prosecuted for such illicit activity, especially when private or government institutions are able to launch thousands of fake profiles at a time.


Sources: Nancy Pelosi video (via YouTube), Spy AI (via Associated Press), Google Captcha (via Aalto Blogs), Amazon MTurk