ARTIFICIAL INTELLIGENCE & PRIVACY: Privacy is not dead, it’s just complicated

“Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.” ― Edward Snowden

Cloud computing, artificial intelligence, and privacy rules are the three building blocks of a new digital world order that could see humanity reach new levels of sophistication across the social, political, and economic spectrum. The long road towards such a new world order, however, is not without its pitfalls. One of the most pressing is the severe tension between the right to privacy and the extensive data pooling on which efficient cloud computing and sophisticated artificial intelligence are based.

What we know is that over the last decade, data has increasingly become a commodity amongst both governments and large private institutions. So much so that they have employed some of the most sophisticated data mining practices to rapidly collect data about every aspect of our lives, in hopes of monopolizing insights into our activities, behavior, habits, and lifestyles. Today, the symptoms are everywhere: from data-driven innovations such as open banking transforming the customer experience, to tech companies launching inexpensive sensors and data-collection devices that feed a constant flow of big data to central servers, where it is analyzed by "brains" -- artificial intelligence algorithms -- primarily for data exploitation, identification and tracking, voice and facial recognition, prediction, and profiling.

No tech company has been as bullish on data-driven consumer insights as Amazon, whose cavalier treatment of user privacy has fuelled the need for privacy-centric regulation. Last week the tech giant rolled its native voice assistant Alexa into everything from eyeglasses (the Echo Frames, through which Alexa can be accessed) to the Echo Loop, a bulky mic'd-up ring that looks fresh off a James Bond film. Whilst Amazon sought to address user-privacy concerns with a new feature enabling Alexa users to delete recordings of their interactions with the voice assistant, many saw this as too little, too late. And we agree.

This tussle between privacy and technological progression provided the foundation for a panel discussion at this week’s Sibos conference in London titled: ‘Cloud, AI, and privacy: Building blocks of a universal collaborative platform?’. Samik Chandarana of JP Morgan explained “Banks have to care a little bit more and have been built on a bastion of trust for many years, but you do not want your payments data to not be secure. There is a constant challenge between keeping things private and leveraging AI. Data has a different ruling in different jurisdictions, we have to play the lowest common denominator game.” Autonomous Research’s very own Pooma Kimis agreed with Chandarana, adding that companies of all types “do not have the necessary awareness to make data privacy decisions”, concluding that tech partnerships and industry-led student programs are essential to curb this.

Lastly, let's touch on the notion of the "privacy paradox", which refers to the discrepancy between what users say about privacy ("I am very concerned for my personal privacy") and their actual behavior ("Free mint-chip ice-cream for connecting my Facebook account to your website? Of course!"). This extends to organizations, too: the information we share with our bank differs from what we share with our healthcare provider, and so on -- you are a different person to different groups. This introduces the notion of privacy control, which begins with user awareness. When individuals discover their data is being used in ways they did not expect, they often feel blindsided and angry. As Jaron Lanier has observed, "Whenever something is free it means that you are the commodity that is being sold."

And so, in this digital world, privacy -- at the conceptual level -- needs to be treated by regulators, banking & tech institutions, and governments as seriously as the right to freedom of expression. At a functional level, privacy must constantly evolve with the technology that seeks to use and/or exploit it — from the right of individuals to benefit from the commoditisation of their personal data, into a collective right of defence against AI traps, in the context of corporate (Cambridge Analytica), governmental (China’s Social Credit System), and individual exploitation (Social Blackmail).


ARTIFICIAL INTELLIGENCE: Amazon's new wearable edges us closer to a reality of emotionally manipulative financial institutions

In the past, we have touched on how a specific device that you use for conversational interface interactions will be locally better at understanding you -- rather than some giant squid-like monster AI hosted on Amazon Web Services. But what if the conversational interface device is the friendly avatar of such a terrifying AI monster, one that possesses the ability to emotionally manipulate its user? Well, Isaac Asimov eat your heart out: Amazon are reportedly building an Alexa-enabled wearable that is capable of recognizing human emotions. Using an array of microphones, the wrist-worn device can collect data on the wearer's vocal patterns and use machine learning to build models discerning between states of joy, anger, sorrow, sadness, fear, disgust, boredom, and stress. As we know, Amazon are not without their fair share of data privacy concerns, with Bloomberg recently disclosing that a global team of Amazon workers was reviewing audio clips from millions of Alexa devices in an effort to enhance the assistant's capability. Given this, we can't help but see this as a means to use knowledge of a wearer's emotions to recommend products or otherwise tailor responses.
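Amazon's actual models are not public, but the general pipeline described above -- audio frames in, vocal features out, features mapped to an emotional state -- can be sketched in a few lines. The feature choices, emotion centroids, and nearest-centroid rule below are all invented for illustration; a real system would use far richer acoustic features and a trained model.

```python
# Hypothetical sketch of voice-based emotion classification:
# audio frame -> simple vocal features -> nearest-centroid label.
# All numbers here are made up for illustration.
import math

def extract_features(samples):
    """Return (energy, zero-crossing rate) for one frame of audio samples."""
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (len(samples) - 1)
    return energy, zcr

# Invented per-emotion feature centroids (energy, zero-crossing rate).
CENTROIDS = {
    "joy":     (0.60, 0.30),
    "anger":   (0.90, 0.45),
    "sadness": (0.15, 0.10),
    "boredom": (0.10, 0.20),
}

def classify(samples):
    """Label a frame with the emotion whose centroid is nearest in feature space."""
    e, z = extract_features(samples)
    return min(CENTROIDS, key=lambda label: math.dist((e, z), CENTROIDS[label]))

# A loud, rapidly oscillating frame is high-energy with a high
# zero-crossing rate, so it lands nearest the "anger" centroid.
frame = [((-1) ** i) * 0.95 for i in range(100)]
print(classify(frame))  # anger
```

The point of the sketch is the shape of the system, not its accuracy: once a device can map your voice onto a label like "anger" or "sadness", tailoring product recommendations to that label is a one-line lookup away.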

Let's step back for context. Edge computing is the concept that there are lots of unique distributed smart devices scattered throughout our physical world, each needing to communicate with other humans and devices. Two layers of this are very familiar to us: (1) the phone and (2) the home. Apple has become a laggard in artificial intelligence -- behind Google on the phone, and behind Amazon and Google at home -- over the last several years. Further, when looking at core machine learning research, Facebook and Google lead the way. Google's assistant is the smartest and most adaptable, leveraging the company's expertise in search intent to divine meaning. Amazon's Alexa has a lead in physical presence, and thus customer development, as well as its attachment to voice commerce. Facebook is expert in vision and speech, owning the content channels for both (e.g., Instagram, Messenger). We also see (3) the car as a developing warzone for tech companies' data-hungry gadgets.

Looking back at financial services, it's hard to find a large financial technology provider -- save for maybe IBM -- that can compete for human attention or precision of conversation with the big tech firms (not to mention the Chinese techs). We do see many interesting symptoms, like KAI -- a conversational AI platform for the finance industry used by the likes of Wells Fargo, JP Morgan, and TD Bank -- but barely any compete for a relationship with a human being in their regular life. The US is fertile ground for this stuff, because a regulated moat protects financial data from the tech companies. That moat is likely to keep Big Tech from diving head first into full-service banking, but with the recent launch of Apple's Apple Card we are starting to see vulnerabilities in it. So how long can we rely on the narrative so eloquently put by Chris Skinner: "the reason Amazon won't get into full service banking is because dealing with technology is very different to dealing with money; furthermore, dealing with money through technology is very different to dealing with technology through money"? And how would you feel about your bank knowing when you are at your most vulnerable?


Source: Bloomberg Article, KAI Platform (via Kasisto)

BLOCKCHAIN: Do Criminals or Bankers want Crypto-Privacy?

Source: ChainLink

Ask any self-respecting financial incumbent about why public blockchains aren't good enough for enterprise use, and you get roughly the following response on why private chains (e.g., Ripple, Chain, R3, Hyperledger/IBM) are preferred. First, public blockchains don't have privacy, and large financial clients (e.g., hedge funds that do not want to reveal their trading positions) require it by definition. Second, interoperability is a problem -- financial institutions already have large enterprise technology vendors that power their complex workflows. Those workflows are the lifeblood of the middle office. One cannot just "put data on the blockchain" and disconnect the internal glue of the institution. Third, scale and speed are a problem. And last, banks are in the business of being Trusted Counterparties, not some hacker scheme like Bitcoin.

And yet when it comes to those exact same characteristics on public blockchains, banks assume that crypto-privacy exists for criminal activity. At a recent ICO panel, we discussed whether the frequency of gray market activity differs between public chains and banks. The CEO of blockchain compliance company Coinfirm, formerly head of global AML for Royal Bank of Scotland in Europe, suggested that the rates of illegal activity are similar inside crypto and traditional finance. The only exceptions were Zcash and Monero, which are essentially impenetrable to crypto-Regtech firms.

Well, crypto-privacy is about to get another big boost. The Dandelion project could make Bitcoin transactions more anonymous. And the Metropolis upgrade for Ethereum will allow developers to leverage zero-knowledge proofs, the cryptographic tool that makes Zcash tick. Crypto-scalability is also around the corner, with several projects -- Lightning, Plasma, Raiden -- that could make Ethereum competitive with the Visa and Mastercard networks within a few years. On interoperability, consider Chainlink linking external data through APIs to blockchains and raising $32 million, or TenX converting any crypto-asset into purchasing power in the real economy, or the decentralized crypto-transactions powered by "atomic swaps". Privacy and scalability are pretty good when everything happens in a global interconnected decentralized mesh. Which leaves the last point -- who is the Trusted Counterparty? Not banks.