ARTIFICIAL INTELLIGENCE & PRIVACY: Privacy is not dead; it’s just complicated

“Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.” ― Edward Snowden

Cloud computing, artificial intelligence, and privacy rules are the three building blocks of a new digital world order that would see humanity reach new levels of sophistication across the social, political, and economic spectrum. The long road towards such a new world order, however, is not without its pitfalls. One of the most pressing is the severe tension between the right to privacy and the extensive data pooling on which efficient cloud computing and sophisticated artificial intelligence are built.

What we know is that over the last decade, data has become more and more of a commodity amongst both governments and large private institutions. So much so that they have employed some of the most sophisticated data-mining practices to rapidly collect data about every aspect of our lives, in the hope of monopolizing insights into our activities, behavior, habits, and lifestyles. Today, the symptoms of this are everywhere, from data-driven innovations such as open banking transforming the customer experience, to tech companies launching inexpensive sensors and data-collection devices that feed a constant flow of big data to central servers, where it is analyzed by artificial intelligence “brains” primarily for data exploitation, identification and tracking, voice and facial recognition, prediction, and profiling.

No tech company has been as bullish on data-driven consumer insights as Amazon, whose cavalier treatment of user privacy has fuelled the push for privacy-centric regulation. Last week the tech giant rolled out Alexa, its native voice assistant, across everything from eyeglasses (the Echo Frames, through which Alexa can be accessed) to the Echo Loop, a bulky mic’d-up ring that looks fresh off a James Bond film. While Amazon sought to address user-privacy concerns with a new feature that lets Alexa users delete recordings of their interactions with the voice assistant, many saw this as too little, too late. And we agree.

This tussle between privacy and technological progress provided the foundation for a panel discussion at this week’s Sibos conference in London, titled ‘Cloud, AI, and privacy: Building blocks of a universal collaborative platform?’. Samik Chandarana of JP Morgan explained: “Banks have to care a little bit more and have been built on a bastion of trust for many years, but you do not want your payments data to not be secure. There is a constant challenge between keeping things private and leveraging AI. Data has a different ruling in different jurisdictions, we have to play the lowest common denominator game.” Autonomous Research’s very own Pooma Kimis agreed with Chandarana, adding that companies of all types “do not have the necessary awareness to make data privacy decisions”, and concluding that tech partnerships and industry-led student programs are essential to close this gap.

Lastly, let’s touch on the notion of the “privacy paradox”, which refers to the discrepancy between the concept of privacy reflected in what users say (“I am very concerned for my personal privacy”) and their actual behavior (“Free mint-chip ice cream for connecting my Facebook account to your website? Of course!”). This extends to organizations, too: the type of information we share with our bank differs from what we share with our healthcare provider; we are, in effect, different people to different groups. This introduces the notion of privacy control, which begins with user awareness. When individuals discover their data is being used in ways they did not expect, they often feel blindsided and get angry. As Jaron Lanier has observed, “Whenever something is free it means that you are the commodity that is being sold.”

And so, in this digital world, privacy at the conceptual level needs to be treated by regulators, banking and tech institutions, and governments as seriously as the right to freedom of expression. At a functional level, privacy must constantly evolve with the technology that seeks to use or exploit it: from the right of individuals to benefit from the commoditisation of their personal data to a collective right of defence against AI-enabled exploitation, whether corporate (Cambridge Analytica), governmental (China’s Social Credit System), or individual (social blackmail).
