BIG TECH & CYBER SECURITY: Every cloud has a surveillance lining

Let's be honest: before the turn of the 21st century, if a stranger had asked to keep our photo in exchange for a funny caricature, or a supermarket had asked to put a microphone in our homes, or a train company had asked for our whereabouts in the station, or a physical education teacher had asked us for our step count and sleep data every day, we would have said no. Nowadays, we upload multiple photos to Russian-based FaceApp, buy Amazon Alexas, use the London Underground's free WiFi, and track our activity on Garmin watches. And we still manage to sleep well at night...well, some of us at least. We recently learnt that both Amazon and Google admitted to having employees listen to recordings from their smart speakers, whilst Facebook argues that its users "have no expectation of privacy" on their posts. These big US internet companies — Amazon, Apple, Facebook, Google and Microsoft — have all, to some degree, failed to protect their users' data and establish a base level of security. Controversies over how Facebook — which received a $5B fine from the Federal Trade Commission — shared user data with developers such as Cambridge Analytica and with foreign governments earn it the lowest marks on security and data privacy, while Apple's adoption of considerably stronger policies than its more data-hungry competitors might earn it the highest marks among the five. Other examples worth noting can be found in a previous newsletter entry here.

There are, however, more sophisticated means of retrieving user data without the target being aware that it is happening. One of these was revealed by the Royal Melbourne Institute of Technology, which used various native sensors found in smartphones — such as the accelerometer — to predict the personality traits of their users. More terrifying still, a recent story published in the Financial Times noted how most internet companies are equally at risk from a mobile phone spyware suite called Pegasus — produced and sold by Israel-based "cyber warfare" vendor the NSO Group, and the same spyware implicated in a breach of WhatsApp earlier this year. Private agencies and governments have long used Pegasus to harvest private data — such as passwords, contact information, calendar events, text messages, and live calls — from the mobile phones of targeted individuals.

Shockingly, the story focuses on the spyware's recent evolution to infiltrate the data residing in the cloud services used by the targeted individual. Such data can contain a full history of location data, archived messages and photos, emails, sensitive passwords, and financial records. The way it works is rather smart: it allegedly copies the authentication keys used by services such as iCloud, Google Drive, Facebook, Box, and Dropbox, among others, from a compromised mobile phone. These keys are what the services use to verify an individual's identity and grant access to the data on the respective cloud server. Put simply, the keys allow an attacker to impersonate the target's phone in order to reach the data stored in the cloud, bypassing 2-factor authentication and login notifications. Notably, the NSO Group denies having spyware that can hack such cloud applications, services, or infrastructure.
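To make the mechanism concrete, here is a minimal toy sketch (not Pegasus itself, and not any real cloud API — all names are hypothetical) of why a copied session token sidesteps passwords, 2FA, and login notifications entirely:

```python
import time

class CloudService:
    """Toy server that trusts any request carrying a valid session token."""

    def __init__(self):
        self.sessions = {}  # token -> account name

    def login(self, user, password, otp):
        # The full login path: password plus a one-time code (2FA).
        token = f"token-{user}-{time.time_ns()}"
        self.sessions[token] = user
        return token

    def fetch_data(self, token):
        # The token path: no password, no 2FA, no login notification.
        user = self.sessions.get(token)
        if user is None:
            raise PermissionError("invalid token")
        return f"private data for {user}"

service = CloudService()
victim_token = service.login("alice", "hunter2", otp="123456")

# An attacker who lifts the token from a compromised phone replays it verbatim
# and is indistinguishable from the victim's own device:
stolen = victim_token
print(service.fetch_data(stolen))
```

The design flaw being exploited is that a bearer token, once issued, *is* the identity — which is why token theft from a compromised device defeats even well-configured 2FA.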


As noted in the first entry above, the world is shifting to a more digital and decentralized form of finance and commerce, whether it be robo-advisers like Wealthfront or Betterment managing your wealth, or Robinhood's mobile app executing your stock trades. The truth is that most of this data flows through the cloud services of internet companies, and so long as hacking tools like Pegasus exist, coupled with our willingness to brazenly share our data with attention platforms, such sensitive data is subject to surveillance. But don't delete your Facebook profile just yet, as "good tech companies" — such as CrowdStrike, Cylance, and SentinelOne — are coming to our aid to fight and protect us against such cloud-native surveillance tech. Earlier this month, shares in CrowdStrike — the cyber security company that uncovered Russian hackers inside the servers of the US Democratic National Committee — jumped 97% in their trading debut on the NASDAQ, valuing the California-based cyber security group at $6.8 billion. Since then, quarterly reports indicate revenues have risen 103% year-on-year to $96.1 million, primarily due to growing demand for its expertise in combating malicious cyber hacks. In any case, stay vigilant, as what we deem most crucial to our privacy in everyday life is exactly what surveillance tech seeks to exploit (read more here).


Source: Tom Gauld (New Scientist), CitizenLab (Hide and Seek Report), Financial Times (NSO Group Technologies), Pew Research Center (Security & Surveillance Report 2015), Pew Research Center (Americans & Cybersecurity)

ARTIFICIAL INTELLIGENCE: Follow up -- Humanity fights deepfake AI algorithms with AI algorithms

Last week, we noted the terrifying reality of how artificial intelligence (AI) can now be used by malicious actors to conduct espionage, using sophisticated AI algorithms to trick their targets into perceiving the false as real.

Don't run for the hills just yet. What should be comforting is that the same degree of sophistication used to create deepfakes is being used to counter them. Take Adobe — a company renowned for Photoshop, its advanced image-editing and manipulation toolkit — which is collaborating with students from UC Berkeley to develop a method for detecting edits to images in general. The initial focus is on detecting when a face has been subtly manipulated with Photoshop's own Face Aware Liquify tool, which makes it relatively easy to modify the characteristics of someone's eyes, nose, mouth, or entire face. As with any neural network, training the detector to move beyond this initial use case will take time.

Decentralized public network Hedera Hashgraph has been a prominent promoter of how Distributed Ledger Technology (DLT) can play a vital role in establishing the origins of a piece of media (images, video, and sound). DLTs are really good at providing an immutable, distributed timestamping service, in which any material action (an edit) performed on a piece of media secured by the DLT is recorded via a timestamp. Such a timestamp could reveal an edit made by a malicious actor.
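The timestamping idea above can be illustrated with a minimal hash-chain sketch (this is the general DLT concept, not Hedera's actual API — class and field names are illustrative): every edit is hashed and linked to the previous entry, so the edit history is tamper-evident.

```python
import hashlib
import json
import time

def entry_hash(entry):
    """Deterministic hash of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class MediaLedger:
    """Toy hash-chained log of edits to one piece of media."""

    def __init__(self, media_bytes):
        genesis = {"ts": time.time(), "prev": None, "note": "original",
                   "media": hashlib.sha256(media_bytes).hexdigest()}
        self.chain = [genesis]

    def record_edit(self, media_bytes, note):
        # Each entry commits to the media's hash AND the previous entry.
        entry = {"ts": time.time(),
                 "prev": entry_hash(self.chain[-1]),
                 "note": note,
                 "media": hashlib.sha256(media_bytes).hexdigest()}
        self.chain.append(entry)

    def verify(self):
        # Any retroactive tampering breaks the prev-hash links.
        return all(self.chain[i]["prev"] == entry_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = MediaLedger(b"original photo bytes")
ledger.record_edit(b"photo with face subtly warped", "liquify edit")
print(ledger.verify())        # history is intact
ledger.chain[0]["ts"] = 0     # try to rewrite history...
print(ledger.verify())        # ...and the chain exposes the tampering
```

On a real DLT the chain is replicated across many independent nodes, which is what makes the timestamps immutable rather than merely tamper-evident.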

Earlier this month, Microsoft removed its MS Celeb database of more than 10 million images of 100,000 faces. Initially intended for academic purposes, the concern was that the database was being used — primarily by Chinese surveillance companies — to train sophisticated AI algorithms for government surveillance, as well as deepfake applications.

The U.S. House is currently developing the DEEPFAKES Accountability Act — are you ready for the acronym: Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act — which seeks to criminalize synthetic media that fails to meet its requirement of being branded as such. The Act would require creators of synthetic media imitating a real person to disclose that the media is altered or generated, using "irremovable digital watermarks, as well as textual descriptions" embedded in the metadata.

Within a financial context, there is no doubt that cyber crime takes the lion's share of most financial institutions' security budgets — in 2016, JPMorgan doubled its cybersecurity budget to $500 million, and Bank of America said it has an unlimited budget for combating cyber crime. As the threat of deepfakes becomes more prominent for financial institutions, they should ensure not only that defending against such attacks forms part of these budgets, but also that they are actively investing in the solutions above, in order to accelerate the development of the neural networks needed to form an effective defence against deepfake attacks.


Source: Adobe Deepfake detection tool (via Engadget), Deepfake detection (via CBS News), DEEPFAKES Accountability Act

ARTIFICIAL INTELLIGENCE: Proof that we have been training AI fakes to stab us in the back

In the 1933 film Duck Soup, actor Chico Marx famously asks, "who ya gonna believe, me or your own eyes?" Fairly meaningless in the '30s, but today it's more relevant than ever. Let us explain. We know how the ever-expanding capacities of computing power and algorithmic efficiency are leading to some pretty wacky technology in the realm of computer vision. Deepfakes are one of the more terrifying outcomes. A deepfake can be described as a fraudulent copy of an authentic image, video, or sound clip, manipulated to create an erroneous interpretation of the events captured by the authentic media. The word 'deep' refers to the 'deep learning' capability of the artificially intelligent algorithm trained to manifest the most realistic version of the faked media. Real-world examples include: former US president Barack Obama saying some outlandish things, Facebook founder Mark Zuckerberg admitting to the privacy failings of the social media platform and promoting an art installation, and Speaker of the US House of Representatives Nancy Pelosi made to look incompetent and unfit for office.

Videos like these aren't proof, of course, that deepfakes are going to destroy our notion of truth and evidence. But they do show that these concerns are not just theoretical, and that this technology — like any other — is slowly going to be adopted by malicious actors. Put another way, we usually think that perception — the evidence of your senses (sight, smell, taste, etc.) — provides pretty strong justification of reality. If something is seen with our own eyes, we normally tend to believe it, e.g. a photograph. By comparison, third-party claims about the senses — which philosophers call "testimony" — provide some justification, but sometimes not quite as much as perception, e.g. a painting of a scene. In reality, we know our senses can be deceptive, but that's less likely than other people (malicious actors) deceiving us.

What we saw last week took this to a whole new level. A potential spy infiltrated some significant Washington-based political networks on the social network LinkedIn, using an AI-generated profile picture to fool existing members of those networks. Katie Jones was the alias used to connect with a number of policy experts, including a US senator's aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve. Although there is evidence to suggest that LinkedIn has been a hotbed for large-scale, low-risk espionage by the Chinese government, this instance is unique because a generative adversarial network (GAN) — the AI method popularized by well-known face-generating websites — was used to create the account's fake picture.
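For intuition, the adversarial objective behind a GAN can be sketched in a few lines (a deliberately toy setup — single-parameter "networks" on 1-D data, nothing like a real face generator): a generator G turns noise into fake samples, a discriminator D scores samples as real or fake, and each is trained to beat the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy one-parameter "networks": G shifts noise, D applies a logistic score.
g_shift = 0.0                  # generator parameter
d_weight, d_bias = 1.0, 0.0    # discriminator parameters

def G(z):
    """Generator: noise -> fake sample."""
    return z + g_shift

def D(x):
    """Discriminator: sample -> probability it is real."""
    return sigmoid(d_weight * x + d_bias)

real = rng.normal(4.0, 1.0, size=256)  # "real" data lives around 4.0
z = rng.normal(0.0, 1.0, size=256)
fake = G(z)

# Discriminator wants D(real) -> 1 and D(fake) -> 0:
d_loss = -np.mean(np.log(D(real)) + np.log(1.0 - D(fake)))
# Generator wants D(fake) -> 1, i.e. fakes the discriminator accepts as real:
g_loss = -np.mean(np.log(D(fake)))
print(float(d_loss), float(g_loss))
```

Training alternates gradient steps on these two losses; at convergence the generator's fakes are statistically indistinguishable from the real data, which is exactly what makes GAN-generated profile photos so hard to spot.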

Here's the kicker: these GANs are trained by the mundane administrative tasks we all participate in when using the internet day-to-day. Don't believe us? Take Google's human verification service "Captcha" — more often than not, you've completed one of these at some point. Their purpose goes beyond proving you are not a piece of software unable to recognise all the shopfronts in nine images. For instance: being asked to type out a blurry word could help match real text in uploaded books for Google Books' search function, rewriting skewed numbers could help train Google Street View to recognise house numbers for Google Maps, and selecting all the images containing a car could help Google's self-driving car company Waymo improve its algorithms to prevent accidents.

The buck doesn't stop with Google either: human-assisted AI is explicitly the modus operandi at Amazon's Mechanical Turk (MTurk) platform, which rewards humans for assisting with tasks beyond the capability of certain AI algorithms, such as highlighting key words in an email, or rewriting difficult-to-read numbers from photographs. The name Mechanical Turk stems from an 18th-century "automaton" that appeared to play master-level chess; in fact, it was a mechanical illusion, with a human hidden inside the machine operating its arms. Clever, huh?!

Ever since the financial crisis of 2008, all activity within a regulated financial institution must meet the strict compliance and ethics standards enforced by the regulator of that jurisdiction. That a tool like LinkedIn, with over 500 million members, can be used by malicious actors to solicit insider information, or as a tool for corporate espionage, should be of grave concern to all financial institutions big and small. What's worse is that neither the actors nor the AI behind these LinkedIn profiles can be traced and prosecuted for such illicit activity, especially when private or government institutions are able to launch thousands of them at a time.


Source: Nancy Pelosi video (via Youtube), Spy AI (via Associated Press), Google Captcha (via Aalto Blogs), Amazon MTurk

CRYPTO: Why Bitcoin Falls Down

Remember the mantra: tech innovations swing between the extremes of meme and electricity. Memes are human sentiment, the animal spirits of the market shooting up and crashing down — Yahoo message boards, Reddit posts, Telegram communities, excited media articles. Electricity, however, is real. Its discovery and taming led to an industrial revolution, light and progress. Today's laundromats might be boring and tame, but imagine the first robotic clothes washer animated by electric powers unseen. All tech innovations have a bit of each. Crypto is enjoying its meme moment. Why is Bitcoin going down, after it went up? Let's talk about the factors that are adding up to the current sentiment.

(1) The first is definitional -- Bitcoin (and all crypto) is a volatile early stage technology asset and these massive run-ups and falls are a feature of the asset class, not an exception.

(2) The second is that data points about hacks and Ponzi schemes have been dominating the news. From Tether (which may be trying to print billions in unbacked currency), to Bitconnect (a likely Ponzi scheme whose proprietary coin fell from a $2.6 billion market cap), to the Coincheck hack ($500 million stolen from the Japanese exchange), to Arise Bank (a $600 million ICO shut down by the SEC), billions of USD-equivalent value keep evaporating from the crypto economy due to bad actors. These issues are not new in the space, but now there is mainstream attention with nearly a trillion at stake, and the regulators are starting enforcement actions.

(3) The futures market that so many crypto natives were excited about allows professional investors to actually take a bearish view. Oops. This sentiment should reflect back into the price mechanically.

(4) Decentralized systems will supposedly erode the control of centralized systems. So we should not be surprised when centralized systems fight back against being co-opted for this purpose — from Facebook's Bitcoin ad ban and the regulatory crackdown on fake bots, to the refusal of credit card issuers and banks to keep financing crypto purchases, to asset managers like Vanguard announcing they won't create vehicles for the asset class.
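Point (3) above can be sketched with back-of-the-envelope arithmetic (the prices below are illustrative, not market data): a cash-settled futures short profits when the price falls, which is how bearish sentiment feeds mechanically into the market.

```python
def short_futures_pnl(entry_price, exit_price, contracts, contract_size=1.0):
    """P&L of a short futures position: gain when the price falls,
    lose when it rises."""
    return (entry_price - exit_price) * contracts * contract_size

# Short 2 contracts at $17,000; Bitcoin settles at $11,000:
print(short_futures_pnl(17_000, 11_000, contracts=2))   # 12000.0 profit

# The same position loses if the price rallies instead:
print(short_futures_pnl(17_000, 19_000, contracts=2))   # -4000.0
```

Before listed futures, the only easy trade was long; once shorting is cheap, pessimists can finally push on the price too.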

None of this should be new information. If in 2002 you had asked the music labels whether they liked Napster, not only would they have answered with a resounding NO, but they would have talked about Digital Rights Management and all their plans to fight back. Welcome to creating product-market fit.

CRYPTO: Hackery Hacker Hacks


So how likely are you to get hacked and lose all your magic crypto beans? If we believe this list, over 20 exchanges have been hacked. In total, there are probably 125-250 exchanges (data point 12), which would suggest that over a 4-year period, 5-10% of all exchanges have been compromised in some way. We also looked at the Bitcoin and Ethereum hacks that are in the public domain and added up the USD impact as of the time of each hack. We then took that USD value as a percentage of the outstanding Bitcoin and Ethereum market capitalizations at the time, to arrive at the percentage of funds hacked per year.
2014 was Mt Gox and 2016 was the DAO, hence the big outlier numbers in those years. 2017 saw more regular, smaller events consistently tied to ICOs. Outside of programming errors, exchange server hacks, and attacks on wallets, human behavioural hacking increased — think ransomware or phishing on social media. If you're interested in more granular data along these lines, see Chainalysis. The good news is that as the overall market cap grew, these losses became smaller as a percentage of the whole. Going forward, we would expect 50 to 300 bps of the market capitalization of cryptocurrency to be at risk of loss from hacking or other cybersecurity failures. Alternatively, it looks like crypto hacking is a $200 million annual revenue industry.
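The arithmetic behind that basis-point framing is simple enough to sketch (the market-cap figure below is illustrative, not a quote):

```python
def loss_in_bps(losses_usd, market_cap_usd):
    """Express losses as basis points (1 bp = 0.01%) of total market cap."""
    return losses_usd / market_cap_usd * 10_000

# E.g. $200 million of annual hacking losses against a $400 billion market cap:
print(round(loss_in_bps(200e6, 400e9), 1))   # 5.0 bps realized per year

# The 50-300 bps range is the share of market cap considered *at risk*,
# not the realized annual loss -- e.g. $2 billion at risk of $400 billion:
print(round(loss_in_bps(2e9, 400e9), 1))     # 50.0 bps
```

The gap between the two numbers is the point: realized losses are a small slice of the whole, but the exposed surface is an order of magnitude larger.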


Can decentralized exchanges built into software, liberated from centralized servers to be their full capitalist selves, solve this problem? See Airswap and 0x. In theory, decentralized exchanges and atomic swaps should be more secure than centralized exchanges, which hold the keys for millions of user accounts on their servers. Decentralized exchanges are also much harder to shut down, as there should be no particular centralized counterparty once a project is off the ground. Think BitTorrent, rather than Napster: Napster was shut down, BitTorrent has spread all over the web and cannot be stamped out. But decentralized exchanges face the same issue as the DAO — bugs in the smart contract code itself, rather than in the security infrastructure, could let a smart hacker find a way to trick the contract. Decentralized exchanges may also not be as liquid as centralized ones, something that is still being worked out.
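The hashlock trick at the heart of atomic swaps can be sketched in a few lines (a heavily simplified toy — real hashed timelock contracts also carry refund timelocks and run on-chain; all names here are illustrative): both sides lock funds under the same hash, so revealing the secret to claim one side automatically hands the counterparty the key to claim the other.

```python
import hashlib

def h(preimage: bytes) -> str:
    return hashlib.sha256(preimage).hexdigest()

class HashlockContract:
    """Toy contract: funds claimable only by revealing the hash's preimage."""

    def __init__(self, amount, hashlock):
        self.amount = amount
        self.hashlock = hashlock
        self.claimed = False

    def claim(self, preimage: bytes):
        if h(preimage) != self.hashlock:
            raise ValueError("wrong secret")
        self.claimed = True
        return self.amount

# Alice picks a secret and locks her BTC; Bob locks his ETH under the SAME hash.
secret = b"alice random secret"
btc_side = HashlockContract(amount=1.0, hashlock=h(secret))
eth_side = HashlockContract(amount=30.0, hashlock=h(secret))

# Alice claims the ETH by revealing the secret...
print(eth_side.claim(secret))
# ...which exposes the preimage publicly, letting Bob claim the BTC with it:
print(btc_side.claim(secret))
# Neither side ever handed keys to a central exchange.
```

This is why the trust moves from the exchange operator into the contract code — and also why a bug in that code, DAO-style, becomes the new single point of failure.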

SOCIAL MEDIA: World's Largest Botnet Born from Minecraft

Source: Minecraft


This is a lego piece for the future. On the Internet (we're there right now!), a distributed denial-of-service ("DDoS") attack is when a group of computers accesses a server so many times that traffic spikes and the server crashes, taking down whatever it is hosting. So, for example, if you don't like the NY Times, just overwhelm it with robots and bring the site offline. These robots, collectively a botnet, don't have to be particularly good computers — one could, for example, hack into thousands of baby monitors over WiFi and then point them at a target.
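A toy sketch from the defender's side shows both how a flood is detected and why distributing it across a botnet defeats the obvious countermeasure (class and threshold values below are illustrative, not any real product):

```python
from collections import deque

class RateLimiter:
    """Naive defense: count requests per source in a sliding time window."""

    def __init__(self, window_seconds=1.0, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.log = {}  # source -> deque of request timestamps

    def allow(self, source, now):
        q = self.log.setdefault(source, deque())
        while q and now - q[0] > self.window:
            q.popleft()              # forget requests outside the window
        q.append(now)
        return len(q) <= self.limit  # reject once a single source floods

limiter = RateLimiter(max_requests=100)

# A normal reader makes a handful of requests:
print(limiter.allow("reader-1", now=0.5))   # allowed

# One bot fires 1,000 requests in the same second and gets cut off:
verdicts = [limiter.allow("bot-7", now=0.5) for _ in range(1000)]
print(verdicts[-1])                          # blocked: flood detected
```

The catch with a DDoS like Mirai's: the traffic comes from 600,000 *different* sources, each staying comfortably under any per-source limit, so the server drowns in requests that all look individually legitimate.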

In 2016, a tremendously powerful botnet attacked the internet infrastructure of the United States like never before, using 600,000 Internet of Things devices. Where did this weapon come from? The answer is the video game Minecraft. In 2014, the virtual sandbox had 100 million registered players and a GDP of $400 million. Part of these economics is hosting Minecraft servers for local communities, and the corollary is that executing a DDoS attack against a competitor makes you a modern-day Minecraft mafia monopoly. The 21-year-old creators of this infamous botnet built it to snipe out other video game tycoons and make more money on their Minecraft servers. Later, they used the same botnet to defraud advertisers (selling hundreds of thousands of clicks and traffic that came from robots, not humans).

At some point, the creators open-sourced the software and it spread through the dark web. That means any black-hat hacker can get the code, change it up, and try to create their own infection of IoT devices. We know, for example, that North Korea is pretty good at cyber attacks and is now hacking cryptocurrency infrastructure. The links between 21-year-old computer savants, video games, Internet money, and international geopolitical power struggles are here to stay. Which world is more powerful?

CRYPTO: We Need Real Crypto Custody

Source: Coinbase


Sure, the crypto economy has valuable infrastructure innovation that will change the world. But "code is law" is just not enough, because code is full of bugs and humans don't know what they want. The finance people are right about at least one thing. And that thing is custody.

In today's world, owning Bitcoin or Ethereum means learning a mish-mash of technical information while risking accidentally losing all your money. And if you don't lose your money through technical error, or the endless ICO phishing scams, there's a good chance something else will go wrong. We know of the hack last year that pulled $150 million from the DAO project on Ethereum, which was reversed through a hard fork, but at the cost of creating Ethereum Classic — $1.7 billion of value out of the ecosystem. Another $160 million just got flushed down the drain, with users locked out of their money permanently due to a mistake in the fix of a previous $30 million hack of the Parity wallet for the cryptocurrency.

We can keep saying that there's nothing wrong with the blockchain technology, and that it is infrastructure providers like the Parity wallet, or the Mt Gox exchange, or the smart contract writers for the DAO that made the mistakes. But that is a cop-out. Users shouldn't care about why they lost money if it happens to them through no reasonable fault of their own. The answer is to build safe storage of these assets up to the standards of the traditional financial economy. Sure, we may lose some crypto anarchists in the process to Monero and Zcash, but we will gain the global economy. The good news is that this is indeed in progress. Coinbase plans to offer institutional custody to crypto funds starting at a $100k fee (ouch!). And see Alex Batlin leaving BNY Mellon to start Trustology at ConsenSys, delivering crypto custody as a service. This is what needs to be finished before we invent the rest.
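The core custody idea can be sketched as M-of-N approval (a toy illustration of the principle — real custodians use hardware security modules and on-chain multisig, and every name below is hypothetical): no single key, and no single mistake, can move the funds.

```python
class MultisigVault:
    """Toy vault: withdrawals need approvals from M of N independent keys."""

    def __init__(self, keyholders, threshold):
        self.keyholders = set(keyholders)
        self.threshold = threshold

    def withdraw(self, amount, signatures):
        valid = self.keyholders & set(signatures)
        if len(valid) < self.threshold:
            raise PermissionError(
                f"need {self.threshold} approvals, got {len(valid)}")
        return f"released {amount}"

# 2-of-3: operations, compliance, and an offline backup key.
vault = MultisigVault({"ops_key", "compliance_key", "offline_key"}, threshold=2)

print(vault.withdraw(10, {"ops_key", "compliance_key"}))  # two approvals: ok

# A single stolen (or Parity-style buggy) key is not enough on its own:
try:
    vault.withdraw(10, {"ops_key"})
except PermissionError as e:
    print(e)
```

This is the property the Parity and DAO victims lacked: a second, independent check standing between one fault and total loss.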