intellectual property

ARTIFICIAL INTELLIGENCE: Follow up -- Humanity fights deepfake AI algorithms with AI algorithms

Last week, we noted the terrifying reality that artificial intelligence (AI) can now be used by malicious actors to conduct espionage, deploying sophisticated AI algorithms to trick their targets into perceiving the false as real.

Don't head for the hills just yet. What should be comforting is that the same degree of sophistication used to create deepfakes is being used to counter them. Take Adobe -- a company renowned for Photoshop, software often used to edit and manipulate images -- which is collaborating with students from UC Berkeley to develop a method for detecting edits to images in general. The initial focus is on detecting when a face has been subtly manipulated using Photoshop's own Face Aware Liquify tool, which makes it relatively easy to modify the characteristics of someone's eyes, nose, mouth, or entire face. As with any neural network, training to move beyond this initial use-case will take time.

Decentralized public network Hedera Hashgraph has been a prominent promoter of how Distributed Ledger Technology (DLT) can play a vital role in establishing the origins of a piece of media (images, video, and sound). DLTs excel at providing an immutable, distributed timestamping service: any material action (such as an edit) performed on a piece of media secured by the DLT is recorded with a timestamp. Such a timestamp could reveal an edit made by a malicious actor.
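The core idea can be illustrated without any particular ledger. The following is a minimal sketch -- not Hedera's actual API -- of a tamper-evident, timestamped edit log: each record commits to the media's content hash and to the previous record, so a retroactive change anywhere in the history invalidates every later link. All function names here are hypothetical illustrations.

```python
import hashlib
import json
import time

def fingerprint(media_bytes: bytes) -> str:
    """Content hash that uniquely identifies one version of the media."""
    return hashlib.sha256(media_bytes).hexdigest()

def append_record(log: list, media_bytes: bytes, note: str) -> dict:
    """Append a timestamped record; each record hashes the previous one,
    which is what makes the history tamper-evident."""
    prev = log[-1]["record_hash"] if log else "0" * 64
    body = {
        "media_hash": fingerprint(media_bytes),
        "note": note,  # e.g. "original upload", "edit"
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    body["record_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_log(log: list) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for rec in log:
        if rec["prev_hash"] != prev:
            return False
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True
```

A real DLT adds what this sketch cannot: the log is replicated across many independent nodes, so no single party can quietly rewrite it.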

Earlier this month, Microsoft removed its MS Celeb database of more than 10 million images of 100,000 faces. Initially intended for academic purposes, the concern was that the database was being used -- primarily by Chinese surveillance companies -- to train sophisticated AI algorithms for government surveillance, as well as for deepfake applications.

The U.S. House is currently developing the DEEPFAKES Accountability Act -- are you ready for the acronym: Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act -- which seeks to criminalize synthetic media that is not branded as such. The Act would require creators of synthetic media imitating a real person to disclose that the media is altered or generated, using "irremovable digital watermarks, as well as textual descriptions" embedded in the metadata.

Within a financial context, there is no doubt that cyber crime takes the lion's share of most financial institutions' security budgets -- in 2016, JPMorgan doubled its cybersecurity budget to $500 million, and Bank of America said it has an unlimited budget for combating cyber crime. As the threat of deepfakes becomes more prominent, financial institutions should ensure not only that responding to such attacks forms part of these budgets, but that they are actively investing in solutions like those above, accelerating the development of the neural networks needed to form an effective defence against deepfake attacks.


Source: Adobe Deepfake detection tool (via Engadget), Deepfake detection (via CBS News), DEEPFAKES Accountability Act

INSURANCE: Can $250 Million Get Insurtech WeFox Past Lemonade's Litigation?


German insurtech startup WeFox -- backed by Ashton Kutcher and banked by Goldman Sachs -- is in the market for $250 million of fresh capital to finance international expansion. That is a meaningful amount of venture funding for any insurtech company, especially one that raised its Seed round as recently as late 2014. See the table from Coverager below for the largest raises in their database in the space -- though we would advise you to ignore Theranos. Since 2014, WeFox has changed its name from FinanceFox, acquired ONE Insurance, and intermediated deals with a number of large incumbent underwriters.

So what kind of service do you need to provide to deserve a unicorn round? Well, WeFox gives customers the ability to manage all their insurance contracts across products in one place, supported by a personal agent. It acts as a mobile-first broker for individuals, and provides an outsourced front office to incumbents that aggregates different insurance use-cases into a single app. The app can be free because large insurance companies pay WeFox to acquire clients, and then to manage those clients. Can you say B2C2B2C?

Which brings us to ONE. Whereas WeFox is the insurance supermarket, ONE is a proprietary product on that supermarket's shelf. And it has just been sued by Lemonade, the radically transparent renters insurance startup, for copyright infringement and reverse engineering. Allegedly, WeFox created fake accounts and made fake claims on the Lemonade app to copy its workflow and process. And Lemonade has hired an expensive law firm -- White & Case -- to litigate. This makes us ask three questions. First, is a user interface something that can be protected by copyright? There must be something deeper to this story. Second, are startup ventures now so well funded that they make worthwhile litigation targets? And third, if insurance is ripe for disruption leading to a massive market for new companies, isn't it better to spend cash on acquiring customers rather than lawyers?


Source: Pitchbook (WeFox Raise), Coverager (Insurtech Raises), LinkedIn (Lemonade vs ONE), SPGlobal (Lemonade Growth)