INSURTECH: Breaking down how technology seeks to transform the $5 Trillion Insurance industry

When it comes to insurance, the $5 trillion global industry is often deemed a slow-moving, conservative sector resistant to change. Innovation is thought to be achieved by merely repackaging existing products into flavorsome marketing wrappers with re-bundled cost structures. Yet the variables driving the need for change -- shifting demographics and consumer behavior, enhanced connectedness through digital mediums, the emergence of the shared economy, and the move from asset ownership to renting or fractional ownership -- are having a profound effect on the sector as a whole. These variables are enabled by the likes of artificial intelligence (AI) applications, internet of things (IoT) ecosystems, and distributed ledger technologies (DLT), which help the insurance sector respond to new trends, streamline operations, reduce costs, create new revenue models, and develop innovative products and solutions across the value chain. Let's take a look at some examples.

The core of any insurance product is the back-office process of underwriting, which is now leveraging AI to extract insights from various data sources, using IoT devices as the collection medium and cloud infrastructure to instantaneously feed fresh data into the models used to improve risk profiling, and thus pricing. US-based Flyreel has developed an AI-enabled underwriting system that replaces the need for professional insurance inspections. It achieves this via a mobile app used to scan a property; the captured imagery is then run through computer vision algorithms that automatically identify items relevant to the customer's policy, improving underwriting efficiency for both property owners and carriers. Very cool.
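
As a rough sketch of what "run the scan through computer vision" can look like in practice, here is a hedged example using an off-the-shelf pretrained detector from torchvision; this is an assumption for illustration, not Flyreel's actual pipeline, and the mapping of detected labels to policy-relevant flags is invented.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

# Hypothetical mapping from detector labels to items an underwriter might care about
POLICY_RELEVANT = {"oven": "cooking equipment", "tv": "high-value electronics",
                   "couch": "contents coverage", "fire hydrant": "proximity to hydrant"}

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]

def scan_frame(path: str, threshold: float = 0.7):
    """Detect objects in one frame of a property scan and flag policy-relevant items."""
    img = read_image(path)                       # uint8 CHW tensor from the phone camera
    batch = [weights.transforms()(img)]
    with torch.no_grad():
        detections = model(batch)[0]
    found = [labels[i] for i, score in zip(detections["labels"], detections["scores"])
             if score > threshold]
    return {name: POLICY_RELEVANT[name] for name in found if name in POLICY_RELEVANT}

# e.g. scan_frame("living_room.jpg") -> {"tv": "high-value electronics", ...}
```

The point, as above, is that the "inspection" happens at the point of capture rather than through a scheduled site visit.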

Reducing fraudulent claims losses -- estimated to run at US $80 billion per year -- and improving claim settlement efficiency are crucial areas that technology seeks to address. Inscribe.ai is a San Francisco-based document fraud detection platform that uses a combination of natural language processing (NLP) and computer vision to scan documents and flag fraudulent claims. On the claim settlement side, State Farm is testing a permissioned DLT powered by smart contracts for auto claims subrogation -- the process by which insurers settle claims losses amongst each other -- to significantly speed up the process, with automatic payment disbursement as soon as liability determination is completed.
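
To make the subrogation workflow concrete, here is a hedged, toy sketch in Python of the settlement logic such a smart contract would automate; a real implementation would live on the permissioned ledger itself, and the carrier names and amounts are invented.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SubrogationClaim:
    claim_id: str
    insurer_a: str                                # paid its policyholder, seeks recovery
    insurer_b: str                                # insurer of the at-fault party
    amount_paid: float                            # what insurer A paid out
    liability_share_b: Optional[float] = None     # set once liability is determined
    settled: bool = False
    ledger: list = field(default_factory=list)

    def record_liability(self, share_b: float):
        self.liability_share_b = share_b
        self.ledger.append(("liability_recorded", share_b))
        self._maybe_settle()                      # settlement fires automatically

    def _maybe_settle(self):
        if self.liability_share_b is not None and not self.settled:
            payment = round(self.amount_paid * self.liability_share_b, 2)
            self.ledger.append(("payment", self.insurer_b, self.insurer_a, payment))
            self.settled = True

claim = SubrogationClaim("C-1001", "Carrier A", "Carrier B", amount_paid=4_200.0)
claim.record_liability(0.8)                       # Carrier B found 80% at fault
print(claim.ledger)
```

The speed-up described above comes from collapsing the "determine liability, invoice, reconcile, pay" cycle into a single automatic step once the liability record lands.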

Lastly, a combination of DLTs, advanced driver-assistance systems (ADAS), and other telematics installed in consumers' vehicles to collect real-time data on driver behavior and driving patterns has been essential to creating more accurate, dynamic risk assessments and pricing models. These include pay-as-you-drive (PAYD), pay-how-you-drive (PHYD), and on-demand just-in-time insurance pricing models, spearheaded by insurers such as Cuvva, Trōv, Metromile, Insure the box, and Root Insurance (a hypothetical pricing sketch follows below). On the topic of auto insurance, we would be remiss if we ignored self-driving cars. Although the argument is that such vehicles could reduce insurance premiums by 85-90%, new risks such as software and hardware failure, as well as cyber attack, will play a major part in the formulation of new premiums. Needless to say, startups such as Avinew are already offering policies covering semi-autonomous vehicles, using telematics, AI, and machine learning to build comprehensive risk assessments and policy pricing models.
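
As a rough illustration of how PAYD/PHYD pricing differs from a flat annual premium, here is a hypothetical sketch; the base rate, weights, and telematics inputs are invented, not any insurer's actual model.

```python
def usage_based_premium(miles_driven: float,
                        harsh_braking_per_100mi: float,
                        night_share: float,
                        base_monthly: float = 20.0,
                        per_mile: float = 0.05) -> float:
    """Toy PAYD/PHYD premium: pay per mile, adjusted by behavior-derived risk."""
    behavior_multiplier = (1.0
                           + 0.04 * harsh_braking_per_100mi   # penalize harsh braking
                           + 0.30 * night_share)              # penalize share of night driving
    return base_monthly + miles_driven * per_mile * behavior_multiplier

# A light, careful driver vs. a heavy, aggressive one (illustrative numbers)
print(usage_based_premium(250, harsh_braking_per_100mi=1, night_share=0.05))   # ~33 USD
print(usage_based_premium(1200, harsh_braking_per_100mi=6, night_share=0.35))  # ~101 USD
```

The telematics feed simply keeps those behavior inputs current, so the premium tracks how the car is actually being driven rather than an annual proxy.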

Without a doubt, in the not-too-distant future we will see the emergence of decentralized autonomous insurance organizations that leverage IoT, AI, and DLTs to enable peer-to-peer (P2P) insurance and eliminate the need for middlemen. We will see a state of the industry in which customer engagement, policy underwriting, claim filing, inspection, claim settlement, and payments are customized and fully automated. And we cannot wait.

Source: Flyreel, Forbes (State Farm And USAA See Stark Increase In Efficiency When Testing Blockchain Subrogation), Trōv, Insure the box, Avinew

ARTIFICIAL INTELLIGENCE: When it comes to Automation, executives get their priorities straight

It takes two to tango, and as a report on The State of AI and Machine Learning suggests, 82% of technical practitioners and 94% of line-of-business owners believe that humans and machines will collaborate in the future, rather than one dominating the other. The concept of such an idealistic future should instill some comfort in those whose jobs are directly threatened by automation, i.e. an average of 63% of front-office employees across the banking, investment, and insurance industries. However, the journey to get there will be messy. There is no greater example of this than Deutsche Bank's 18,000-person workforce cut, complemented by a $14.5 billion IT budget injection by 2022. As noted in a recent blog entry, Deutsche's move can be viewed as the former of two possible outcomes of automation: "(1) remove $1 billion of cost by slashing your team, or (2) make your team $1 billion more productive". Amazon is working towards the latter outcome by spending $700 million on up-skilling its workers.

This raises an interesting point -- automation does not directly drive the loss of jobs; the priorities of C-suite executives do. In a briefing paper by The Economist on The Advance of Automation, less than half (47%) of executive respondents strongly agreed that automation is most effective when it complements humans rather than replaces them, while 57% believe automation will change the skills and requirements the workforce needs. Put simply, there seems to be no true preference between the two outcomes amongst the 500+ surveyed executives. Additionally, only 18% saw automation freeing up employees to take on higher-level roles, and 17% saw enhanced employee engagement and experience. But is it too early to truly lean on these statistics?

Lastly, this week saw JP Morgan roll out a new digital investment service, i.e. a roboadvisor called 'You Invest', via the Chase mobile app. The service targets younger clients with as little as $2,500 to invest across a mix of JPMorgan ETFs, costing 35 basis points per annum. Similarly, an ex-Coutts banker has launched a digital wealth management platform called Rosecut Technologies, combining artificial intelligence and human advice to provide bespoke investment solutions aimed at high-net-worth clients. What we are seeing here is more evidence that automation is not the culprit behind looming job cuts. Rather, it's the B2B consultants promising automation solutions to executives, the pitched cost benefit of replacing workers with algorithms, and the prioritization of lean machine-driven profits that are the true culprits.

Source: Figure Eight (The State of AI and Machine Learning Report), The Economist Intelligence Unit (The Advance of Automation Report)

ROBOADVISORS & DIGITAL WEALTH: Artificial intelligence battles in financial markets but conquers in cryptocurrencies

It has become commonplace for users of online platforms to expect that their attention, i.e. time spent using the platform, converts into loyalty -- in the form of an artificial intelligence algorithm that knows them better over time, e.g. auto-populating search fields, or recommending clothes to wear, books to read, or food to eat. Yet when it comes to applying similarly sophisticated algorithms to financial markets, why aren't quant funds consistently outperforming the market?

Artificial intelligence is most useful where the problem set is narrowly defined, i.e. where it is well known what is being optimized and how, and where fuzzy data needs the structuring at scale that AI provides. A narrowly defined problem may be: given this particular set of personal characteristics, should this person be allowed to borrow this particular amount of money, based on prior examples. A poorly defined problem may be: predict the price of a stock tomorrow given thousands of inter-correlated data points and their price history. It all boils down to quant investment strategies' reliance on pattern recognition: models look to correlate past periods of superior returns with specific factors including value, size, volatility, yield, quality, and momentum. Such approaches have several fundamental weaknesses: (1) hindsight bias -- the belief that understanding the past allows the future to be predicted; (2) ergodicity -- the lack of a truly representative data sample used in the model; and (3) overfitting -- when a model tries to fit a trend in data that is too noisy, i.e. has too many parameters or factors. Logically, over time the anomalies these quant strategies rely on exploiting should dissipate, given the swift pace at which technology, competitors, and data move to correct them. This is not stopping the likes of augmented analyst platform Kensho (acquired by S&P Global for $550 million), crowdsourced machine learning hedge fund Numerai, and the industry-leading quantamental funds of BlackRock. There is an inherent contradiction here: the approach exploits inefficiencies, but requires market efficiency to realign prices and generate returns.
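
To make the overfitting point concrete, here is a minimal, hypothetical sketch on synthetic data (plain NumPy): regress returns on a growing number of purely random "factors" and watch the in-sample fit improve while out-of-sample performance stays at noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 250, 250                              # synthetic daily returns
returns = rng.normal(0, 0.01, n_train + n_test)         # pure noise: nothing predictable

def r2(y, y_hat):
    """Share of variance explained; negative means worse than predicting the mean."""
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

for n_factors in (5, 50, 200):
    X = rng.normal(size=(n_train + n_test, n_factors))  # random, meaningless "factors"
    X_tr, X_te = X[:n_train], X[n_train:]
    y_tr, y_te = returns[:n_train], returns[n_train:]
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # fit factor loadings in-sample
    print(f"{n_factors:>3} factors | in-sample R^2 {r2(y_tr, X_tr @ beta):5.2f}"
          f" | out-of-sample R^2 {r2(y_te, X_te @ beta):6.2f}")
```

In-sample fit climbs toward a perfect score as the factor count grows, while out-of-sample explanatory power hovers around or below zero -- exactly the hindsight-bias-plus-overfitting trap described above.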

With Cryptocurrencies, the strategies are different. Native Cryptocurrencies, i.e. Ether and Bitcoin, are considered unconstrained assets, with limited correlations to other assets. Additionally, the data sets and factors that need to be considered when trading Cryptocurrencies are far fewer -- many of which are speculative and co-dependent -- resulting in far more predictable patterns than in traditional financial markets. Because most Cryptocurrency trading is autonomously and algorithmically driven, patterns are more easily discernible, and human trading behavior often sticks out in stark contrast to established market behavior. The issue, of course, is not the opportunity to profit -- it's the magnitude of such profits. Currently, Cryptocurrencies simply do not have the volume and liquidity necessary for autonomous trading strategies to be deployed in large quantums. Percentage returns for algorithmic Cryptocurrency trading may be significant, but beyond certain volumes, especially when assets under management start approaching the hundreds of millions of dollars, traders need to get far more creative and circumspect in deploying funds, as the opportunities are far fewer at larger order sizes.
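
To illustrate why returns decay with order size, here is a hypothetical back-of-the-envelope sketch using the commonly cited square-root market-impact rule of thumb (impact roughly proportional to volatility times the square root of order size over daily volume); every number is made up for illustration.

```python
# Toy capacity check for an algorithmic crypto strategy (illustrative numbers only).
daily_volume_usd = 200e6      # assumed tradable daily volume for the pair
daily_vol = 0.04              # assumed daily price volatility (4%)
edge = 0.003                  # assumed gross edge per round trip (30 bps)

def net_edge(order_usd: float) -> float:
    """Gross edge minus a square-root market-impact estimate."""
    impact = daily_vol * (order_usd / daily_volume_usd) ** 0.5
    return edge - impact

for order in (1e5, 1e6, 1e7, 5e7):
    print(f"order ${order:>12,.0f} -> net edge {net_edge(order) * 1e4:7.1f} bps")
```

The gross edge never changes, but once orders become a meaningful fraction of daily volume the impact term swallows it -- which is the capacity ceiling described above.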

For now at least, AI and machine learning are still some ways away from consistently beating the financial markets, but with a bit of tweaking they may be a lot closer to beating the Cryptocurrency markets. Evidence of this is already beginning to show -- in 2018 Swiss asset manager GAM's Systematic Cantab quant fund lost 23.1 percent, and Neuberger Berman is considering closing its factor-investing quant fund over poor performance, all whilst Cryptocurrency quant funds returned on average 8% over the same period. While the prospect of searching for phantom signals that eventually disappear could dissuade some people from working in finance or Cryptocurrency trading, the lure of solving tough problems, coupled with the potential to dip into the $200 billion opportunity, means that there will always be more than enough people who will try.

Source: Autonomous NEXT Keystone Deck (Augmented Commerce), PWC (2019 Crypto Hedge Fund Report)

ARTIFICIAL INTELLIGENCE: Follow up -- Humanity fights deepfake AI algorithms with AI algorithms

Last week, we noted the terrifying reality of how artificial intelligence (AI) can now be used by malicious actors to conduct espionage, using sophisticated AI algorithms to trick their targets into perceiving the false as real.

Don't head for the hills just yet. What should be comforting is that the same degree of sophistication used to create deepfakes is being used to counter them. Take Adobe -- a company renowned for its Photoshop product, which is often used to edit and manipulate images -- which is collaborating with students from UC Berkeley to develop a method for detecting edits to images in general. The initial focus is detecting when a face has been subtly manipulated using Photoshop's own Face Aware Liquify tool, which makes it relatively easy to modify the characteristics of someone's eyes, nose, mouth, or entire face. As with any neural network, training it to move beyond this initial use case will take time.

Decentralized public network Hedera Hashgraph has been a prominent promoter of how distributed ledger technology (DLT) can play a vital role in establishing the origins of a piece of media (images, video, and sound). DLTs are really good at providing an immutable and distributed timestamping service, in which any material action (an edit) applied to a piece of media secured by the DLT is recorded via a timestamp. Such a timestamp could indicate an edit made by a malicious actor.
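
As a toy illustration of the underlying idea (not Hedera's actual service), hash-chaining per-frame fingerprints gives you a tamper-evident record: change any frame and every subsequent fingerprint stops matching.

```python
import hashlib, time

def fingerprint_chain(frames):
    """Hash each frame together with the previous entry's hash and a timestamp."""
    chain, prev = [], b""
    for i, frame in enumerate(frames):
        digest = hashlib.sha256(prev + frame).hexdigest()
        chain.append({"frame": i, "ts": time.time(), "hash": digest})
        prev = digest.encode()
    return chain

original = [b"frame-0", b"frame-1", b"frame-2"]
tampered = [b"frame-0", b"frame-1 (edited)", b"frame-2"]

ok = fingerprint_chain(original)
bad = fingerprint_chain(tampered)
# First index where the published chain and the recomputed chain diverge
print([a["hash"] == b["hash"] for a, b in zip(ok, bad)])   # [True, False, False]
```

On a public DLT the chain entries would be anchored on-ledger, so no single party could quietly rewrite the history.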

Earlier this month, Microsoft removed its MS Celeb database of more than 10 million images of 100,000 faces, primarily used by Chinese surveillance companies. Initially intended for academic purposes, the concern was that the database was being used to train sophisticated AI algorithms for government surveillance, as well as deepfake applications.

The U.S. House is currently developing the DEEPFAKES Accountability Act -- are you ready for the acronym: Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act -- which seeks to criminalize synthetic media that is not branded as such. The Act would require creators of synthetic media imitating a real person to disclose that the media is altered or generated, using "irremovable digital watermarks, as well as textual descriptions" embedded in the metadata.

Within a financial context, there is no doubt that cyber crime takes the lion's share of most financial institutions' security budgets -- in 2016, JPMorgan doubled its cybersecurity budget to $500 million, and Bank of America said it has an unlimited budget for combating cyber crime. As deepfake threats become more prominent, financial institutions should not only make defending against such attacks part of these budgets, but also actively invest in solutions like those above, in order to accelerate the development of the neural networks needed to form an effective defence against deepfake attacks.

Source: Adobe Deepfake detection tool (via Engadget), Deepfake detection (via CBS News), DEEPFAKE Accountability Act

ARTIFICIAL INTELLIGENCE: Proof that we have been training AI fakes to stab us in the back

In the 1933 film Duck Soup, Chico Marx famously asked, "who ya gonna believe, me or your own eyes?" Fairly meaningless in the 30s, but today it's more relevant than ever. Let us explain. We know how the ever-expanding capacities of computing power and algorithmic efficiency are leading to some pretty wacky technology in the realm of computer vision. Deepfakes are one of the more terrifying outcomes of this. A deepfake can be described as a fraudulent copy of an authentic image, video, or sound clip, manipulated to create an erroneous interpretation of the events captured by the authentic media. The word 'deep' refers to the 'deep learning' capability of the artificially intelligent algorithm trained to manifest the most realistic version of the faked media. Real-world examples include: former US president Barack Obama saying some outlandish things, Facebook founder Mark Zuckerberg admitting to the privacy failings of the social media platform and promoting an art installation, and Speaker of the US House of Representatives Nancy Pelosi made to look incompetent and unfit for office.

Videos like these aren't proof, of course, that deepfakes are going to destroy our notion of truth and evidence. But they do show that these concerns are not just theoretical, and that this technology -- like any other -- is slowly going to be adopted by malicious actors. Put another way, we usually tend to think that perception -- the evidence of our senses (sight, smell, taste, etc.) -- provides pretty strong justification of reality. If something is seen with our own eyes, we normally tend to believe it, e.g. a photograph. By comparison, third-party claims about the senses -- which philosophers call "testimony" -- provide some justification, but sometimes not quite as much as perception, e.g. a painting of a scene. In reality, we know our senses can be deceptive, but that's less likely than other people (malicious actors) deceiving us.

What we saw last week took this to a whole new level. A potential spy infiltrated some significant Washington-based political networks on the social network LinkedIn, using an AI-generated profile picture to fool existing members of those networks. "Katie Jones" was the alias used to connect with a number of policy experts, including a US senator's aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve. Although there's evidence to suggest that LinkedIn has been a hotbed for large-scale, low-risk espionage by the Chinese government, this instance is unique because a generative adversarial network (GAN) -- an AI method popularized by websites like ThisPersonDoesNotExist.com -- was used to create the account's fake picture.

Here's the kicker: these GANs are trained by the mundane administrative tasks we all participate in when using the internet day to day. Don't believe us? Take Google's human verification service "Captcha" -- odds are you've completed one of these at some point. The purpose of these goes beyond proving you are not a piece of software unable to recognize all the shopfronts in 9 images. For instance: being asked to type out a blurry word helps Google Books' search function match real text in uploaded books; rewriting skewed numbers helps train Google Street View to read house numbers for Google Maps; and selecting all the images that contain a car helps Google's self-driving car company Waymo improve its algorithms to prevent accidents.

The buck doesn't stop with Google either; human-assisted AI is explicitly the modus operandi of Amazon's Mechanical Turk (MTurk) platform, which rewards humans for assisting with tasks beyond the capability of certain AI algorithms, such as highlighting key words in an email or transcribing difficult-to-read numbers from photographs. The name Mechanical Turk stems from an 18th-century chess-playing "automaton" that was in fact a mechanical illusion, with a human hidden inside the machine operating the arms. Clever huh?!

Ever since the financial crisis of 2008, all activity within a regulated financial institution must meet the strict compliance and ethics standards enforced by the regulator of that jurisdiction. That a platform like LinkedIn, with over 500 million members, can be used by malicious actors to solicit insider information or conduct corporate espionage should be of grave concern to all financial institutions, big and small. What's worse is that neither the actors nor the AI behind these LinkedIn profiles can easily be traced and prosecuted for such illicit activity, especially when private or government institutions are able to launch thousands of such profiles at a time.

Source: Nancy Pelosi video (via Youtube), Spy AI (via Associated Press), Google Captcha (via Aalto Blogs), Amazon MTurk

ROBOADVISORS & INVESTING: Robinhood's latest $8bn valuation means that scale players need to wake up

There's no such thing as a free lunch in life, but there are such things as free trades on Robinhood. What Chime did with banking, Robinhood has done with trading. Its massive 4-million-strong active user base is the envy of every other Fintech, so it's no surprise that the firm is estimated to be valued at $7-8 billion following a $200 million fund raise with existing investors. Robinhood was founded in 2013 by two former Stanford University roommates, Baiju Bhatt and Vlad Tenev, with the goal of building a brokerage service that democratized access to the financial system -- specifically, stock trading and its significant barriers to entry (costs, fees, and minimum capital requirements). Since its launch, millennial investors -- an elusive audience for traditional financial services firms -- have flocked to the service to trade stocks, options, cryptocurrencies, and exchange-traded funds at low-to-no fees.

Such success stems from the app's ability to earn fees via indirect channels: margin interest, lending, a $6-per-month premium product called Robinhood Gold offering up to $1,000 of margin to trade with, and lastly, rebates from high-frequency traders via payment for order flow. Here, third-party market makers such as Citadel Securities, Two Sigma, and Virtu pay Robinhood a rebate for executing trades on the app's behalf, apparently to offer better execution quality and prices. Whilst that sounds noble, it must not be forgotten that such a non-transparent practice -- as noted by CNBC -- could encourage brokers to send orders to the market makers that offer the most generous rebates, and not necessarily the ones that offer the best prices for stocks. However, this is likely not the case, as Robinhood's leadership has stressed that "we don't take rebates into consideration when we choose which market maker will execute your orders. Also, all market makers with whom we work have the same rebate rate". Last year Bloomberg reported that Robinhood made in excess of 40 percent ($69 million) of its 2018 revenue from payment for order flow.

Additionally, Robinhood is planning a U.K. launch to muscle up against the likes of challenger broker Freetrade -- a London-based twin of Robinhood -- and challenger bank Revolut, which has indicated its intention to offer a free trading platform in the near future. The interesting aspect here is that Robinhood has been desperate to become a full-service bank. Evidence of this came last year, when the company ended up with egg on its face after announcing its intention to launch savings and checking accounts with 3% interest rates (30 times the U.S. national average) despite not being FDIC insured (which is illegal). Soon after this was brought to regulators' attention, the product was rebranded as a "cash management program" and references to deposit protection were swiftly removed. Yet the pursuit continues: the company has recently made a second attempt via an application for a bank charter with the Office of the Comptroller of the Currency (OCC), in a push to offer traditional banking services.

Lastly, there are rumors that Robinhood is expecting a much bigger round of funding later this year, which could value the company at over $10 billion. This, coupled with the success of the company's commission-free crypto trading app, its U.K. expansion, and the launch of its full-service bank, should make scale players in the industry such as Schwab, E-Trade, M1 Finance, and Fidelity fairly nervous. From zero-fee index funds to zero-fee trading of single stocks, fee-free offerings like Robinhood, Vanguard, and Freetrade have initiated a pricing war with the scale players. So long as the strategy for fighting this war remains platforms and marketplaces cross-selling products to retain customers and lock them into a sales cycle, this tech-enabled price war will squeeze margins down to zero. Last one to the bottom is a rotten egg.

Source: Robinhood (via Bloomberg), Robinhood Gold (Robinhood Blog), CNBC (article), Robinhood Crypto (Robinhood Blog)

ARTIFICIAL INTELLIGENCE: Amazon's new wearable edges us closer to a reality of emotionally manipulative financial institutions

In the past, we have touched on how a specific device that you use for conversational interface interactions will be locally better at understanding you -- rather than some giant squid-like monster AI hosted on Amazon Web Services. But what if the conversational interface device is the friendly avatar of such a terrifying AI monster, one that possesses the ability to emotionally manipulate its user? Well, Isaac Asimov eat your heart out: Amazon is reportedly building an Alexa-enabled wearable capable of recognizing human emotions. Using an array of microphones, the wrist-worn device can collect data on the wearer's vocal patterns and use machine learning to build models discerning between states of joy, anger, sorrow, sadness, fear, disgust, boredom, and stress. As we know, Amazon is not without its fair share of data privacy concerns, with Bloomberg recently disclosing that a global team of Amazon workers was reviewing audio clips from millions of Alexa devices in an effort to enhance the capability of the assistant. Given this, we can't help but think of the wearable as a means to use knowledge of the wearer's emotions to recommend products or otherwise tailor responses.

Let's step back for context. Edge computing is the concept that there are lots of unique distributed smart devices scattered throughout our physical world, each needing to communicate with other humans and devices. Two layers of this are very familiar to us: (1) the phone and (2) the home. Apple has become a laggard in artificial intelligence over the last several years -- behind Google on the phone, and behind Amazon and Google in the home. Further, when looking at core machine learning research, Facebook and Google lead the way. Google's assistant is the smartest and most adaptable, leveraging the company's expertise in search intent to divine meaning. Amazon's Alexa has a lead in physical presence, and thus customer development, as well as its attachment to voice commerce. Facebook is expert in vision and speech, owning the content channels for both (e.g., Instagram, Messenger). We also see (3) the car as a developing warzone for tech companies' data-hungry gadgets.

Looking back at financial services, it's hard to find a large financial technology provider -- save for maybe IBM -- that can compete for human attention or precision of conversation with the big tech firms (not to mention the Chinese techs). We do see many interesting symptoms, like KAI -- a conversational AI platform for the finance industry used by the likes of Wells Fargo, JP Morgan, and TD Bank -- but barely any compete for a relationship with a human being in their regular life. The US is fertile ground for this stuff, because a regulated moat protects financial data from the tech companies. That moat is likely to keep Big Tech from diving head first into full-service banking, but with the recent launch of the Apple Card we are starting to see cracks in it. So how long can we rely on the narrative so eloquently put by Chris Skinner: "the reason Amazon won't get into full service banking is because dealing with technology is very different to dealing with money; furthermore, dealing with money through technology is very different to dealing with technology through money"? Also, how would you feel about your bank knowing when you are at your most vulnerable?

Source: Bloomberg Article, KAI Platform (via Kasisto)

ROBO ADVISORS: Robo-advisors are winning but leaving cash on the table

We will keep this brief. In the recently updated "Robo-Advisors with the Most AUM" ranking, the top 5 robo-advisors, consisting of three Fintechs and two incumbents, remained in the same positions as last year, although each of them has seen gains in assets under management (AUM) and number of accounts. Yet the jury is out as to whether gathering assets or gathering users is a good measure of success -- we wrote about it here.

A lot of digital wealth management innovation targets people who have been excluded from the traditional wealth management business because the amounts they have to invest are too small for the economics of traditional wealth management to work. So the strategy is to target this opportunity by getting to the consumer, earning their loyalty with at least one good service (perhaps free), and then locking them into a full financial services relationship. The expected outcome is a reduction in the number of these individuals and/or in the assets they hold as unadvised assets -- the liquid cash sitting in real wallets and checking & savings accounts.

Daily Fintech's Efi Pylarinou has done the heavy lifting on this, finding unadvised assets in the US, EU, and UK to be around $14.5 trillion, $13.7 trillion, and $3 trillion respectively. Surprisingly, these figures have on average grown 9% over the past 3 years. Such findings point to the fact that, since their inception, robo-advisors have had little or no impact on unadvised assets. Although unadvised assets are affected by all innovations in Fintech, robo-advisors are the ones most likely to incentivise you to part with your cash, to some degree, in the hope of generating returns with very little friction or cost. And if this is a direct result of trends in monetary policy, public markets, and human behavior superseding the digitization of capital markets, when should we expect the reversal to occur?

Source: Robo-Advisors with the most AUM (via Roboadvisorpros)

ARTIFICIAL INTELLIGENCE: Synthesia prove that not all deep fakes are malicious, but for those that are, is Blockchain the answer to spotting them?

Last week we touched on how convolutional neural networks can be easily duped using nothing more than a computer-generated "patch" applied to a piece of cardboard (here). This week we want to keep the theme of neural networks alive, only this time addressing the fascinating topic of deep fakes. We have discussed this before (here), touching on how hyper-realistic media formats, such as images and videos, can be faked by a model where one algorithm creates images and another accepts or rejects them as sufficiently realistic, with repeated evolutionary turns at this problem. These algorithms are known as generative adversarial networks (GANs). Initially, GANs were used in jest to make celebrities and politicians say and do things they never did (here); over time, however, their sophistication has prompted more malicious use cases.

Evidence of such malicious intent reportedly comes from China, where GANs are used to manipulate satellite images of the earth in order to confuse the image-processing capabilities of adversarial governments' algorithms. Think about it: vision models, much like in our cardboard patch example, can be fooled into believing that a bridge crosses an important river at a specific point. From a military perspective, this could lead to unforeseen risk to human lives; similarly so in the context of open-source data used by software to navigate autonomous vehicles across a landscape. Such malicious use cases of GANs have drawn the concern of government entities such as the US Office of the Director of National Intelligence, which explicitly noted deep fakes in its latest Threat Assessment Report (here). China has gone one step further, recently announcing a draft amendment to its Civil Code Personality Rights to reflect an outright ban on deep fake AI face-swapping techniques. Currently, GANs dedicated to counteracting deep fakes are purely reactionary to those dedicated to creating them, but we are seeing novel solutions harnessing blockchain technology from the likes of Amber -- which protects the integrity of image/video data via "fingerprinting", a sequenced cryptographic technique applied to the bits of data associated with a single frame/image that flags any manipulation of the original file.
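
For readers who want the mechanics, here is a deliberately tiny sketch of the adversarial setup on toy 1-D data (not images, and certainly not a deepfake pipeline): a generator learns to mimic a target distribution because a discriminator keeps grading its output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    return torch.randn(n, 1) * 0.5 + 3.0     # "real" data: N(mean 3.0, std 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # 1) Train the discriminator to tell real samples from generated ones
    real, noise = real_samples(64), torch.randn(64, 8)
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())  # drifts toward ~3.0 / ~0.5
```

Swap the 1-D samples for images and the two small networks for deep convolutional ones and you have, conceptually, the engine behind both deepfake generation and GAN-based detection.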

But let's end this on a good note, shall we? An AI-driven video production company called Synthesia used GANs to "internationalize" a message delivered by football icon David Beckham to raise awareness for the Malaria Must Die initiative. Synthesia's GANs were trained on Beckham's face so that 9 different malaria survivors could deliver their message through his avatar in their mother tongues. The resulting campaign has garnered over 400 million impressions globally, and provides insight into the evolution of digital video marketing, corporate communications, and advertising that leverages GANs to reduce production costs and improve engagement.

Source: arxiv.org (Deep Video Portraits Report), AmberVideo, Malaria Must Die (via Youtube)

INNOVATION & PAYMENTS: Tesla entering the autonomous vehicle "space race" does not bring us closer to a Utopian future, yet

It's difficult to ignore the utopian dream of riding shotgun in a fully autonomous vehicle whilst chuckling at the seemingly prehistoric ideas of road rage, congestion, and side-mirrors. Yet upstarts dedicated to making this dream a reality ingest massive amounts of venture funding with little return. Take transportation-on-demand app Uber, which recently raised $1 billion for its Advanced Technologies Group (ATG) from Softbank, Toyota, and auto-parts manufacturer Denso (here). The aim of the investment is to accelerate the development and commercialization of automated ridesharing services, especially given that the company blames the bulk of its estimated $702 million net loss this quarter on costs attributed to human drivers (here). The question is how sophisticated the software has become since the 2018 incident in which a driverless Uber vehicle struck and killed a pedestrian. Interestingly, Alphabet-backed Uber rival Waymo boasts over 10 million miles' worth of autonomous driving data as a hedge against such fatal incidents. Up until last week, Waymo prided itself on being the only upstart to have launched a dedicated commercial driverless car service (Waymo One). Enter electric-vehicle giant Tesla, which has promised an all-electric fleet of 1 million self-driving Tesla taxis by the end of 2020 -- some of which will come from existing Teslas on the road, used as autonomous taxis when their owners do not need them. This is noteworthy because Tesla has amassed over 1 billion miles' worth of 'Autopilot' data, which was used to build its latest custom-designed artificial intelligence driving chip, claimed to allow Teslas to pilot themselves. The only missing pieces of the puzzle are (1) regulatory approval for such vehicles to legally operate and (2) "feature-complete" software to prevent any life-threatening incidents, both of which are assured to be ready for a year-end 2020 launch.

Whilst there's no doubt that we have a "space race"-type scenario between digital transportation upstarts -- Waymo, Uber, and now Tesla, all competing to arbitrage a phone's GPS to deliver custom mobility solutions with greater precision and a better experience than a human can -- there is concern around the impact that autonomous taxis will have on existing infrastructure, especially what they will do in between customers: park, go home, or drive around aimlessly. All of these have significant congestion implications. Such implications could incentivise upstarts aimed at offering an aggregated view of the transportation options available to customers, such as Citymapper -- whose latest subscription offer, 'Pass', exemplifies how to take this one step further by building an instantiated financial product on top of abstracted digital infrastructure (here). Until then, we will continue to dream.

Source: BusinessWire (Uber's Advanced Technologies Group $1 billion), Waymo, Techcrunch (Tesla Ridesharing App), Techcrunch (Uber vs. Tesla), Gizmodo (Citymapper Pass)

ARTIFICIAL INTELLIGENCE: Cornell students break convolutional neural networks using cardboard

Earlier this year we touched on how the digitization of the human animal continues unopposed, with symptoms all over. China is a great example of a sovereign infatuated with this more than any other, harnessing sophisticated machine vision software and swarms of CCTV cameras to strengthen the sovereign-imposed social constructs of law, power, culture, and religion, and leveraging apps to do its dirty work -- such as Chinese firm Megvii, maker of the Face++ software that has catalyzed 5,000 arrests by the Ministry of Public Security since 2016. Pretty scary stuff. But, as with any software, there are always ways to break it, and it seems the folks over at Cornell University have figured out a creative way to deceive a convolutional neural network: computer-generated "patches" that can be applied to an object in real-life video footage or still photographs to fool automated detectors and classifiers. The main use case is to generate a patch that successfully hides a person from a person detector, i.e. an attempt to circumvent surveillance systems using a piece of uniquely printed cardboard that faces the camera and covers some part of the subject's body. The accuracy of machine vision stems from the software's ability to break an image up into filters and pixels, compare it to thousands of digested images, and use statistics to generate a probabilistic classification of what is presented in that image -- which is why a carefully optimized but physically rudimentary patch can fool such sophisticated neural networks. Your move China. (READ MORE)
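
The optimization loop behind such a patch is surprisingly small. Below is a hedged, illustrative sketch against a pretrained image classifier (a stand-in for the person detector used in the actual paper), with a random tensor as a placeholder scene; the idea is simply gradient descent on the patch pixels to suppress whatever the model originally saw.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()                 # resize / crop / normalize

scene = torch.rand(1, 3, 224, 224)                # placeholder for a real photo
with torch.no_grad():
    target = model(preprocess(scene)).argmax(dim=1).item()   # class to suppress

patch = torch.rand(1, 3, 60, 60, requires_grad=True)         # the "cardboard" patch
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    patched = scene.clone()
    patched[:, :, :60, :60] = patch.clamp(0, 1)   # paste the patch into one corner
    logits = model(preprocess(patched))
    loss = logits[0, target]                      # push the originally predicted class down
    opt.zero_grad(); loss.backward(); opt.step()

print("logit for original class before/after:",
      model(preprocess(scene))[0, target].item(),
      model(preprocess(patched.detach()))[0, target].item())
```

In the Cornell work the same loop runs against a person detector, with extra terms to keep the patch printable and robust to viewing angle -- but the core trick is just this: the patch pixels are free parameters, and the model's own gradients tell you how to set them.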

INSURTECH: Softbank's $300 million double-down on digital insurer Lemonade

Last week we saw Softbank double down on its backing of Lemonade -- the renter's insurance company built for Millennials. In its Series D funding round, led by Softbank and supported by Allianz, General Catalyst, GV, OurCrowd, and Thrive Capital, the poster child of disruptive InsurTech innovation raised an additional $300 million. This latest cash injection, coupled with revenues of $60 million in 2018 and a potential $100 million in 2019, puts the company at an estimated $2 billion valuation, and is set to help fuel further growth in the US and expansion into Europe. We will remind you that Lemonade uses artificial intelligence and analytics to replace the front-office function of incumbent carriers. Simply put, its mobile app can chat with users and onboard them without much human involvement. Last year, this was personified in an attempted smear ad run by competitor State Farm, which ridiculed the use of bots and technology in insurance, mentioning "a knockoff robot created by a rival insurance company." Needless to say, the digital insurer took that lemon and made... well... lemonade -- sponsoring the ad across social media, essentially because it promoted Lemonade's AI tech. Last year, we mentioned that Softbank's portfolio of American financial services companies with modern technology stacks and cool brands, spread across different verticals, requires only one of them to become a Goldman Sachs. Could this news be a sign?

Source: DigitalInsuranceAgenda (Lemonade), Lemonade (2018 Results), Youtube (Lemonade - StateFarm Ad), Twitter (Daniel Schreiber)

BIG TECH: YouTube, Facebook and NVIDIA powering hyper-realistic human avatars

The digitization of the human animal continues unopposed, with symptoms all over. Chinese firm Megvii, maker of the Face++ software that has catalyzed 5,000 arrests by the Ministry of Public Security since 2016, is looking for an $800 million IPO. The other champion of public/private surveillance, Facebook, is working a virtual reality angle. The company is improving the technology used to model rendered avatars of human faces, which can then be displayed across virtual environments. Using multi-camera rigs and hours of facial movement footage, Facebook is building neural networks that learn how to translate realistic facial muscle movement into models. The Wired article linked below is worth exploring for the videos alone, and the uncannily realistic motion these animations possess.

One of our recurring points is that frontier technologies -- AI, AR/VR, blockchain, and IoT -- appear disparate now, but are intricately connected. Take, for example, the new feature from Google called YouTube Stories. Similar to SnapChat and Instagram, video creators can apply 3D augmented reality overlays to their faces. While this technology looks like virtual reality rendering, it is primarily a machine vision (i.e., AI) problem to anchor rendered objects to a human face realistically. To do this Google provides a developer library called ARCore, not to be confused with Apple's ARKit. Human video avatars can be further extended and customized with code -- the twenty-first-century version of personal branding.

Another take on the same issue comes from generative adversarial networks (GANs). We've discussed before how hyper-realistic images and videos can be faked by a model where one algorithm creates images and another accepts or rejects them as sufficiently realistic, with repeated evolutionary turns at this problem. Highlighted below is a recent software release from NVIDIA, where a drawing of simple shapes and lines is rendered by a GAN into what appears to be a hyper-realistic photo of a landscape. We can imagine a similar approach being applied to the output of Facebook's avatars, which still border on creepy, to ground the outcome in reality. Little details, like the reflection of a cloud on water, are hallucinated by GANs automatically, based on massive underlying visual data. Expect these digital worlds to become increasingly indistinguishable from reality, and expect to spend way more time living in them in the years to come.

Source: SCMP (Face++), Wired (Facebook Avatars), NVIDIA (GAN drawing)

BIG TECH: The macro-scams of Fyre and Theranos & the micro-scams of Google and Facebook

Spoiler alert: Fyre Festival ended up being a securities fraud that cost investors $27 million, left hundreds of workers unpaid and emotionally ravaged, and negligently put attendees in dangerous conditions. Even Blink-182 cancelled their performance! Another spoiler: Theranos ended up being a securities fraud costing investors (including Betsy DeVos!) $700 million, leaving hundreds of workers unpaid and pushing at least one to suicide, and negligently putting users of the product in dangerous medical circumstances. In both cases, the founders were young and narcissistic, optimizing the storytelling about their companies over delivering on the promised expectations. Billy McFarland used Instagram supermodels to sell a false vision. Elizabeth Holmes leveraged the Steve Jobs black turtleneck and VC groupthink to do the same.

This stuff is so easy in retrospect -- to point fingers and throw the stone. Having spent a lot of time in the early stage ecosystem, we can tell you that all founders have these devils inside them. These are the devils that let you take the risk, tell the story and defend your tribe (e.g., see Elon Musk). The issue is that these particular people could not and did not execute -- and any reasonable person in their situation would know enough to stop marketing and selling lies. We can look at crypto ICOs to date and say the same thing. Surely the people who raised over $30 billion globally, and burned nearly all of it, sold us a falsehood. Some -- like John McAfee or Brock Pierce -- had to know what was up. Or did they, perhaps believing in a zeitgeist change tilting the axis of human industry? 

The issue is asymmetric information and the intent to profit from that asymmetry. When someone sells us a broken car claiming it works great, they are selling a "lemon" -- something the US protects against with "lemon laws" that remedy damages from relying on false claims. Let's shift from these obvious macro lemons to the invisible micro lemons sold by Facebook and Google. It was revealed that Facebook was -- in the worst case -- paying 13+ year-olds $20 per month to install a research app that scraped all their activity (from messages to emails to web browsing) and provided root permissions to the phone, misusing Apple-issued enterprise certificates. Facebook should not have been able to distribute these apps to anyone other than its employees for internal use (e.g., bug testing new versions). But it did, and got its access revoked by Apple immediately.

Google did a version of this too, exchanging gift cards for spying on web traffic. As yet another example, Google's employees refused to help the company build a war-drone AI for the US Department of Defense. So instead, Google outsourced the work to Figure Eight (a human-in-the-machine company), hiring gig economy workers for as little as $1 per hour for micro-tasks like identifying images (teaching drones to see). These workers had no clue what they were doing -- and we imagine that some would exhibit the same ethical concerns that Google employees did in refusing the work. In all these tech company examples, the lemon is the un-revealed total cost. Compared to Fyre and Theranos, where we pay billions, and get nothing in return, here we are given $1 an hour or $20 per month (i.e., nothing), but we lose our privacy, agency and humanity (i.e., everything). 

Source: Wired (Project Maven), The Intercept (Google Project Maven), Gizmodo (Google micro-tasks), TechCrunch (Facebook, Google, Apple), Wikipedia (Fyre, Theranos, Lemon Law)

INSURANCE: Porsche and Mile Auto to cut premiums 40% using AI for pay-per-mile insurance.

Insurance is the holy grail for Artificial Intelligence and the Internet of Things in finance, because it requires a messy interaction with the physical world, rather than living merely in a spreadsheet, database, or blockchain. To this end, we like the news of Porsche partnering with Mile Auto on pay-per-mile insurance. There is a reasonable demand-side argument: owners of Porsches don't drive the car as a primary automobile, and would prefer to only pay insurance for the time they are actually on the road. The second argument is even more fun -- owners of Porsches don't want to be tracked via GPS or a black-box by something like Cambridge Mobile Telematics ($500MM from Softbank) or Metromile ($90MM from VCs) because they are fancy and private people. No tracking please!

How does the thing work? You pay a cheap base rate to Mile Auto, and once in a while take a picture of the odometer reading in the app. The picture is translated to numbers via a machine vision algorithm, and your per-mile variable insurance rate is calculated on the spot. The company claims this will lead to a 40% reduction in premiums for the average user. For what it's worth, we hear that the growth of renter's insurer Lemonade is similarly fueled by people who are forced to get coverage (e.g., by the landlord) but are looking for the most discounted, easy-to-manage product. What does that mean? It means that the low risks self-select out of the insurance pool, driving up the price for unsophisticated non-techies who don't drive a Porsche.
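
A hedged sketch of that flow: the OCR step uses the open-source pytesseract wrapper as a stand-in for whatever Mile Auto actually runs, and the rates are invented for illustration.

```python
# pip install pytesseract pillow  (plus the Tesseract binary itself)
import pytesseract
from PIL import Image

BASE_RATE = 29.0          # hypothetical monthly base premium, USD
PER_MILE = 0.06           # hypothetical variable rate per mile, USD

def read_odometer(photo_path: str) -> int:
    """OCR the odometer photo and keep only the digits."""
    raw = pytesseract.image_to_string(
        Image.open(photo_path),
        config="--psm 7 -c tessedit_char_whitelist=0123456789",
    )
    return int("".join(ch for ch in raw if ch.isdigit()))

def monthly_bill(prev_reading: int, photo_path: str) -> float:
    """Base rate plus a per-mile charge for the miles driven since last photo."""
    current = read_odometer(photo_path)
    miles = max(current - prev_reading, 0)
    return BASE_RATE + PER_MILE * miles

# e.g. monthly_bill(42_180, "odometer_june.jpg") -> base + 6 cents per mile driven
```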

Let's take the argument to an absurd extreme. On the developer website Programmable Web, there are 59 separate APIs that developers can use to build insurance apps and connect into underwriting engines and carrier capital. From Clearcover (affordable car insurance in your app!) to Haven Life (term life insurance on any website or application!) to Lemonade, OCBC Maternity, Qover, and a plethora of others, developers have real choice in how to weave these more digital insurance products into the attention black holes on your phone. What happens when the tech-forward customer considers only these options, and the conservative customer considers only insurance sold by agents and direct mailing? Could there be a bifurcation of risk profiles that fundamentally injures the risk-pooling function of the industry? Perfect information about risk collapses the value of hedging. Half of us will know and live in a predicted future, while the other half will pay for the ignorance.
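
A toy simulation of that bifurcation worry, with made-up numbers: if the lowest-risk half of a pool self-selects into precisely priced products, the premium the remaining pool must pay to break even jumps.

```python
import numpy as np

rng = np.random.default_rng(1)
# Per-person annual expected loss, USD (synthetic, heavy-tailed for realism)
expected_loss = rng.lognormal(mean=6.0, sigma=0.8, size=100_000)

pooled_premium = expected_loss.mean()                 # everyone pays the same
threshold = np.median(expected_loss)
low_risk = expected_loss[expected_loss <= threshold]  # self-select into precise pricing
high_risk = expected_loss[expected_loss > threshold]  # left behind in the old pool

print(f"single pooled premium:             ${pooled_premium:8.0f}")
print(f"precisely-priced low-risk average: ${low_risk.mean():8.0f}")
print(f"premium for the remaining pool:    ${high_risk.mean():8.0f}")
```

Nothing about total losses changes; only who knows what about whom -- which is the "perfect information collapses hedging" point above.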

Source: PR Newswire (Porsche), Company websites, Programmable Web (Insurance)

ARTIFICIAL INTELLIGENCE: "Financial Deadbeats" map is the worst things about Chinese Fintech

In our continued amazed gawking at the Chinese fintech landscape, we bring you the following. There is now a feature within WeChat, one of two channels for all mobile chat communication, to show a map of "financial deadbeats" around you. That's right -- a shaming visualization of people who are in financial trouble, like some sort of public sex offender list. We link to the article below, and assume that it is true despite how preposterous the whole thing seems. 

Offenses that could land you on the blacklist include serious ones like being the founder of a digital lender that collapsed with 12 million unpaid accounts, and trivial ones like being a single mother embroiled in a divorce proceeding. Once you are on the list, not only will your full name and financial information be public entertainment on this app, but access to credit, commerce, and university admission could be revoked. To add insult to injury, a special ringback tone is added to the "discredited" person's mobile phone, alerting any potential caller to their poor financial management skills.

We add to this soup the idea of algorithmic bias exhibited by AI based on training data. We've covered this issue in the past, but point to Rep. Alexandria Ocasio-Cortez (D-NY) recently bringing it into mainstream conversation. From propaganda bots to algo-racism, these arcane issues are starting to concern the broader Western polity. So when you combine historical training data reflecting past social and economic biases with social media enforcement systems, dystopia calls. One of the most important financial innovations in the West was bankruptcy, allowing entrepreneurs to fail and start over. This normalization of financial wipe-out led to an equilibrium with higher risk-taking and innovation. It is chilling to see technology being used, with potential for error and misuse, to stifle that spirit. Based on the US personal bankruptcy data below, you can see that 6 out of 1,000 people would be guilty according to WeChat, skewed in large part toward minority populations. No thanks.

Source: Abacus News (deadbeat map), Independent (deadbeats), Vox (algo-racism), On bankruptcy normalization and bankruptcy zip codes

ARTIFICIAL INTELLIGENCE: Evolution of Creative AI and WeChat's Payment Score

One ongoing, false refrain is that machine learning does not generate creative outcomes. Increasingly, this is proven wrong by the technologists and artists playing with the technology. What started several years ago as "neural style transfer" (i.e., transferring Picasso's visual DNA to any photo) has moved on to BigGAN, a machine learning algorithm that manufactures images that appear realistic but are made from machine hallucination. Notably, artists are playing not just with the realistic versions of these hallucinations, which you can see below, but with the "latent space" in between. This mathematical in-between space, explored through interpolation, is filled with abstract, surprising, and surreal outcomes. Our takeaway from these results is both (1) that machines will be far more precise in understanding and approximating humans than we assume, and (2) that machines will be far better at creativity than we assume.
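
For the curious, "exploring latent space" usually just means interpolating between the random vectors a generator consumes. A minimal sketch (NumPy only; in practice the vectors would be fed to a pretrained generator such as BigGAN, which we omit here):

```python
import numpy as np

rng = np.random.default_rng(42)
z_a, z_b = rng.normal(size=128), rng.normal(size=128)   # two latent codes

def slerp(a, b, t):
    """Spherical interpolation -- keeps intermediate vectors at a generator-friendly norm."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Ten codes that walk from image A's latent point to image B's.
path = [slerp(z_a, z_b, t) for t in np.linspace(0, 1, 10)]
# Each code would then be decoded by the generator: images = [G(z) for z in path]
```

The endpoints decode to "realistic" hallucinations; the middle of the path is where the abstract, surreal material the artists are mining tends to live.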

Fitting a financial product to a ranked "perception" of a human being matters -- especially when it is done at the scale of a billion people. Tencent's WeChat is running a new initiative called "WeChat Pay Score", analogous to Alipay's "Sesame Credit", both of which (we expect) flow to the Chinese government to make up the national social credit score. Sesame Credit looks at 5 dimensions -- safety, wealth, social, compliance, and consumption -- drawn from over 3,000 specific data points collected by the app. The WeChat version is collecting data on how users chat on the messenger, what they read and buy, where they travel, and how they run their life in general. These combined attributes grant access to perks, like waiving bank account minimums.

Listen, in a massive nation where a large swath of the population doesn't have traditional financial data or bank accounts, machine-learning-based estimates of credit-worthiness are a life saver. Not every economy comes with a FICO score and legacy credit agencies (though the Equifax breach wasn't particularly kind to incumbents). But the key question comes back to the two picture sets below. Do the machines see us like those perfectly generated, accurate pictures of people? Or like the surreal goo in abstraction? The former means distributed access to well-suited financial products, while the latter is a Black Mirror nightmare.

Source: Medium (GANs), Joel Simon (GANreeder), TechCrunch (WeChat)

INSURTECH: Rage Against the Machine and $500MM telematics Softbank investment

Let's start off with the ridiculous, and get more ridiculous. SoftBank has a lot of money to invest in category-killing fintech businesses, and one of the latest such players is Cambridge Mobile Telematics, which just received $500 million from the investor. What is it? A widget attached to a car windshield, used to collect data about the quality of a particular driver -- from speeding to braking. This data is then tied to the purchasing of insurance, where "good" drivers have access to lower-cost financial products. This is an interesting, and pioneering, example of how edge computing will create orders of magnitude more digital data that then feeds the manufacturing of finance.

A sneaking suspicion in the back of our minds is that driving data is really good for training robots how to drive. Meaning, Google and the rest of the big tech companies are all running experiments with self-driving cars on the road to collect driving data. Something simple from a telematics device certainly is not equivalent to major machine vision and radar data. But it does paint a straight line towards how self-driving car insurance should be priced. Let's repeat that. If a widget in a car tells you insurance prices based on driving performance and you combine that with an AI car, you could compare humans and machines on an apples to apples basis.

The ridiculous part is the human response to tech-first transportation companies. In London, Chinese bike-sharing company Ofo is pulling out of the city because people steal and destroy its untethered bikes. In California, aspiring freedom fighters keep throwing scooters from Bird and Lime into oceans, lakes, and rivers. Public service employees are straining to fish these venture-capital-funded wonders out of the water. In Phoenix, self-driving Waymo cars are getting their tires slashed and being assaulted by gun-wielding road-ragers (Mad Max style, we assume). All that to say that the human element in this story is allergic to being entirely prodded, measured, and automated away. Can politics catch up with SoftBank's Vision Fund, which could build Trump's wall 20 times over? We hope so.

Source: DigIn (Softbank), Gizmodo (Ofo), Slate (Bird), Business Insider (Waymo)

2019 FINTECH PREDICTION: Government and Enterprise Platforming, led by AI and Mixed Reality

Source: Images from Pexels, 2019 Keystone Predictions Deck

Over the last decade, consumer tech has undergone a cycle of platform building, user aggregation, data mining, and value extraction, resulting in GAFA monopolies. Exhaustion with Facebook and the adjacent issues of privacy and radicalization will, in our view, make it hard to build new, splintered consumer attention platforms for AI, AR/VR, and other new media from the ground up. This implies that consumer platforms based on new technologies will be much more long-tail oriented, serving niche markets with very strong fit. Communities may be passionate, but smaller.

Enterprise tech lags retail adoption by, give or take, 5 years. Similar platforming has not fully penetrated the enterprise side -- Salesforce is not yet the AI monopoly we should all fear, and Open Banking is barely a fizzle. Therefore, we expect increasing data transparency, aggregation, and monetization to occur in the enterprise, underwritten by venture capital investors. As an example, augmented reality adoption and economics will be driven primarily by municipalities, utilities, large industrial manufacturers, and the military. Similarly, artificial intelligence at scale (and its meeker cousin, Robotic Process Automation) will be directed largely at the workflows and manufacturing processes of large corporates. Don't get us wrong -- consumer AI is extremely important -- but within Financial Services, the scope for this in the corporate world is even larger.

The corollary is that the pricing pressure that started in consumer Fintech -- roboadvice (150 bps to 25 bps) or remittance (600 bps to 10 bps) -- will spill over into B2B banking, money movement, insurance, treasury management, and product manufacturing. An inevitable outcome is pressure on profit margins as prices equilibrate. Companies that are able to redesign operations around a digital chassis will be able to compete on margin with Fintech unicorns; those that cannot should exit, or retreat into more bespoke, relationship-driven business lines.

ARTIFICIAL INTELLIGENCE: Morgan Stanley, Yext and Chinese AI-first Apps.

A point is not enough. It takes two points to make a trend-line, at least in a two-dimensional space. One of the muscles we try to flex often is connecting points in different sectors and themes to see the limits of the possible. Let's contrast the following: (1) Morgan Stanley partnering with Yext for financial advisor business pages, and (2) Andreessen Horowitz's commentary on Chinese consumer artificial intelligence applications on a path to capture the hearts of teenagers everywhere. Disparate, funky, and painfully obvious.

About ten years ago, "hyper-local" became a venture catchphrase. News would go from general to local, video would go from mainstream to niche, and so on, contextualized by the GPS in our pockets. Yext is a company that won one of the battles for hyper-local content by building the retail knowledge graph that gets printed on Google Maps. Simply put, if you see a business listing for a laundromat in your Maps app, the app provider is likely licensing local data from Yext. This data then scales up into pre-made business websites, analytics, and customer funnel conversion. Morgan Stanley inked a partnership with this scale content manager to give its 15,000 financial advisors a digital presence. Controlling and printing out that content at scale, with embedded compliance, onto every Google/Apple phone is hard and smart. And perhaps physical presence is the main value of a human advisor.

Now for Chinese AI. Unlike Americans, with their hand-wringing about privacy, choice, and human agency, Chinese apps don't care. The next-generation version of Instagram and Snapchat is called TikTok, and the storied venture firm Andreessen celebrates it for taking away any human choice in what content a user sees. The algorithm is not a search support tool; it is the only and ultimate arbiter of where your attention goes. And it tends to make kids happy (unlike YouTube, which generally makes them into Twitter trolls).

So let's mesh these things together. A financial services version of TikTok with a Yext overlay would be an app tied to the physical world, perhaps through augmented reality or just simple Maps, that would decide for you which financial provider to find. It would know that you still want to talk to a person for that emotional connection, and would find one that's closest geographically and a best fit emotionally -- a two-factor optimization problem for an AI. Yext financial advisor reviews, combined with a Morgan Stanley risk/behavioral client questionnaire, could do this. Then the TikTok aspect kicks in, with the human in the loop simply being a form of physical content marketing, gaming the algorithm with a meatspace presence.
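
A toy sketch of that two-factor match (all names, weights, and scores invented for illustration):

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Advisor:
    name: str
    lat: float
    lon: float
    fit: float          # hypothetical emotional/behavioral fit score in [0, 1]

def rank(advisors, client_lat, client_lon, w_distance=0.4, w_fit=0.6):
    """Blend geographic proximity with questionnaire fit into one score."""
    def score(a):
        distance = hypot(a.lat - client_lat, a.lon - client_lon)   # crude planar distance
        proximity = 1 / (1 + distance)                             # closer -> nearer to 1
        return w_distance * proximity + w_fit * a.fit
    return sorted(advisors, key=score, reverse=True)

advisors = [Advisor("A. Rivera", 40.71, -74.00, 0.82),
            Advisor("B. Chen",   40.73, -73.99, 0.64),
            Advisor("C. Okafor", 40.65, -73.95, 0.91)]
print([a.name for a in rank(advisors, client_lat=40.72, client_lon=-74.01)])
```

In the scenario above, the fit score would come from the risk/behavioral questionnaire and review data, and the weights would themselves be learned rather than hand-set.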

Source: Finextra (Yext), Andreessen Horowitz (AI apps), FactorDaily (App downloads)