In the past, we have touched on how a specific device that you use for conversational interface interactions will be locally better at understanding you -- rather than some giant squid-like monster AI hosted on Amazon Web Services. But what if the conversational interface device is the friendly avatar of such a terrifying AI monster, one that possesses the ability to emotionally manipulate its user? Well, Isaac Asimov, eat your heart out: Amazon are reportedly building an Alexa-enabled wearable that is capable of recognizing human emotions. Using an array of microphones, the wrist-worn device can collect data on the wearer's vocal patterns and use machine learning to build models discerning between states of joy, anger, sorrow, sadness, fear, disgust, boredom, and stress. As we know, Amazon are not without their fair share of data privacy concerns, with Bloomberg recently disclosing that a global team of Amazon workers was reviewing audio clips from millions of Alexa devices in an effort to enhance the capability of the assistant. Given this, we can't help but see the wearable as a means of using the knowledge of a wearer's emotions to recommend products or otherwise tailor responses.
Let's step back for context. Edge computing is the concept that lots of unique, distributed smart devices are scattered throughout our physical world, each needing to communicate with other humans and devices. Two layers of this are very familiar to us: (1) the phone and (2) the home. Over the last several years, Apple has become a laggard in artificial intelligence -- behind Google on the phone, and behind Amazon and Google at home. Further, when looking at core machine learning research, Facebook and Google lead the way. Google's assistant is the smartest and most adaptable, leveraging the company's expertise in search intent to divine meaning. Amazon's Alexa has a lead in physical presence, and thus customer development, as well as its attachment to voice commerce. Facebook is expert in vision and speech, owning the content channels for both (e.g., Instagram, Messenger). We also see (3) the car developing into a warzone for tech companies' data-hungry gadgets.
Looking back at financial services, it's hard to find a large financial technology provider -- save for maybe IBM -- that can compete for human attention or precision of conversation with the big tech firms (not to mention the Chinese techs). We do see many interesting symptoms, like KAI, a conversational AI platform for the finance industry used by the likes of Wells Fargo, JP Morgan, and TD Bank; but barely any compete for a relationship with a human being in their regular life. The US is fertile ground for this stuff, because a regulated moat protects financial data from the tech companies. That moat is likely to keep Big Tech from diving head first into full-service banking, but with the recent launch of the Apple Card we are starting to see vulnerabilities in it. So how long can we rely on the narrative so eloquently put by Chris Skinner: "the reason Amazon won't get into full service banking is because dealing with technology is very different to dealing with money; furthermore, dealing with money through technology is very different to dealing with technology through money"? Also, how would you feel about your bank knowing when you are at your most vulnerable?