INTERNET OF THINGS & APIs: The Internet of Things wasn't really a thing

Look, we love our buzzwords as much as you do, but this one has gone on long enough. The Internet of Things (IoT) is a buzzword that has become synonymous with product design, connectivity, infrastructure, and the future. Pretty broad, right? To reinforce the point, IoT can be simply defined as a network of interconnected digital devices that exchange data. Doesn't that sound like the definition of the internet? Effectively, the Internet of Things is purely a term of scale, in which the "things" are any devices that can be connected to the internet. That scale has resulted in a mass of tech companies -- such as Google, Apple, LG, Samsung, and Huawei -- each building and protecting its own IoT solution vertical with which to compete. Examples of verticals that fall into the scope of IoT include automated temperature, lighting, and security controls for your home, or fleet tracking and driver safety controls for a logistics company. For a consumer, having multiple apps to control the functions of their home is no better than using the analogue controls IoT sought to replace. For regulators, ensuring the safety, reliability, standardization, and efficiency of each solution has massively hindered the deployment of IoT across the globe.

The assumption that the future of technology relies on faster, better, newer, and more hardware is debatable -- something that big tech companies like Apple are starting to realize. Rather, the future of technology should be centered on machines working together to make magic. This is achieved via the gatekeepers enabling the solutions -- Application Programming Interfaces (APIs). Essentially, APIs store and dispense both data and services for hardware and software, giving the data source(s), the data consumer(s), and the tech manufacturer(s) the opportunity to compete within the foreign land of tech platforms (i.e., app stores and e-commerce). This generally means prices fall and economic rents go to the fewer winners that have strong APIs, integrations, and a nimble balance sheet. Consumer-facing services such as Zapier, IFTTT, and Signalpattern form part of an emerging segment, allowing consumers and businesses to connect devices and services together to build truly innovative solutions. Similarly, payments fintech InstaReM launched an API-based digital B2B platform enabling companies to create their own branded credit cards. Via APIs to InstaReM's card-issuing platform, customers are said to have greater control over the creation, distribution, and management of card accounts -- a Visa-supported parallel to Brex.

For this network of interconnected, data-exchanging devices to succeed, device manufacturers need to open their APIs like items on a menu, letting users assemble them into the perfect meal. The level of inter-connectivity we are talking about here is, for example, the machinery in a factory stopping whenever a maintenance person swipes into the main floor, or your car's navigation depending on your calendar, financial well-being/budget, and personal well-being (taking scenic routes when stressed). That is truly an internet of API-enabled things. READ MORE.
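As a concrete (and hedged) illustration of the "menu" idea: glue services like IFTTT expose simple webhook endpoints that any device can fire. The sketch below builds a request against an IFTTT-style Maker Webhook URL; the event name, key, and payload fields are invented placeholders, not anything from the article.

```python
import json
import urllib.request

IFTTT_KEY = "YOUR_WEBHOOK_KEY"  # hypothetical placeholder, not a real key


def build_trigger(event: str, key: str, payload: dict) -> urllib.request.Request:
    """Build a POST request for an IFTTT-style Maker Webhook trigger."""
    url = f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}, method="POST"
    )


# Example: a thermostat reading crosses a threshold, so we fire an event
# that another service (say, a smart vent) is subscribed to.
req = build_trigger("temp_high", IFTTT_KEY, {"value1": 27.5})
```

In a real deployment, `urllib.request.urlopen(req)` would actually fire the event; here we only construct it.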

iot_api_platform.png

Source: Nordic APIs (APIs power the Internet of Things)

ARTIFICIAL INTELLIGENCE: Proof that we have been training AI fakes to stab us in the back

In the 1933 film Duck Soup, actor Chico Marx famously asked, "who ya gonna believe, me or your own eyes?" Fairly meaningless in the 30s, but today it's more relevant than ever. Let us explain. We know how the ever-expanding capacities of computing power and algorithm efficiency are leading to some pretty wacky technology in the realm of computer vision. Deepfakes are one of the more terrifying outcomes. A deepfake is a fraudulent copy of an authentic image, video, or sound clip, manipulated to create an erroneous interpretation of the events captured by the authentic media. The 'deep' refers to the 'deep learning' capability of the artificially intelligent algorithm trained to manifest the most realistic version of the faked media. Real-world examples include: former US president Barack Obama saying some outlandish things, Facebook founder Mark Zuckerberg admitting to the privacy failings of the social media platform while promoting an art installation, and Speaker of the US House of Representatives Nancy Pelosi made to look incompetent and unfit for office.

Videos like these aren't proof, of course, that deepfakes are going to destroy our notion of truth and evidence. But they do show that these concerns are not just theoretical, and that this technology -- like any other -- is slowly going to be adopted by malicious actors. Put another way, we usually tend to think that perception -- the evidence of our senses (sight, smell, taste, etc.) -- provides pretty strong justification of reality. If something is seen with our own eyes, we normally tend to believe it (e.g., a photograph). By comparison, third-party claims about the senses -- which philosophers call "testimony" -- provide some justification, but often not quite as much as perception (e.g., a painting of a scene). In reality, we know our senses can be deceptive, but that's less likely than other people (malicious actors) deceiving us.

What we saw last week took this to a whole new level. A suspected spy infiltrated some significant Washington-based political networks on the social network LinkedIn, using an AI-generated profile picture to fool existing members of those networks. Katie Jones was the alias used to connect with a number of policy experts, including a US senator's aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve. Although there's evidence to suggest that LinkedIn has been a hotbed for large-scale, low-risk espionage by the Chinese government, this instance is unique because a generative adversarial network (GAN) -- an AI method popularized by websites like ThisPersonDoesNotExist.com -- was used to create the account's fake picture.

Here's the kicker: these GANs are trained by the mundane administrative tasks we all perform when using the internet on a day-to-day basis. Don't believe us? Take Google's human verification service "Captcha" -- chances are you've completed one of these at some point. The purpose of these goes beyond proving you are not a piece of software that is unable to recognise all the shopfronts in 9 images. For instance: being asked to type out a blurry word could help Google Books' search function match real text in uploaded books, rewriting skewed numbers could help train Google Street View to read house numbers for Google Maps, and selecting all the images that contain a car could help Google's self-driving car company Waymo improve its algorithms to prevent accidents.
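A crude sketch of how such human micro-labels become training data: collect several votes per image and keep the majority answer, sending ties back for another round of review. This illustrates the general pattern only -- it is our invention, not Google's actual pipeline.

```python
from collections import Counter


def majority_label(votes):
    """Return the most common label, or None on a tie (needs more votes)."""
    counts = Counter(votes).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]


# Hypothetical votes gathered from captcha-style prompts:
votes_per_image = {
    "img_001": ["car", "car", "truck"],       # clear majority
    "img_002": ["storefront", "cafe"],        # tie -> re-queue for labeling
}
labels = {img: majority_label(v) for img, v in votes_per_image.items()}
```

The agreed-upon labels then become supervised training examples for the downstream model.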

The buck doesn't stop with Google either: human-assisted AI is explicitly the modus operandi at Amazon's Mechanical Turk (MTurk) platform, which rewards humans for assisting with tasks beyond the capability of certain AI algorithms, such as highlighting key words in an email or rewriting difficult-to-read numbers from photographs. The name Mechanical Turk stems from an 18th-century "automaton" that appeared to be a self-playing chess master; in fact, it was a mechanical illusion, with a human hidden under the desk of the machine to operate its arms. Clever, huh?!
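For flavor, here is roughly what requesting such a micro-task looks like via MTurk's CreateHIT operation (the same parameters that boto3's `mturk.create_hit` accepts). The task, reward, and question XML below are invented placeholders; we only build the parameter dict rather than call AWS.

```python
def build_hit(title, description, reward_usd, question_xml, assignments=3):
    """Parameters for MTurk's CreateHIT operation, as they would be passed
    to boto3's mturk.create_hit(**params). Values are illustrative only."""
    return {
        "Title": title,
        "Description": description,
        "Reward": f"{reward_usd:.2f}",          # MTurk expects a USD string
        "MaxAssignments": assignments,           # ask several workers, then vote
        "LifetimeInSeconds": 24 * 3600,          # how long the task stays listed
        "AssignmentDurationInSeconds": 300,      # time allotted per worker
        "Question": question_xml,                # XML form shown to the worker
    }


hit = build_hit(
    "Transcribe a house number",
    "Type the digits visible in the photo",
    0.05,
    "<QuestionForm>...</QuestionForm>",  # placeholder, not a full form
)
```

With real credentials, `boto3.client("mturk").create_hit(**hit)` would publish the task to workers.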

Ever since the financial crisis of 2008, all activity within a regulated financial institution must meet the strict compliance and ethics standards enforced by the regulator of that jurisdiction. That a tool like LinkedIn, with over 500 million members, can be used by malicious actors to solicit insider information, or as a vehicle for corporate espionage, should be of grave concern to all financial institutions, big and small. What's worse is that neither the actors nor the AI behind these LinkedIn profiles can be traced and prosecuted for such illicit activity, especially when private or government institutions are able to launch thousands of such profiles at a time.

maxresdefault.jpg
54646.PNG
800.jpeg
captcha_examples.jpg
amazon-mechanical-turk-website-screenshot.png

Source: Nancy Pelosi video (via YouTube), Spy AI (via Associated Press), Google Captcha (via Aalto Blogs), Amazon MTurk

ARTIFICIAL INTELLIGENCE: Amazon's new wearable edges us closer to a reality of emotionally manipulative financial institutions

In the past, we have touched on how a specific device that you use for conversational interface interactions will be locally better at understanding you -- rather than some giant squid-like monster AI hosted on Amazon Web Services. But what if the conversational interface device is the friendly avatar of such a terrifying AI monster, one that possesses the ability to emotionally manipulate its user? Well, Isaac Asimov, eat your heart out: Amazon is reportedly building an Alexa-enabled wearable that is capable of recognizing human emotions. Using an array of microphones, the wrist-worn device can collect data on the wearer's vocal patterns and use machine learning to build models discerning between states of joy, anger, sorrow, sadness, fear, disgust, boredom, and stress. As we know, Amazon is not without its fair share of data privacy concerns, with Bloomberg recently disclosing that a global team of Amazon workers was reviewing audio clips from millions of Alexa devices in an effort to enhance the capability of the assistant. Given this, we can't help but see this as a means to use knowledge of a wearer's emotions to recommend products or otherwise tailor responses.
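Amazon has not published how the device's models work, but a toy version of "vocal pattern" analysis can be sketched from first principles: compute a loudness proxy (RMS energy) and an agitation proxy (zero-crossing rate) per audio frame, then apply a crude rule. Real systems would use learned models over far richer spectral features; everything below is illustrative.

```python
import math


def rms_energy(frame):
    """Loudness proxy: root-mean-square amplitude of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))


def zero_crossing_rate(frame):
    """Agitation proxy: fraction of adjacent samples that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)


def crude_mood(frame, energy_threshold=0.5, zcr_threshold=0.3):
    """Toy rule: loud, rapidly oscillating speech reads as 'agitated'."""
    if rms_energy(frame) > energy_threshold and zero_crossing_rate(frame) > zcr_threshold:
        return "agitated"
    return "calm"
```

A production classifier would replace `crude_mood` with a model trained on labeled emotional speech, but the feature-extraction step has the same shape.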

Let's step back for context. Edge computing is the concept that there are lots of unique distributed smart devices scattered throughout our physical world, each needing to communicate with other humans and devices. Two layers of this are very familiar to us: (1) the phone and (2) the home. Apple has become a laggard in artificial intelligence -- behind Google on the phone, and behind Amazon and Google at home -- over the last several years. Further, when looking at core machine learning research, Facebook and Google lead the way. Google's assistant is the smartest and most adaptable, leveraging the company's expertise in search intent to divine meaning. Amazon's Alexa has a lead in physical presence, and thus customer development, as well as its attachment to voice commerce. Facebook is expert in vision and speech, owning the content channels for both (e.g., Instagram, Messenger). We also see (3) the car as a developing warzone for tech companies' data-hungry gadgets.

Looking back at financial services, it's hard to find a large financial technology provider -- save for maybe IBM -- that can compete for human attention or precision of conversation with the big tech firms (not to mention the Chinese techs). We do see many interesting symptoms, like KAI, a conversational AI platform for the finance industry used by the likes of Wells Fargo, JP Morgan, and TD Bank; but barely any compete for a relationship with a human being in their regular life. The US is fertile ground for this stuff, because a regulated moat protects financial data from the tech companies, which is likely to keep Big Tech from diving head first into full service banking. But with the recent launch of the Apple Card, we are starting to see vulnerabilities in that moat. So how long can we rely on the narrative so eloquently put by Chris Skinner: "the reason Amazon won't get into full service banking is because dealing with technology is very different to dealing with money; furthermore, dealing with money through technology is very different to dealing with technology through money"? Also, how would you feel about your bank knowing when you are at your most vulnerable?
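Kasisto has not open-sourced KAI's natural language understanding, but the shape of the problem -- mapping a customer utterance to a banking intent -- can be sketched with a deliberately naive keyword router. The intents and keywords below are our invention, not KAI's.

```python
# Hypothetical intents and trigger keywords for a banking assistant.
INTENTS = {
    "balance": ["balance", "how much", "funds"],
    "transfer": ["send", "transfer", "pay"],
    "spend_insight": ["spent", "spending", "statement"],
}


def route(utterance):
    """Naive keyword routing -- a toy stand-in for a real NLU pipeline,
    which would use trained intent classifiers and entity extraction."""
    text = utterance.lower()
    scores = {intent: sum(kw in text for kw in kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

Production systems replace keyword counts with statistical models precisely because utterances like "how much did I send last month?" straddle several intents at once.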

54654654.PNG
65465.jpg

Source: Bloomberg Article, KAI Platform (via Kasisto)

AUGMENTED REALITY: Government and Military Use Will Drive Magic Leap, HoloLens Adoption

Last week, we spent a bunch of time talking about how consumer VR as a standalone platform is not turning out to be as good as iTunes, the iPhone, YouTube, or the Web. One problem was the form factor; another was the lack of pirated content -- though games and adult content will slowly address this. This week, we want to point to IoT (Internet of Things) and Augmented Reality (AR). Do these themes have a reason for being, and are they an opportunity for a major retooling of our interaction with technology? Here, we think the answer is a stronger Yes -- but for a surprising reason: government and military use.

The Web was popularized through consumer use and now powers our digital selves. But it was brought to life, and into initial use as ARPANET in the 1960s, through funding by the US Department of Defense. Imagine unlimited funding, with life-and-death use cases, from a nationally-embedded client base. This is also what the Chinese government is doing in relation to AI, blockchain, and quantum computing. Now, to the meat. First, Bloomberg reported that Magic Leap and Microsoft (with its HoloLens) are bidding on a $500 million augmented reality Army project. The order is for 100,000 headsets which would run the Integrated Visual Augmentation System, overlaying intelligence on the physical world. These would be used both for training and in live combat. Manufacturing these types of devices would create an economic base on which consumer versions could be built, as well as condition a whole generation to see AR headsets as normal.
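At its simplest, "overlaying intelligence on the physical world" reduces to projecting a 3D point into the wearer's screen coordinates. A minimal pinhole-camera sketch, with invented intrinsics -- this is textbook projection geometry, not the Integrated Visual Augmentation System's actual pipeline:

```python
def project(point3d, focal=800.0, cx=640.0, cy=360.0):
    """Project a camera-space 3D point (x, y, z; z metres forward) onto a
    pixel grid using a simple pinhole model. Intrinsics are made up:
    focal length in pixels, and (cx, cy) as the screen centre of a
    1280x720 display."""
    x, y, z = point3d
    if z <= 0:
        return None  # behind the camera; nothing to draw
    return (cx + focal * x / z, cy + focal * y / z)


# A waypoint 2 m to the right and 10 m ahead lands right of screen centre:
pixel = project((2.0, 0.0, 10.0))
```

A real headset adds lens distortion, head-pose tracking, and per-eye rendering on top of this same projection step.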

Another data point supporting this idea is the investment by local government entities (e.g., UK councils) in digital twins of their neighborhoods for urban planning. In particular, Liverpool is running a £3.5 million IoT program that combines the rollout of a 5G network with innovative health and social care services for residents. Of the 11 proofs of concept in place, examples include a video connection between vulnerable people at home and their pharmacy, AR maps that bridge physical distance and combat social isolation, and sensors that monitor whether older adults are dehydrated. Similarly, earlier this year, Bournemouth was mapped into 3D, incorporating 30 different data sets, also as part of planning its 5G network. These live 3D maps, which could then be projected into the real world via AR devices, are a social good and should be part of centralized infrastructure. This in turn can further move the needle on consumer adoption and market maturity.

0e00e9e2-6a1b-45c4-a23f-ac332da0d5dc[1].png
c3690017-d12b-4b95-b3e9-57a42b60a336[1].jpg

Source: Magic Leap (Bloomberg, Next Reality, Daily Mail), UK Authority (Liverpool, Bournemouth), Wikipedia (ARPANET)

INSURANCE: $300MM Acquisition of IoT Middleware by Munich Re

a7b21e18-a677-488a-a9b4-3a5bcff23fb0[1].png

Let's move into the physical world. Very, very physical. Reinsurance company Munich Re has just written a $300 million check to acquire Relayr, a startup in which it previously invested. Relayr digitizes industrial manufacturing: it installs IoT sensors on the various machines in a production line to capture information at the edge, layers on artificial intelligence that helps maintain these machines before a breakdown happens, and then integrates this information into manufacturing enterprise software through middleware. As in the consumer world, data exhaust can power the automation of human intelligence, but it must first come from the digital twins of physical objects.
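Relayr's models are proprietary, but the "maintain before breakdown" idea can be sketched as simple edge analytics: flag any sensor reading that drifts several standard deviations from its recent history. The window size and threshold below are arbitrary illustrative choices.

```python
from collections import deque
import statistics


class VibrationMonitor:
    """Flag readings more than k standard deviations from the recent mean --
    a stand-in for the kind of edge analytics described in the article."""

    def __init__(self, window=50, k=3.0):
        self.readings = deque(maxlen=window)  # rolling history of readings
        self.k = k

    def update(self, value):
        alarm = False
        if len(self.readings) >= 10:  # need some history before judging
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            alarm = stdev > 0 and abs(value - mean) > self.k * stdev
        self.readings.append(value)
        return alarm


mon = VibrationMonitor(window=50, k=3.0)
normal = [mon.update(1.0 if i % 2 == 0 else 1.2) for i in range(20)]  # steady hum
spike_alarm = mon.update(9.0)  # a bearing starts to shake
```

In a full system, an alarm like this would trigger a maintenance work order in the enterprise software layer rather than just returning a boolean.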

The phrase that stuck with us was that the company's solution "reduces the risk of failure". For an insurance company that wants to minimize losses and improve underwriting accuracy (i.e., know the risks better and take better bets on average), more data and transparency go directly to the bottom line. Insurance companies are data science companies (more so even than advertisers), so we think they are in a unique position to apply AI to the physical world. A cute question: will Google underwrite insurance for its own self-driving car, or can an insurance company start selling third-party cars with built-in IoT insurance after learning all the risks?
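To see why better risk data goes straight to the bottom line, consider the standard frequency-times-severity premium arithmetic. The probabilities, severity, and loading ratios below are illustrative numbers of our own, not Munich Re's.

```python
def pure_premium(claim_prob, avg_severity):
    """Expected annual loss per policy: claim frequency times severity."""
    return claim_prob * avg_severity


def gross_premium(pure, expense_ratio=0.25, margin=0.05):
    """Load the pure premium for expenses and profit (illustrative ratios)."""
    return pure / (1 - expense_ratio - margin)


# Sensor data that halves the estimated failure probability halves the
# premium the insurer needs to charge to break even:
before = gross_premium(pure_premium(0.04, 10_000))  # no sensors on the machine
after = gross_premium(pure_premium(0.02, 10_000))   # monitored, failures caught early
```

The same arithmetic run in reverse is the underwriting edge: whoever sees the sensor data first can price the risk more finely than the competition.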

We point to a few more symptoms in the sources below. Oxbow Partners, an insurtech research firm, just highlighted Geospatial Insight as an interesting machine vision implementation on top of satellite data. The resulting data sets include oil tanker inventory, retail parking lot car counts, crop yield predictions, and real estate infrastructure value. At least 50% of the business comes from insurance companies, with the rest going to investors and strategy teams. Oxbow suggests that the main barrier to success is the integration of such data into workflow and middleware -- something that Relayr had clearly gotten right. If you're hungry for more Insurtech, check out below a top 49 trends article from Tearsheet, and a screenshot of a chatbot from Hi Marley, a private label insurance automated customer agent platform.

09977efd-1725-4259-98c9-dc7d0901eebd[2].jpg
afd15714-f63b-44c8-9ad8-8dc8bf31daa4[1].png
b97a3a07-6c49-4ec1-8480-0f5b3f69e64a[1].png

Source: Companies (Relayr, Geospatial Insight), Business Insider (Relayr), DigIn (Hi Marley), Tearsheet (49 trends)

INSURANCE: Catch 22 of Insurtech Transformations

ed024bac-5aa1-44d0-a955-8efea621a951[1].png

When we looked at insurance in our artificial intelligence deep dive, nearly $400 billion of cost was up for grabs as a result of the platform shift. But there's a catch: getting to the other side can look pretty much impossible for an incumbent. Ripping out legacy systems that support billions of dollars of revenue and sailing in an unproven direction is not a popular choice for a public company CEO. So instead of jumping onto an entirely modern architecture, companies like John Hancock partner with transformation consultants / software providers like Infosys for multi-year, $10-100 million projects. Further, according to the innovation officer of MetLife, only 30% of execs really "get it", and even then you only have 18-24 months of runway before the company starts to ask for operating results. Compare that with the 5-7 years afforded to a new startup to get off the ground. And we know that startups are much faster at execution.

On the other hand, we see the symptoms of fundamental change all across the space. For example, Allstate had to send 3,000 employees to assess the damage from hurricanes Harvey and Irma last year. But it also deployed drones (trained on 5,000 hours of prior flight time), which provided needed image data before the humans even got to the location. How soon will drone pilots and video FaceTime agents replace the traveling adjusters? Similarly, companies like Roost are deploying telematics in homes, retrofitting old smoke alarms to detect water damage, weather issues, and other dangers, with connected data streams into smartphones and monitoring systems. If this data is real-time and builds out the IoT/AI corpus, what need is there for human assessment in the majority of cases?

The idea that terabytes of daily data from smart systems can interact with legacy insurance infrastructure seems untenable. But in 18-24 months of execution, the best outcome is that these products become mere bolt-ons. Compare that to Lemonade's approach: open-sourcing its insurance policies on GitHub and practicing radical transparency about its metrics and approach. Further, entirely new cyber risks are emerging around which traditional insurers have no systems at all. Crypto projects like Coinsurance (paying out in case an Initial Coin Offering fails to list on an exchange) and Coin Governance System (paying out in case of an ICO scam) are rethinking the bundling of financial products that are becoming top of mind for many Millennials, 5-10% of whom own cryptocurrencies. Such new entrants will need to reach enough scale for the old guard to believe that the world is changing -- see you in 5 years.

Source: DigIn (30% Execs, Drones), Coverager (John Hancock, Roost), Lemonade, Coin Governance System

FINTECH: Minority Report Retail

Source: Bloomberg, Farfetch

Sometimes we need a little bit of Minority Report science fiction to get the imagination going. And sometimes, that science fiction is already built and can be seen in a functional retail store. In what may be interpreted as an Amazon-taunting art exhibit, luxury boutique Farfetch has created a tech-forward store experience that hints at the future of connected retail.

Here are the functional concepts: (1) a customer profile for store staff, activated by the customer's smartphone upon entering the store, (2) a smart clothing rack that uses ultrasound to sense which items customers pick up and put back, creating a wishlist/history, (3) a holographic display that renders shoes whose features can be customized in real time, and (4) a connected mirror that serves as both concierge and payment terminal. With just a little bit of extrapolation, it is clear that these concepts could connect with identity on the blockchain, payments through financial institutions, and mixed reality wearables from high-tech companies. Watch the video for the full effect.
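Concept (2) is essentially an event-stream problem. A hedged sketch, with an event schema we invented (Farfetch has not published one), of how pick-up/put-back signals might become a wishlist:

```python
def wishlist_from_events(events):
    """Infer interest from (customer, item, action) events off the sensing
    rack. Heuristic: an item kept in hand, or picked up more than once,
    stays on the wishlist; one quick pick-up-and-put-back is dropped."""
    pickups = {}
    wishlist = {}
    for customer, item, action in events:
        key = (customer, item)
        if action == "pickup":
            pickups[key] = pickups.get(key, 0) + 1
            wishlist.setdefault(customer, set()).add(item)
        elif action == "putback" and pickups.get(key, 0) <= 1:
            # returned after a single glance: weak signal, drop it
            wishlist.get(customer, set()).discard(item)
    return wishlist


events = [
    ("a", "jacket", "pickup"), ("a", "jacket", "putback"),  # one glance
    ("a", "scarf", "pickup"), ("a", "scarf", "putback"),
    ("a", "scarf", "pickup"),                               # second look
]
wishlist = wishlist_from_events(events)
```

Feeding that history to the staff profile from concept (1) is what turns a sensor gimmick into a sales tool.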