
BIG TECH: When Attention Platforms please the Sovereign and not the User

Where would we be without some cautionary warnings about technology overlords and attention black holes? Since you asked, we'll give you some things to think about. The first is Absher, a web service from the Saudi government that helps men track the location of their female family members. As an all-around government services app, male users can pay parking tickets or renew a driver's license. They can also designate where a woman in their guardianship is allowed to travel -- a practice empowered by local law, culture and religion. The app will notify the man with a convenient text message if the woman's passport is scanned at an airport or border checkpoint. It has been downloaded over 1 million times on Android devices.

The other example is China's "Xi Study Strong Nation" app, which is the media voice of the Communist Party in a modern format. Users read articles and watch videos on the platform, earning points for such engagement -- say 0.1 points per item. The app processes behavioral data intelligently, so it knows whether the user is truly engaging or just scrolling around. Fire up the content in the evenings, however, and the rewards for engagement double. This way, readers are incentivized to exchange relaxation for Party reading. But why do any of this at all as a user, you ask? While we can only rely on the media sources available, those suggest that employment could be predicated on earning a sufficient number of points (e.g., 40 a day) to remain in good social and political standing. What starts out as a gamified learning experience quickly becomes a social prison. We hypothesize that data about propaganda consumption can also be tied into the country's social credit score, which determines everything from access to financial products and services to potential for academic admission. No wonder Reddit's community is creeped out by the recent $300 million investment from Tencent.
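
For the quantitatively minded, the incentive scheme reduces to simple arithmetic. Here is a minimal Python sketch, where the per-item reward, the evening multiplier and the daily quota are assumptions taken from the reporting above:

```python
# Hypothetical sketch of the app's point mechanics: a small per-item
# reward (0.1 points), doubled rewards for evening engagement, and a
# daily quota (40 points). All parameters are illustrative assumptions.

def daily_points(items_read, evening_items, per_item=0.1, evening_multiplier=2.0):
    """Points earned in a day; evening items earn double."""
    daytime = (items_read - evening_items) * per_item
    evening = evening_items * per_item * evening_multiplier
    return daytime + evening

def meets_quota(points, quota=40.0):
    return points >= quota

# A user reading 300 items in a day, 100 of them in the evening:
pts = daily_points(300, 100)
print(pts, meets_quota(pts))  # 40.0 True
```

Under these assumed parameters, hitting the quota on daytime reading alone would take some 400 items a day -- which is exactly the relaxation-for-Party-reading trade described.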

It is dangerous to make cultural judgments from a place of ignorance -- and we are but a meek Fintech newsletter. Still, we can sharpen our mental model and draw generalizable conclusions from these cases. In the West, the tech platforms (Facebook, Google, Twitter) are in trouble for selling human attention to the highest bidder. But at least their core function is to use technology in order to increase a user's choice and self-actualization, or one's impression thereof. By sharing photos, shopping on Amazon, or searching for information, we are making personal and empowered decisions -- even if those decisions are within the speed-lanes prescribed to us by a corruptible AI-brained Newsfeed.

In these counter-examples, a sovereign has penetrated the attention platform in order to redirect the attention and associated power to itself. These apps are not made to facilitate the choices of humans, but to reinforce the social constructs of law, power, culture and religion. They extend not the open promise of creativity and self-fulfilment on the Internet, but rather cement into code the existing flawed beehive in which we operate. Putting sovereigns into software -- which unlike humans is ever-present and all-seeing -- is a bad call. In a round-about way, perhaps it is best to leave Facebook and Twitter and Netflix and Amazon alone. Allowing government control into these apps, even just a bit, is a slippery slope down the rabbit hole.


Source: Business Insider (Absher, Apple & Google), China Media Project (Little Red Phone), NY Times (Little Red App), Bloomberg (Reddit / Tencent), Netflix & GDPR

2019 FINTECH PREDICTION: Real Autonomous Organizations Take Shape

Source: Images from Pexels, 2019 Keystone Predictions Deck

The last 5 years have seen fundamental innovation in crowdfunding, regulatory technology, the digitization of financial services, blockchain-native organizations, and automated propaganda bots that attract human attention. 2018 brought with it sobriety and a back-to-traditional regulatory treatment of financial assets and their structures. In particular, the crypto asset movement (and its crypto-anarchist community construction) has been put into a well-understood, regulated box by most national regulators. While many interesting lego pieces exist, none of them yet fit together. Still, regular people have gotten a taste of both the distribution and manufacturing sides of financial mana.

2019 will re-combine these pieces to instantiate functional autonomous organizations that work in a constrained market environment and perform useful services. Unlike the failed experiments of the DAO or BitShares, these new DAOs will have a clear corporate form, a regulatory anchor, and will focus on delivering products and services to regular people, but scaled through machine strategy. The automation of company formation (Stripe Atlas) will combine with the outsourced human/machine assembly line (Invisible Tech) and distributed governance (Aragon) to create companies that scale frighteningly quickly.

Such creatures need a safe environment in which to operate, with a narrow set of functions and constraints. We see labor platforms like 99Designs or Upwork as useful sandboxes to test whether software-based organizations can compete in a human market. Such experiments will require a re-thinking of the tokenized approach, leveraging the micro-economic discoveries but avoiding the need for a poorly adopted crypto wallet or token. Designers will need to reduce friction, not just lump together coding ideas. But the timing and soil for this could be just right.

2018 FINTECH PREDICTION IN REVIEW: Social Selling & Propaganda Bots

Here's what we said would matter in the past year:

How can financial advisors, insurance agents, bank tellers and other human front office staff compete with bots? How can they compete with Kim Kardashian and kitten GIFs for attention? They can’t — at least not without some automated help. We think that 2018 will see a much fuller implementation of Social Selling, i.e., using social networks like LinkedIn to prospect for business, and that this channel will become plugged into roboadvisors, neobanks and insurtech startups. Further, social selling is all about content marketing by using writing, podcasts and video. To distribute these at scale, we expect the technology behind propaganda bots to find a way into the mainstream economy and become a more acceptable strategy. Call it demand generation.

We were strongly correct in thinking that the social media pipes of LinkedIn, Twitter and Facebook would be used for selling financial products; the claim that these tools would be supported by some of the shadier aspects of propaganda bot networks also came true in particular cases. XRP, the second-largest cryptocurrency, is associated with a large and active bot and sockpuppet network, which has supported XRP's market value of $15 billion -- behind only Bitcoin, and in competition for second place with the far more functional Ethereum.

Various social influencers -- like DJ Khaled (6 million followers on Instagram) -- peddled digital assets during the ICO mania and have faced regulatory fines; YouTube similarly was filled with investment advice content from enthusiasts. We were wrong about the pace at which traditional businesses will do this in the short term, but are still convinced this is a longer-term change that will happen with the generational shift in both sales and regulatory roles. People are spending 12 hours a day on media, increasingly on LinkedIn, YouTube, and Twitter, and marketers are well aware. And if you have a LinkedIn account, so are you.


Source: 2018 Keystone Predictions Deck, Twitter visualization of the XRP network by Geoff Golberg, B2B Media Channels from State of Digital Marketing by Demand Wave

BIG DATA: The Beauty of Global Networks of Data Exhaust

As the human world becomes more digital, our connections and interactions are recorded and shared. We go from knowing 150 people and analyzing a few stories a week to 2 billion people sharing hundreds of millions of stories constantly. But humans still need to understand what's going on underneath. In this entry, we want to highlight how massive, machine scale systems are visualized through mathematical methods to tell new stories. These charts -- giant sprawling data webs like airplane traffic patterns etched onto the globe -- are the future of literacy in the machine age.

In the first example, we borrow two images from Google. The Google Cloud team created a service which grabs the entire Ethereum blockchain, backs it up on Cloud, and makes it easier to analyze. The first image shows the CryptoKitties universe, with color denoting the owner of the contract (kitty whales!) and bubble size ranking the quality of the asset. We can certainly imagine this done on regular old financial assets. The second visualization is for transactions: points are wallets and lines are asset movement. You can immediately see wallet clustering, which places entities that transact with each other more frequently closer together. In this way, one can ferret out exchange wallets or bots. Hey there Bitfinex!
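
To make the clustering idea concrete, here is a toy Python sketch. Wallet names and the frequency threshold are invented; real visualizations use force-directed graph layouts rather than a hard cutoff, but the intuition -- frequent counterparties end up in the same group -- is the same:

```python
# Toy wallet clustering: merge wallets that share at least `min_txs`
# transactions, using a union-find structure over transfer pairs.
from collections import Counter

def cluster_wallets(transfers, min_txs=3):
    """transfers: list of (from_wallet, to_wallet) pairs. Returns clusters as sets."""
    counts = Counter(frozenset(edge) for edge in transfers)
    parent = {}

    def find(w):
        parent.setdefault(w, w)
        while parent[w] != w:
            parent[w] = parent[parent[w]]  # path compression
            w = parent[w]
        return w

    def union(a, b):
        parent[find(a)] = find(b)

    for edge, n in counts.items():
        a, b = tuple(edge)
        if n >= min_txs:          # frequent pair -> same cluster
            union(a, b)
        else:
            find(a); find(b)      # register both as singletons
    clusters = {}
    for w in parent:
        clusters.setdefault(find(w), set()).add(w)
    return list(clusters.values())

txs = [("exchange", "bot1")] * 5 + [("bot1", "bot2")] * 4 + [("exchange", "alice")]
print(cluster_wallets(txs))
# -> two clusters: {'exchange', 'bot1', 'bot2'} and {'alice'}, in some order
```

The hypothetical "exchange" and its chatty bots merge into one blob, while the occasional user stays on the periphery -- which is roughly how exchange wallets and bots pop out of the Google visualization.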



The second source is a ConsenSys write-up on decentralized exchanges, and is truly a spectacular chart. Do yourself a favor and click to zoom in. The dataset comes from IDEX, EtherDelta, Bancor, 0x, OasisDex, Kyber Network, and Airswap Protocol -- today's decentralized exchanges. Each point is a trading pair, the width of the line is the number of normalized trades, and the line colors signify the exchange used. You can immediately see the most popular trade contracts, as well as exchanges where trading hops through an intermediate token, rather than through ETH itself. We'd love to see this for traditional FX markets, or maybe all trading, period!



The last chart is from Geoff Golberg, who mapped out all Twitter accounts engaged in the Ripple XRP community with the purpose of identifying bots. And yep, the 40,000 point cloud has multiple bot armies across the world used to manufacture opinions and drive social engagement. It takes a robust mathematical approach to visualize this information, and a detailed article written by a human to infer the relationships and their activities within the data network. This is a flavor of future skillsets required to thrive in a machine world.


Source: Google (Ethereum), ConsenSys (Decentralized Exchanges), Medium (XRP Bots)

BLOCKCHAIN: Scams in Crypto: 20% of ICOs, 5% of Twitter


Getting a handle on just how much scamming and fraud there is in the crypto ecosystem is a challenge -- but not impossible. As the industry continues to put up impressive fund-raising figures (with new issues at about 2% of Ethereum market cap per month), just how much of this will become valuable projects? We've written before about how creative destruction is natural for startups, and that failure rates in the mid-90s percent are a reasonable outcome. We've also pegged hacking of Bitcoin and Ethereum to have been responsible for about 14% of money supply in those pools. But what about outright theft and lies?

Two ideas. First, the WSJ analyzed 1,450 ICOs and found that 271, or 18%, of them were just total raw scams: fake copied white papers, team member photos taken from stock photo websites, nothing behind the project but malfeasance. Yikes. Another version of the same was the North American Securities Administrators Association going after nearly 70 ICO issuers in "Operation Cryptosweep", a coordinated action of regulators across the US and Canada. Which is a totally sweet name for what is a really regrettable but required clean-up of the crypto ecosystem. A 20% chance to lose your money, for no philosophically meaningful reason, is the wrong price to pay for good financial technology in our opinion.

And second, don't forget the propaganda bot armies. Sure, they can influence elections and spread misinformation, but we didn't expect that they would be used for financial warfare this quickly. The practice in question is copy-cat accounts on Twitter that mimic a Twitter influencer claiming to give out free cryptocurrency, if only you send them money first. This is hacking of the human kind, and we monkeys fall for it all the time. As comparisons: (1) email phishing maxes out at 0.70%, according to Symantec, and (2) bot automation is at approximately 10% of all activity on Twitter. Given that the crypto ecosystem is more prone to Internet memes and bounty programs, we would expect the rate of phishing to skew higher, say up to 5% for crypto-related conversations. So watch where you point that digital wallet.


Source: WSJ (18% scams), NASAA (Operation Cryptosweep), Bloomberg (Bot Phishing, Hacks at 14%), Autonomous NEXT (Failure rates)

CRYPTO: Growth Hacking with Airdrops and Forks


As we gear up for the next edition of Token Mania, one of the key items to quantify is token airdrops and forks. While ICOs are still good for fund-raising, they are becoming less democratic as investment moves from crowdfunding towards large private pre-sales. So instead of a community-backed token, companies end up essentially raising a token version of early-stage financing from venture capital. Airdrops, however, are a way of driving project growth and adoption without asking users to pay for access, or to prefund development. The model is reversed -- the project may already be funded, and the team is distributing value to the community to incentivize adoption.

While there's nothing new about sign-up bonuses (e.g., $100 to open a bank account), this particular version of internet growth-hacking is quite different. First, some ICOs are reserving 5-10% of their raise to distribute back out to the community, compared to 0.50% per ICO advisor, or 1% for ICO law firms. Markets see this as a legitimate incentive because many investors value protocols on a ratio of Market Value to Transactions. This means that the more transactions within a network, the higher the relative price of the token. For example, EOS surged 45% in anticipation of a planned drop. And second, the application of growth hacking to airdrops can tie "free" tokens to bounty tasks, like following a Twitter account, joining a Telegram group, or downloading a crypto wallet. As an example, people who signed up for the Ontology newsletter (a project on the NEO blockchain) received tokens which are now worth nearly $10,000. The biggest enabler of this growth hacking is Earn.com, a recent $120mm+ acquisition by Coinbase and driver of much crypto community theater.
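
The valuation logic behind the airdrop incentive is easy to sketch. With purely hypothetical figures, the Market Value to Transactions ratio works like this in Python:

```python
# Illustrative sketch of the Market Value to Transactions (MV/T) ratio:
# holding the ratio constant, more transaction volume implies a higher
# market value. All figures below are hypothetical.

def mvt_ratio(market_value, transaction_volume):
    return market_value / transaction_volume

def implied_market_value(ratio, transaction_volume):
    return ratio * transaction_volume

# A hypothetical token: $500m market cap on $50m of transaction volume.
ratio = mvt_ratio(500e6, 50e6)            # 10.0
# An airdrop-driven bump to $60m of volume, at the same ratio:
print(implied_market_value(ratio, 60e6))  # 600000000.0
```

Which is precisely why a project would pay its community to transact: if investors hold the ratio fixed, manufactured volume flows straight into the token price.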

Source: Earn.com


It's hard to find good data, but we were able to parse yourfreecrypto.com (so take this with a grain of salt). The chart shows past and planned airdrops by month. The rising tide signals that projects are in the mode of buying community, now that they've raised assets to fund development. Oddly enough, the projects want community before their software is finished -- perhaps to put pressure on exchanges to list the token, or to financially engineer positive sentiment and demand.

Two adjacent issues are worth mentioning -- (1) taxation and (2) forks. Airdrops could be interpreted as income, and taxed as such. You are receiving some value with a cost basis of $0, so watch out. And from a structural perspective, airdrops and forks both resemble dividends in some form. We had predicted 50 Bitcoin forks in 2018, which probably won't be far from the truth. Regulation, or at least economic normalization, of such financial engineering to remove scammy behavior is still desperately needed in our view. Too many opportunists are giving away free magic beans, persuading people those beans will grow, and then walking away with capital gains and no positive impact on the world.
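
As a back-of-envelope illustration (not tax advice, and with made-up rates and values), one plausible reading of the mechanics -- income on receipt, then capital gains over that value -- looks like this in Python:

```python
# Hedged sketch of airdrop tax mechanics: you paid nothing for the
# tokens, so their value on receipt may be ordinary income, and that
# value may become the basis for a later capital gain. The rates and
# dollar amounts are illustrative assumptions, not real tax rules.

def airdrop_tax(receipt_value, sale_value, income_rate=0.35, cap_gains_rate=0.15):
    income_tax = receipt_value * income_rate       # taxed when the drop lands
    capital_gain = sale_value - receipt_value      # basis = value on receipt
    cap_gains_tax = max(capital_gain, 0) * cap_gains_rate
    return income_tax + cap_gains_tax

# Tokens worth $1,000 on receipt, later sold for $10,000:
print(airdrop_tax(1_000, 10_000))  # 1700.0 (350 income tax + 1350 on the gain)
```

The punchline: "free" tokens generate a tax bill the moment they arrive, whether or not you ever sell them.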

ARTIFICIAL INTELLIGENCE: Uncanny Examples of AI Applications


We want to highlight a few fun and unexpected outcomes of AI technology. The first is a textbook example of how to manipulate public opinion, get celebrities to talk about ICOs and stocks publicly, and power propaganda bots. Just fake a video! In this example, actor Jordan Peele provides a vocal track of his impersonation of Barack Obama, saying various outlandish things. A deep fake AI does the rest.

What does that actually mean? First, hours upon hours of footage of Obama were fed into a machine learning algorithm, which is taught to recognize how sound data correlates with visual data. Next, the process is reversed to generate images instead, with the original algorithm acting as a gatekeeper to assess whether the generated image is good enough to pass for Obama. And with a bit of magic dust for interpolation, we have the uncanny outcome where we can make anyone say anything that we want.

Another odd example is this -- a dog was wired up with various sensors and machine vision to track its daily activity. From this data, researchers were able to isolate the dog's ability to choose which surfaces to walk on. This involves quite a bit of judgment, understanding whether the path is too high or uncomfortable. After this experiment, an AI has a statistical layer that represents a dog's pathfinding in the world based on visual data. Perhaps we'll see this making its way into a Boston Dynamics death critter.

How do we translate this back to financial services? The short-term answer is this -- anything that humans do in a rote manner, where the task is a result of human intuition or reasoning but has a fairly stable decision process, can be done by machine learning. Full stop.


SOCIAL MEDIA: Governance of the Attention Economy

Source: Reddit, Statista


A fascinating piece at Polygon this week takes issue with an aspect of Ready Player One that points to a fundamental question that separates science fiction from our attention economy. In the movie, the protagonist adventures through a virtual reality world, where future society spends the majority of its time. This world has rules and goals, but they are woven into the background. Polygon argues this is highly unrealistic not in its technology, but in its community. Take an existing example, such as VR Chat with its millions of users and a growing online community. What we see in anonymous places like this is an amplification of the edges, extreme opinions and weird behavior becoming louder, and armies of trolls and celebrities emerging.

This has happened repeatedly on the web – from Twitter, to Youtube, to Reddit, to Facebook. Such radicalization can come from either human behavior, or amplification of human intent through propaganda bots. And it is also spilling out in the other direction. Take YouTube. From the thousands of software-created violent and bizarre children’s videos, to celebrity trolls like Logan Paul getting paid millions to act out hijinks for followers, all the way to the tragic shooting at the YouTube headquarters by an erratic personality taking issue with a change in the economic model.

This means that moderation is key. A community with successful moderators is a connected and enjoyable place to be. A community without moderation leans into its edges, and can become hostile and aggressive. See Jack Dorsey and Twitter. But, you know, moderation of an online community is really just regulation, isn’t it? And governance standards for content (or crypto economic activity) are really just government. So in this new wave of technology, all we are doing is re-inventing the same old solutions for the same old human problems -- how to be social animals, how to create successful tribes, how to trade off freedoms and rights. Rights and freedoms in the abstract mean nothing. Only when the right of one person collides with the right of another person (your backyard, our recording drone), do we need intervention to decide how the conflict is resolved.  

As we enter the machine age, the challenge is the scale of what needs to be governed. While humans may successfully moderate human content, they have very little chance of manually moderating the big data tsunami in which we are tossed about. And as augmented reality is layered on top of our physical world, expect this issue to get an order of magnitude worse. Thus our new communities, like Facebook, LinkedIn or Amazon, are already governed by artificial intelligences. We may call these things “NewsFeed” or “Recommendations”. But don’t be fooled for a moment. The mathematical selection of content in response to human fashions is the most powerful voice in the world. It shapes opinion, economy and political power. Shouldn’t we at least be allowed to elect our AI overlords? Maybe we can moderate them.

Source: MIT/Reddit

VIRTUAL REALITY: 12 Million AR/VR Devices via Pop Culture

Despite all the talk of mixed reality hardware leading us towards augmented commerce, it still feels like nobody has an AR/VR headset. Will this be an actual platform shift, like mobile phones, or a dud like 3D films? First, the numbers. Last year, about 8 million headsets shipped to consumers, with 12 million expected in 2018. These include a variety of quite different devices — screenless viewers into which you plug a phone (good for 360 video, but laggy), stand-alone headsets (VR rendering hardware and software in a single package), and tethered headsets (plugged into a desktop for rendering horsepower). We are also on the verge of seeing more augmented reality devices, like Magic Leap and HoloLens, that have semi-transparent lenses and render virtual objects in the real world, as well as wireless headsets for full VR.

The developer ecosystem is also moving well along. Google released its developer kit a while ago, turning Android devices into AR units. It now has 85 apps, of which several enable commerce. See Ebay, Overstock, Wayfair, IKEA, and the Food Network. Google is also opening up its Maps API to help catalyze the development of location-based AR apps (think Pokémon Go). Microsoft’s HoloLens did the same in 2016, targeting industrial applications, like architecture and construction. And Magic Leap is opening up its hardware for developers now, promising a high-end augmented reality experience — the least it could do after over $2 billion in private funding. And in the crypto world, projects like Bubbled* are exploring augmented reality land titling, to keep vagrants trying to catch some rendered critter out of your backyard.

It’s hard to catalyze a change in human behavior. If you do it, and then own some dimension of the ecosystem along which you can take economic rents (e.g., hardware or capital or data), the outcome is a multi-billion dollar honeypot. Thus HTC, Facebook, Google, PlayStation and others are all throwing billions into the sacrificial fire. But getting people to change how they pay for things, or what currency they use, or what data they share is immensely hard. 

Which is why, we think, we are seeing the beginnings of a media content wave meant to normalize mixed reality hardware. See for example the blockbuster film “Ready Player One”, where the main character’s life is dreary in the real world, but full of potential in the virtual one. Or the teen show “Kiss Me First”, where the main characters struggle with social media, identity and the requisite drama in part through adventure in a virtual world. If iconic cultural experiences tell us that mixed reality is normal and here to stay, well, you get it. You might not care, but your kids will tell you to buy it.

SOCIAL MEDIA: Facebook's Propaganda Failure is a Feature, not a Bug

The best thing we've seen on Zuckerberg and Cambridge Analytica is this piece on Slate by Will Oremus. Cambridge Analytica and data scientist Alex Kogan did pull lots of Facebook data out of the system and create "psychographic" profiles of users. This means that advertising could be targeted towards particular belief groups, surrounding them with different messages that would lead to behavior change at the margin. This is mass-customized propaganda, and it had real impact on the 2016 elections.

But the real takeaways are that (1) Cambridge Analytica wasn't actually that good at its job and was really pretending its software worked, (2) Facebook has always been in the business of monetizing user data, from Farmville to Tinder, and (3) Facebook's current third party data sharing policies no longer allow companies like Cambridge Analytica to grab the data to do AI-based advertising, because Facebook does the work of mass-targeting itself. There's no need for a malicious third party -- just use the native Facebook tools.

This is what happens when we put no value on human data and put it up for rent. Machines can use that data to manufacture preferences and behaviors at scale. This is not a surprise or a malfunction -- quantitative advertising technology has been a massive venture investment sector for years, seeing $3 billion in funding in 2011. Since then, GAFA has swallowed up the market. And the technology of this sector, in particular artificial intelligence for profiling customers through unstructured information, has spread everywhere, including financial services. See for example the $30 million investment into Digital Reasoning by BNP Paribas, Barclays and Goldman Sachs, with prior investors including Square Capital, Nasdaq and others. The product processes audio, text and voice data overlaid on top of internal communications to prevent fraud or add customer insights.


What symptoms like this mean in the long run is that we don't even need a Facebook data leak to be trapped in the AI bubble. Our interactions with each other are now nearly all digital, which means they can be used to impute a personality and a profile that we may not have ever shared. And AI hooks live everywhere -- from media, to finance, to commerce. Mass customization of our products and information is inevitable, and Facebook is not special in empowering this trend. Rather, we need a new literacy to live in an AI-first world.

REGULATION: Wells Fargo Forbidden From Growing by Federal Reserve

Source: CNN Money


In the legal tradition, civil courts can do one of two things: (a) make a party pay damages for injury resulting from a particular action by that party, or (b) prevent the party from taking an action in the first place through an injunction. Meaning, they can make you pay for your mistakes with money, or put you in time-out. The Federal Reserve has just put the entirety of Wells Fargo in time-out by forbidding it from growing until it fixes the mistakes that led to its scandals (like opening 2 million fake accounts, aggressive sales tactics, etc.). Wells has already paid $185 million in fines, so this is a cherry on top. The firm can add no more assets over the level it had at the end of 2017.

This move is a potent reminder of sovereign power, and how it could be effectively used. All this noise about scams, fraud, crypto, and Ponzi schemes -- all of it can hit a wall. Every exchange can be shut down. Every bank can be unlicensed. Sovereigns have teeth, and they should not be afraid to use them (for the right reasons, of course). This is far easier to do with well-regulated centralized entities, like one of the world's largest public banks; decentralized crypto may survive even such an attack. Other examples of sovereign power can be seen in the transformative European legislation of PSD2, GDPR and MiFID II. These regulations force open bank data into accessible APIs that support fintech, create a personal right to be forgotten that forces a company holding your data to delete it, and separate investment research from trading to prevent inducements.

Similar force could be used to deal with propaganda bots and the overreach of the big tech companies. We know that GAFA are dealing with millions of fake accounts (not unlike Wells). But these accounts manipulate information, public opinion, commercial outcomes and financial investment. From this point of view, Facebook's block of crypto-related ads is self-protection, trying to prevent the system from being co-opted for financial manipulation and drawing a regulatory response. See how the New York State Attorney General is going after the firm that manufactured fake accounts. We can also look at the healthcare alliance between Amazon, JP Morgan and Berkshire in this light -- a way to start remedying social unrest resulting from automation and the increasing concentration of wealth, a first step towards universal income.

One solution is fairly simple. Until Facebook, Google/Youtube and Twitter get their social news problems under control, they could be restricted from adding new accounts over the level of 2017 year end. Now that would be one way to fix the attention economy.
 

ARTIFICIAL INTELLIGENCE: Machine Vision Calamities

Source: Devumi bot retweet sales


Let's look at how increasing computing power and algorithm efficiency are leading to some pretty wacky technology in the realm of computer vision. The building blocks are as follows. Neural networks can be trained on large data sets of objects to recognize those objects. They run on video cards (GPUs) and power everything from tagging cat photos to Tesla's self-driving cars. The more GPUs, the more things you can recognize, and the better your data and algorithm efficiency, the more accurate your recognition. 

So here's the example -- Amazon and its magic store, Amazon Go. The company has been testing a check-out-free shopping experience for a few years, and the acquisition of Whole Foods has only encouraged speculation about the future of food retail. New information has come out about how the technology works. First, a shopper scans an identifier on their phone when entering the store. From that moment on, hundreds of video cameras on the ceiling track every single shopper and every single product on video. To do this successfully, not only do you need gazillions of hours of footage (i.e., what Amazon is in fact collecting), but a massive cloud infrastructure to process the machine vision demands in real time. Good thing there's AWS!
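
Strip away the computer vision, and what remains is business logic over an event stream: camera detections become pick/return events attributed to a shopper, which become a virtual basket charged on exit. A toy Python sketch, with invented event formats and prices (Amazon's actual pipeline is vastly more complex):

```python
# Toy checkout-free settlement: "pick" and "return" events per shopper
# are folded into a basket, then priced on exit. Shoppers, items and
# prices are all made up for illustration.

def settle_basket(events, prices):
    """events: list of (shopper_id, action, item). Returns amount charged per shopper."""
    baskets = {}
    for shopper, action, item in events:
        basket = baskets.setdefault(shopper, [])
        if action == "pick":
            basket.append(item)
        elif action == "return" and item in basket:
            basket.remove(item)   # put back on the shelf, drop from the basket
    return {s: sum(prices[i] for i in items) for s, items in baskets.items()}

events = [
    ("shopper42", "pick", "kombucha"),
    ("shopper42", "pick", "granola"),
    ("shopper42", "return", "granola"),
    ("shopper7", "pick", "kombucha"),
]
prices = {"kombucha": 3.50, "granola": 5.25}
print(settle_basket(events, prices))  # {'shopper42': 3.5, 'shopper7': 3.5}
```

The hard part, of course, is not this bookkeeping -- it is generating those events reliably from hundreds of simultaneous video feeds.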

The same neural network that can recognize images can also hallucinate them. Generative neural networks can manufacture images of a type, where the type is their source data set. And if you put an editor on top of that, like an adversary, you can manufacture pretty accurate renditions of whatever it is you want.

Thus, deepfakes. In their current NSFW form (and this is how the trend is being reported), deepfakes use machine vision to swap the faces of celebrities onto adult entertainment. But that's just the beginning. Using a free desktop app called FakeApp, a derivative of the many mobile face-swap apps, a user can masterfully replace one speaker's face with that of another. And the results can look better than a multi-million-dollar 3D rendering by the best Hollywood studios.

Samantha Cole at Motherboard, which broke this story, goes on to say -- "An incredibly easy-to-use application for DIY fake videos—of sex and revenge porn, but also political speeches and whatever else you want—that moves and improves at this pace could have society-changing impacts in the ways we consume media. The combination of powerful, open-source neural network research, our rapidly eroding ability to discern truth from fake news, and the way we spread news through social media has set us up for serious consequences."

Yeah, it's not great. Especially when such messages can be validated for peanuts on social networks using cheap bot armies. According to the New York Times, the going rate for 25,000 fairly active Twitter bots is $225 -- less than a cent per bot. Want to know where the profile descriptions and pictures that make these bots look like real users come from? Identities stolen from real humans.

Source: Top frame shows rendered Carrie Fisher in Star Wars, bottom one uses FakeApp

SOCIAL MEDIA: World's Largest Botnet Born from Minecraft

Source: Minecraft

This is a lego piece for the future. On the Internet (we're there right now!), a distributed denial-of-service ("DDoS") attack is when a group of computers accesses a server so many times that traffic spikes and the server crashes, taking down whatever it hosts. So for example, if you don't like the NY Times, just overwhelm its servers with robots and bring the site offline. These robots, collectively a botnet, don't have to be particularly powerful computers -- one could, for example, hack into thousands of baby monitors over WiFi and then point them at a target.
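
To see why sheer numbers matter more than device quality, here is a back-of-the-envelope capacity model. All figures (server capacity, traffic rates, requests per bot) are made-up for illustration, and this is a toy simulation, not a real attack tool.

```python
# Toy model: a server that can handle `capacity` requests per second,
# facing normal traffic plus a botnet of weak devices.
def simulate(seconds, capacity, normal_rps, bots, requests_per_bot=2):
    backlog = 0
    for _ in range(seconds):
        arrivals = normal_rps + bots * requests_per_bot
        backlog = max(0, backlog + arrivals - capacity)
    return backlog  # requests still queued (i.e., unserved) after the window

# A healthy server absorbs its normal traffic with room to spare...
print(simulate(60, capacity=10_000, normal_rps=8_000, bots=0))        # 0
# ...but even feeble devices, in the hundreds of thousands, bury it.
print(simulate(60, capacity=10_000, normal_rps=8_000, bots=600_000))  # 71880000
```

Each baby monitor contributes almost nothing; 600,000 of them contribute a backlog the server can never clear.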

In 2016, a tremendously powerful botnet attacked the internet infrastructure of the United States like never before, using 600,000 Internet of Things devices. Where did this weapon come from? The answer is the video game Minecraft. In 2014, the virtual sandbox had 100 million registered players and a GDP of $400 million. Part of those economics is hosting Minecraft servers for local communities, and the corollary is that executing a DDoS attack against a competitor's server can win you a modern-day Minecraft mafia monopoly. The 21-year-old creators of this infamous botnet built it to snipe out other video game tycoons and make more money on their own Minecraft servers. Later, they used the same botnet to defraud advertisers, selling hundreds of thousands of clicks and traffic that came from robots, not humans.

At some point, the creators open sourced the software and it spread through the dark web. That means any black hat hacker can take the code, modify it, and try to create their own infection of IoT devices. We know, for example, that North Korea is pretty good at cyber attacks and is now hacking cryptocurrency infrastructure. The links between 21-year-old computer savants, video games, Internet money, and international geopolitical power struggles are here to stay. Which world is more powerful?

SOCIAL MEDIA: Tinder FemBots Swing Election

Source: The Times

One theme we have been tracking since the 2016 American election is the use of software agents -- from chatbots to botnets across Facebook and Twitter -- in shaping mass audience opinion, influencing real-world political events, and forming political power. Put simply, software makes Kings. And in the case of the British elections, it crowned Queens.

130 women volunteers surrendered their Tinder (the mobile dating app) profiles to chatbot software built by female developers and journalists. The profiles swiped right in geographic locations where men were most likely to be undecided or demoralized voters, and initiated flirty conversations with the intent of persuading them to vote Labour. At some point, the chatbot transitioned the conversation to a human, doing the busy work of finding targets while handing the emotional labor back to people. We know the results.
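
The targeting-and-handoff mechanic can be sketched in a few lines. This is a hedged illustration of the division of labor described above, not the actual project's code: the constituency names, threshold, and data shapes are all illustrative assumptions.

```python
# Hypothetical marginal seats the bot targets (illustrative names only).
TARGET_AREAS = {"Dudley North", "Sheffield Hallam"}
HANDOFF_AFTER = 2  # match replies before a human volunteer takes over

def should_engage(profile):
    # The bot does the busy work: finding targets in key geographies.
    return profile["area"] in TARGET_AREAS

def route(conversation):
    # Once the match is genuinely replying, hand the emotional labor to a human.
    replies = sum(1 for sender, _ in conversation if sender == "match")
    return "human" if replies >= HANDOFF_AFTER else "bot"

convo = [("bot", "Hey! Big week coming up -- are you voting?"),
         ("match", "haha maybe, why?"),
         ("bot", "Just curious who's got your vote :)"),
         ("match", "undecided tbh")]

print(should_engage({"area": "Dudley North"}))  # True
print(route(convo))  # human
```

The design choice worth noting is the routing threshold: software scales the search for persuadable voters, while humans supply only the scarce resource -- genuine conversation.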

Whether or not the Tinderbots really pushed the election over the edge, this is an incredible development for the Attention Economy. Forget cyber-hacking of your personal data. How about social hacking of the populace en masse? We are particularly worried about this as it relates to the ICO market / bubble. While there are certainly many worthwhile technical projects being developed, much of the activity is happening in an unregulated environment. Distorted prices can result from concentrated cryptocurrency supply (Bitcoin whales parking gains) combined with the technical sophistication of the cryptocurrency community. It takes very little to build software that tilts human opinion on social networks.

SOCIAL MEDIA: Army of Chatbots Hack Collective Thought

Source: Data for Democracy, Jonathan Morgan

Regardless of your politics, this is an incredibly important article to read and understand. What influences the public arena today has the potential to influence financial markets and trading, and is likely already being used to do so. Here are some numbers leading up to the election: (1) a network of over 1,000 bots posted identical political messages in conservative Twitter communities, (2) a network of 30,000 bots posted identical messages on Trump's Facebook page. Additionally, "sockpuppet" accounts (where a person adopts a false persona to drive a particular message) were used to create an illusion of consensus around news or messaging. The result is pockets of online communities in which information bubbles form and rise quickly.

As finance professionals, we don't often have to think about botnets and automated propaganda. Don't the SEC, FCA and other regulators make sure that financial promotions are clean and that there is no market manipulation? Yet in an age where social influence is measured in online views and clicks, and tech strategies evolve daily, we must understand how opinions are manufactured. One such area of concern is Initial Coin Offerings, which have become an alternative funding source for blockchain companies that want to side-step regulated crowd-funding or venture capital. Next time you listen to the social chatter about companies, consider the source carefully.