
ARTIFICIAL INTELLIGENCE: OpenAI uses a game of Hide and Seek to train the next generation of sophisticated AIs

Without wading into slippery-slope arguments about the evolution of biological organisms, we know for a fact that both natural selection and competition have played a massive part in the development of the complex life forms in existence today, as well as their respective intelligence. It is this dynamic that a team at the San Francisco-based for-profit AI research lab OpenAI is trying to replicate in a virtual world, with the aim of creating a more sophisticated AI.

The experiment centers on two existing ideas: (1) multi-agent learning, essentially pitting multiple algorithms against each other to provoke emergent behaviors through competition and coordination (similar to an ant colony); and (2) reinforcement learning, the best-known, yet time- and resource-intensive, method of training an AI via trial and error (similar to teaching a child to ride a bicycle). The latter was the method initially used to train OpenAI's 'Dota 2' bot, OpenAI Five, which reportedly played the equivalent of 180 years' worth of the multiplayer online video game against itself and its past selves every single day. This was not in vain: earlier this year OpenAI Five won 7,215 games of Dota 2 against human players from around the globe, ending up with an astounding overall win rate of 99.4%.
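
For the technically inclined, the two ingredients can be sketched in a few lines. Below is a toy example (ours, not OpenAI's) of two independent Q-learning agents, a hider and a seeker, improving purely by trial and error while competing in a zero-sum guessing game. Every name and number in it is illustrative; OpenAI's agents use far larger neural-network policies trained at massive scale.

```python
# A minimal sketch (not OpenAI's code) of the two ideas above: independent
# Q-learning agents (reinforcement learning via trial and error) competing
# in a tiny zero-sum "hide and seek" game (multi-agent learning).
import random
from collections import defaultdict

ACTIONS = ["room_A", "room_B"]        # the hider picks a room, the seeker searches one
EPSILON, ALPHA = 0.1, 0.1             # exploration rate, learning rate

def choose(q_table):
    """Epsilon-greedy action selection over a single-state Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[a])

hider_q = defaultdict(float)
seeker_q = defaultdict(float)

for episode in range(50_000):
    hide, seek = choose(hider_q), choose(seeker_q)
    found = (hide == seek)
    hider_reward = -1.0 if found else 1.0     # zero-sum: one side's gain is the other's loss
    seeker_reward = -hider_reward

    # Trial-and-error updates: nudge each action value toward the observed reward.
    hider_q[hide] += ALPHA * (hider_reward - hider_q[hide])
    seeker_q[seek] += ALPHA * (seeker_reward - seeker_q[seek])

print("hider values :", dict(hider_q))
print("seeker values:", dict(seeker_q))
```

Because both sides keep adapting to each other, neither ever "wins" for good; it is exactly this arms race that, at vastly larger scale, produces the emergent strategies described below.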

Let's get back to the experiment, shall we? Through playing a simple game of hide and seek hundreds of millions of times, two opposing teams of AI agents developed complex hiding and seeking strategies involving tool use and collaboration. The emergence of such strategies offers insight into OpenAI's dominant research strategy: dramatically scale existing AI techniques and see what properties emerge. These strategies evolved from hiders learning to use boxes to block exits and barricade themselves inside rooms, to eventually exploiting glitches in their environment, such as getting rid of ramps for good by shoving them through walls at a certain angle, or seekers surfing on boxes to gain higher ground and catch the hiders. It is important to note that these are algorithms learning from one another to autonomously develop strategies to compete and cooperate, without human interference. Pretty mind-blowing, if you ask us.

This is interesting because human-relevant strategies and skills can emerge from multi-agent competition and standard reinforcement learning algorithms at scale. These results inspire confidence that in a more open-ended and diverse environment, such as financial markets, multi-agent dynamics could lead to extremely complex and human-relevant behavior, and potentially solve problems that humans do not yet know how to solve. We have found that the total impact of AI implementations across financial sectors could reach $1 trillion by 2030, equivalent to a 22% reduction in traditional costs. Breaking this down, the potential cost exposure consists of $490 billion in the front office (distribution), $350 billion in the middle office, and $200 billion in the back office (manufacturing). To learn more, or to get access to the report, click here.


ARTIFICIAL INTELLIGENCE: Synthesia proves that not all deep fakes are malicious, but for those that are, is blockchain the answer to spotting them?

Last week we touched on how convolutional neural networks can be easily duped using nothing more than a computer-generated "patch" applied to a piece of cardboard (here). This week we want to keep the theme of neural networks alive, only this time addressing the fascinating topic of deep fakes. We have discussed these before (here), touching on how hyper-realistic media formats, such as images and videos, can be faked by a model in which one algorithm creates images and another accepts or rejects them as sufficiently realistic, with repeated evolutionary turns at the problem. These algorithms are known as generative adversarial networks (GANs). Initially GANs were used in jest, to make celebrities and politicians say and do things they never did (here); over time, however, their sophistication has prompted more malicious use cases. Evidence of such malicious intent reportedly comes from China, where GANs are used to manipulate satellite images of Earth, altering how the landscape appears in order to provide strategic insight and confuse the image-processing capabilities of adversarial governments' own GANs. Think about it: GANs, much like the networks in our cardboard-patch example, can be fooled into believing that a bridge crosses an important river at a specific point. From a military perspective, this could expose human lives to unforeseen risk; the same goes for open-source data used by software to navigate autonomous vehicles across a landscape. Such malicious use cases of GANs have drawn the concern of government entities such as the US Office of the Director of National Intelligence, which explicitly noted deep fakes in its latest Threat Assessment Report (here). China has gone one step further, recently announcing a draft amendment to its Civil Code Personality Rights to reflect an outright ban on deep-fake AI face-swapping techniques. Currently, the GANs dedicated to counteracting deep fakes are purely reactionary to those dedicated to creating them, but we are seeing novel solutions harnessing blockchain technology from the likes of Amber, which protects the integrity of image and video data via "fingerprinting": a sequenced cryptographic technique applied to the bits of data associated with each frame or image, flagging any manipulation of the original file.
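
Amber has not published the details of its scheme, but the general idea of sequenced fingerprinting is easy to sketch: hash each frame's bytes, chain every hash to the one before it, and any edited frame breaks all fingerprints from that point on. The sketch below is our own illustration of that idea in a few lines of Python; the function names and chaining choices are hypothetical, not Amber's.

```python
# A rough illustration of sequenced "fingerprinting" for video frames.
# This is our own sketch of the general idea (hash-chaining frame data),
# not Amber's actual scheme; names and parameters are hypothetical.
import hashlib

def fingerprint_frames(frames: list[bytes]) -> list[str]:
    """Chain a SHA-256 hash across frames so each fingerprint depends on all prior frames."""
    chain = b""
    fingerprints = []
    for frame in frames:
        chain = hashlib.sha256(chain + frame).digest()
        fingerprints.append(chain.hex())
    return fingerprints

def first_tampered_frame(frames: list[bytes], original_fingerprints: list[str]):
    """Return the index of the first frame whose fingerprint no longer matches, or None."""
    for i, fp in enumerate(fingerprint_frames(frames)):
        if fp != original_fingerprints[i]:
            return i
    return None

# Toy usage: "record" three frames, then deep-fake the second one.
original = [b"frame-0-bytes", b"frame-1-bytes", b"frame-2-bytes"]
fingerprints = fingerprint_frames(original)

tampered = [b"frame-0-bytes", b"frame-1-FAKED", b"frame-2-bytes"]
print(first_tampered_frame(tampered, fingerprints))  # -> 1
```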

But let's end this on a good note, shall we? An AI-driven video production company called Synthesia used GANs to "internationalize" a message delivered by football icon David Beckham to raise awareness around the Malaria Must Die initiative. Synthesia's GANs were trained on Beckham's face so that nine different malaria survivors could deliver their message through his avatar in their mother tongues. The resulting campaign has over 400 million impressions globally, and provides insight into the evolution of digital video marketing, corporate communications, and advertising that leverages GANs to reduce production costs and improve engagement.


Source: arxiv.org (Deep Video Portraits Report), AmberVideo, Malaria Must Die (via YouTube)

ARTIFICIAL INTELLIGENCE: Cornell students break convolutional neural networks using cardboard

Earlier this year we touched on how the digitization of the human animal continues unopposed, with symptoms all over. China is a great example of a sovereign infatuated with this more than any other, harnessing sophisticated machine vision software and swarms of CCTV cameras to strengthen the sovereign-imposed social constructs of law, power, culture and religion, and leveraging apps to do its dirty work, such as those of Chinese firm Megvii, maker of the Face++ software that has catalyzed 5,000 arrests by the Ministry of Public Security since 2016. Pretty scary stuff. But, as with any software, there are always ways to break it, and it seems as though the folks over at Cornell University have figured out a creative way to deceive a convolutional neural network, using computer-generated "patches" that can be applied to an object in real-life video footage or still-frame photographs to fool automated detectors and classifiers. The main use case is to generate a patch that successfully hides a person from a person detector, i.e., an attempt to circumvent surveillance systems using a piece of uniquely printed cardboard that faces the camera and covers some part of the subject's body. The accuracy of machine vision stems from the software's ability to break an image up into filters and pixels, compare it to thousands of digested images, and use statistics to generate a probabilistic classification of what is presented in that image. Given this, it becomes clearer why such a rudimentary intervention could fool such sophisticated neural networks. Your move, China. (READ MORE)
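
To make the mechanics concrete, here is a toy sketch (ours, not the researchers' code) of the pipeline being attacked: a small convolutional network turns pixels into a probabilistic classification, and the "patch" is just a block of pixels pasted over part of the frame. The model below is untrained and the patch is random; in the real attack the patch pixels are optimized against a trained person detector until the "person" probability collapses.

```python
# Illustrative only: an untrained stand-in for a person detector, plus the
# mechanics of pasting a "patch" onto an input image and re-scoring it.
import torch
import torch.nn as nn

classifier = nn.Sequential(                  # stand-in for a trained person detector
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                         # logits for ["person", "no person"]
)

def person_probability(image: torch.Tensor) -> float:
    """Softmax over the logits -- the 'statistics' behind the classification."""
    with torch.no_grad():
        logits = classifier(image.unsqueeze(0))
        return torch.softmax(logits, dim=1)[0, 0].item()

image = torch.rand(3, 64, 64)                # stand-in for a CCTV still frame
patch = torch.rand(3, 16, 16)                # the printed cardboard patch

patched = image.clone()
patched[:, 24:40, 24:40] = patch             # hold the patch in front of the body

print("person prob, clean   :", person_probability(image))
print("person prob, patched :", person_probability(patched))
```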


ARTIFICIAL INTELLIGENCE: World Building AI


We were awestruck by two projects. The first allows sketches of objects to become rendered images using generative neural networks. We've shared similar versions of this idea -- from Google's open-source library of 3D-rendered models to 3D gestures that map onto a space of virtual objects -- but this particular application shows how simple it is to go from concept to a realistic(ish) environment. Yes, it's still ugly and messy, but for how long?

The second project does an even more impressive trick. It takes the visual environments rendered in the 3D bubbles (or "360 video") of Google Maps and generates background sound for the environment. Note that this isn't the actual recorded sound, but a neural-network-hallucinated auditory experience that is mathematically correlated with the image. Listen to the video for the full effect.

The melding of physical and digital spaces requires steps like this to become scalable and repeatable. We believe that once this type of technology is polished around the edges, augmented reality experiences and commerce will become profound.

ARTIFICIAL INTELLIGENCE: Neural Networks Managing Money at Man Group

Source: Soul Machines

We still need humans to figure out how to value new assets like crypto tokens. Or do we? In what seems like an incredible story, the $96 billion asset manager Man Group outlined exactly how it is already using artificial intelligence to help trade its portfolios at scale (on some products, not all). To quote directly: "By 2015 artificial intelligence was contributing roughly half the profits in one of Man’s biggest funds, the AHL Dimension Programme that now manages $5.1 billion, even though AI had control over only a small proportion of overall assets." The firm has since decided to make AI a cornerstone across trading and investment selection, running neural networks on massive data sets in both supervised and unsupervised learning approaches. This requires a big infrastructure: terabytes' worth of financial information, weather forecasts and global shipping schedules, processed on specialized computers running deep learning software.
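
Man Group has not published its models, so the snippet below is only a rough sketch of what the supervised half of such a workflow looks like in miniature: tabular features in (think momentum, weather indices, shipping volumes), a small neural network in the middle, and an out-of-sample trading signal out. All of the data and parameters are synthetic and illustrative.

```python
# A minimal, synthetic sketch of supervised learning on market-style data.
# This is our illustration of the general workflow, not Man Group's models,
# data or features -- every number and name here is made up.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_days, n_features = 2_000, 6                      # e.g. momentum, weather index, shipping volume...
X = rng.normal(size=(n_days, n_features))          # daily feature vectors
true_weights = rng.normal(size=n_features)
y = 0.01 * X @ true_weights + rng.normal(scale=0.02, size=n_days)   # noisy "next-day returns"

split = int(n_days * 0.8)                          # train on the past, evaluate on the "future"
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1_000, random_state=0)
model.fit(X[:split], y[:split])

signal = model.predict(X[split:])                  # a signal the portfolio logic could size positions from
corr = np.corrcoef(signal, y[split:])[0, 1]
print(f"out-of-sample correlation with realised returns: {corr:.2f}")
```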

Investment management product manufacturing is a particularly thorny problem for AI. Unlike computer vision (concerned with finding a cat photo in a sea of dog photos) or even lending and insurance underwriting (allowing new data to proxy for risk), figuring out which variables to solve for, or even which data to use, is much more nebulous. As the article describes, the data is noisy and outcomes are uncertain. Yet we are likely to see more of this type of machine intelligence, not less. For example, Wells Fargo is augmenting its equity research analysts with an AI bot of its own. Earnings analysis is a narrower problem, and MiFID II will push the price of human research down.

How will we visualize these artificial intelligences? While they live inside the voice interfaces of current platforms, that may not be sufficient for us to actually trust Man Group's or Wells Fargo's automated investment philosophy. Even Millennials still like to have a human face on their roboadvisor. Perhaps something like this smart hologram from VNTANA, or this virtually rendered baby from Soul Machines? These AIs will need to manufacture some empathy before selling us mutual funds!