BIG TECH: YouTube, Facebook, and NVIDIA powering hyper-realistic human avatars

The digitization of the human animal continues unopposed, with symptoms all over. Chinese firm Megvii, maker of the Face++ software that the Ministry of Public Security has used in 5,000 arrests since 2016, is seeking an $800 million IPO. The other champion of public/private surveillance, Facebook, is working a virtual reality angle: the company is improving the technology used to render avatars of human faces, which can then be displayed across virtual environments. Using multi-camera rigs and hours of facial-movement footage, Facebook trains neural networks to translate realistic facial muscle movement into animated models. The Wired article linked below is worth exploring for the videos alone, and for the uncannily realistic motion these animations possess.
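
To make the idea concrete, here is a minimal sketch, in PyTorch, of the kind of encoder-decoder network such a system might use. This is not Facebook's actual code: the layer sizes, mesh resolution, and texture dimensions are placeholder choices for illustration. A captured face image is compressed into a small latent code, which is then decoded into the avatar's mesh deformation and texture.

```python
# Hypothetical sketch of an avatar encoder-decoder (not Facebook's code):
# a face image is encoded into a compact latent code, then decoded into
# per-vertex mesh offsets and an RGB texture for the rendered avatar.
import torch
import torch.nn as nn

class AvatarCodec(nn.Module):
    def __init__(self, latent_dim=256, num_vertices=5000):  # arbitrary sizes
        super().__init__()
        self.num_vertices = num_vertices
        # Encoder: face image (3 x 256 x 256) -> latent expression code
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 x 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 x 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 x 32
            nn.Flatten(),
            nn.Linear(128 * 32 * 32, latent_dim),
        )
        # Decoder heads: latent code -> mesh offsets and a small texture map
        self.mesh_head = nn.Linear(latent_dim, num_vertices * 3)
        self.texture_head = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32 x 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid()  # 64 x 64 RGB
        )

    def forward(self, face_image):
        z = self.encoder(face_image)
        mesh_offsets = self.mesh_head(z).view(-1, self.num_vertices, 3)
        texture = self.texture_head(z)
        return mesh_offsets, texture
```

In a capture setup like the one described above, the training targets (ground-truth meshes and textures) would come from the multi-camera rig, so the decoder can later animate the avatar from live footage alone.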

One of our recurring points is that frontier technologies -- AI, AR/VR, blockchain, and IoT -- appear disparate now, but are intricately connected. Take, for example, the new feature from Google called YouTube Stories. As on Snapchat and Instagram, video creators can apply 3D augmented reality overlays to their faces. While this looks like virtual reality rendering, anchoring rendered objects realistically to a human face is primarily a machine vision (i.e., AI) problem. To solve it, Google provides a developer library called ARCore, not to be confused with Apple's ARKit. Human video avatars can be further extended and customized with code -- the twenty-first-century version of personal branding.
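
As a rough illustration of that machine vision problem (not ARCore's implementation), the sketch below estimates head pose from a handful of 2D facial landmarks and a canonical 3D face model using OpenCV's solvePnP, then projects a virtual anchor point back into the camera frame. The landmark coordinates are assumed to come from any off-the-shelf face-landmark detector; the 3D model values are rough approximations for illustration.

```python
# Illustrative sketch only -- not Google's ARCore code. Shows the core
# machine-vision step behind face AR: recover the head pose from 2D facial
# landmarks plus a rough 3D face model, then project a virtual overlay point
# (here, floating 30 mm above the nose tip) into the camera frame.
import numpy as np
import cv2

# Approximate 3D positions (in mm) of a few canonical facial landmarks,
# expressed in the face model's own coordinate frame (nose tip at origin).
FACE_MODEL_3D = np.array([
    [  0.0,   0.0,   0.0],   # nose tip
    [  0.0, -63.6, -12.5],   # chin
    [-43.3,  32.7, -26.0],   # left eye outer corner
    [ 43.3,  32.7, -26.0],   # right eye outer corner
    [-28.9, -28.9, -24.1],   # left mouth corner
    [ 28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def anchor_overlay(frame, landmarks_2d):
    """Given a video frame and the 2D pixel locations of the six landmarks
    above (from any face-landmark detector), return the pixel where an
    overlay anchored above the nose tip should be drawn."""
    h, w = frame.shape[:2]
    focal = w  # crude pinhole-camera approximation
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0,     0,     1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(FACE_MODEL_3D, landmarks_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    # A virtual anchor point 30 mm above the nose tip, in face coordinates.
    anchor_3d = np.array([[0.0, 30.0, 0.0]])
    projected, _ = cv2.projectPoints(anchor_3d, rvec, tvec,
                                     camera_matrix, dist_coeffs)
    return tuple(projected[0, 0].astype(int))
```

Libraries like ARCore hide this plumbing, handing developers a tracked face mesh and anchor poses every frame so they can simply attach 3D assets to the face.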

Another take on the same issue comes from generative adversarial networks (GANs). We've discussed before how hyper-realistic images and videos can be faked by a model in which one network generates images and another accepts or rejects them as sufficiently realistic, with the two improving over repeated rounds. Highlighted below is a recent software release from NVIDIA, in which a drawing of simple shapes and lines is rendered by a GAN into what appears to be a hyper-realistic photo of a landscape. We can imagine a similar approach being applied to the output of Facebook's avatars, which still border on creepy, to ground the result in reality. Little details, like the reflection of a cloud on water, are hallucinated by the GAN automatically, based on massive underlying visual data. Expect these digital worlds to become increasingly indistinguishable from reality, and expect to spend far more time living in them in the years to come.
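
A toy version of that adversarial loop is sketched below in PyTorch, at a far smaller scale than anything NVIDIA ships: the generator proposes images from random noise, the discriminator scores them against real photos, and each network's loss pushes the other to improve. NVIDIA's landscape tool additionally conditions the generator on the user's sketch, but the core two-player training is the same. All sizes here are arbitrary illustrative choices.

```python
# Minimal GAN training step (toy scale, not NVIDIA's release): a generator
# learns to produce images that a discriminator cannot tell apart from
# real ones, while the discriminator learns to tell them apart.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # arbitrary toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),        # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the image is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    """One adversarial round on a batch of real images (shape: batch x 784)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator: accept real images, reject generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    loss_d = (bce(discriminator(real_images), real_labels) +
              bce(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: produce images the discriminator labels as real.
    noise = torch.randn(batch, latent_dim)
    loss_g = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Running train_step repeatedly over batches of real landscape photos is what gradually teaches a generator the kind of hallucinated detail, like cloud reflections on water, described above.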


Source: SCMP (Face++), Wired (Facebook Avatars), NVIDIA (GAN drawing)