We were awestruck by two projects. The first turns sketches of objects into rendered images using generative neural networks. We've shared similar versions of this idea before -- from Google's open-source library of 3D rendered models to 3D gestures that map onto a space of virtual objects -- but this particular application shows how simple it has become to go from concept to realistic(ish) environment. Yes, the results are still ugly and messy, but for how long?

The second project pulls off an even more impressive trick. It takes the visual environments rendered in the 3D bubbles (or "360 video") of Google Maps and generates background sound for each scene. Note that this isn't actual recorded audio, but a neural-network-hallucinated auditory experience that is mathematically correlated with the image. Listen to the video for the full effect.

The melding of physical and digital spaces requires steps like these to become scalable and repeatable. We believe that once the rough edges of this technology are polished off, augmented reality experiences and commerce will become profound.