ARTIFICIAL INTELLIGENCE: Cornell students break convolutional neural networks using cardboard

Earlier this year we touched on how the digitization of the human animal continues unopposed, with symptoms all over. No sovereign is more infatuated with this than China, which harnesses sophisticated machine vision software and swarms of CCTV cameras to strengthen the sovereign-imposed social constructs of law, power, culture and religion. It also leverages private firms to do its dirty work: Chinese company Megvii, maker of the facial-recognition software Face++, has catalyzed 5,000 arrests by the Ministry of Public Security since 2016. Pretty scary stuff. But, as with any software, there are always ways to break it, and it seems the folks over at Cornell University have figured out a creative way to deceive a convolutional neural network: computer-generated "patches" that can be applied to an object in live video footage or still photographs to fool automated detectors and classifiers. The headline use case is a patch that successfully hides a person from a person detector, i.e., circumventing a surveillance system with a uniquely printed piece of cardboard that faces the camera and covers part of the subject's body. The accuracy of machine vision stems from the software passing an image through layers of learned filters, comparing the resulting features against patterns distilled from thousands of digested training images, and producing a probabilistic classification of what the image contains. The patch attacks exactly that process: the cardboard itself is rudimentary, but the pattern printed on it is adversarially optimized, pixel by pixel, to scramble those statistics, which is why such a simple prop can fool such a sophisticated neural network. Your move China. (READ MORE)
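For the curious, here is a minimal sketch of the optimization behind such a patch, in PyTorch. Everything in it is illustrative rather than taken from the paper: a pretrained ResNet-18 classifier stands in for the person detector the researchers actually attacked, the image batch is a random placeholder, and `target_class` is a hypothetical index for the class being hidden. The core idea is the same, though: treat the patch pixels as trainable parameters and run gradient descent to drive down the network's confidence in the class you want to suppress.

```python
# Minimal adversarial-patch sketch (assumptions: PyTorch + torchvision
# installed; a pretrained ImageNet classifier stands in for the detector;
# random tensors stand in for real photos of the target class).
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the patch, not the network

patch = torch.rand(3, 64, 64, requires_grad=True)  # the "cardboard" pattern
opt = torch.optim.Adam([patch], lr=0.01)
target_class = 0  # hypothetical: the class the detector should stop seeing

def apply_patch(images, patch, x=80, y=80):
    """Paste the patch onto every image in the batch at a fixed location."""
    patched = images.clone()
    patched[:, :, y:y + patch.shape[1], x:x + patch.shape[2]] = patch
    return patched

for step in range(200):
    images = torch.rand(8, 3, 224, 224)  # placeholder; use real photos here
    out = model(apply_patch(images, patch.clamp(0, 1)))
    # Minimize the log-probability of the target class, i.e. make the
    # network *less* confident that the target is present.
    loss = F.log_softmax(out, dim=1)[:, target_class].mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep pixel values in a printable range
```

Print the converged pattern, hold it in front of the camera, and the detector's carefully learned statistics get nudged toward "nothing to see here." Attacking a real detector rather than a classifier mostly means swapping the loss for the detector's objectness/class score and adding transformations (scale, rotation, lighting) so the patch survives the trip into the physical world.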
