A team of researchers at the University of Maryland, College Park, working with Facebook AI, has developed a real-life "invisibility cloak": a sweater that renders its wearer a ghost to common person-detection machine learning models.
"This paper studies the art and science of creating adversarial attacks on object detectors," the team explains of its work. "Most work on real-world adversarial attacks has focused on classifiers, which assign a holistic label to an entire image, rather than detectors which localize objects within an image. Detectors work by considering thousands of 'priors' (potential bounding boxes) within the image with different locations, sizes, and aspect ratios. To fool an object detector, an adversarial example must fool every prior in the image, which is much more difficult than fooling the single output of a classifier."
More difficult, certainly, but as the researchers have shown, not impossible: as part of a broader investigation into adversarial attacks on detectors, the team succeeded in creating a piece of clothing with the unusual effect of making its wearer entirely invisible to a person-detection model.
"This stylish pullover is a great way to stay warm this winter," the team writes, "whether in the office or on-the-go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns the evade most common object detectors. In [our] demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective."
Initially, the team's work focused on simulated attacks: generating an "adversarial pattern" that could be applied to detected objects within a given image to prevent the model from recognizing them. The key was the creation of a "universal adversarial patch": a single pattern that could be applied over any object to hide it from the model. While it's easy to swap patterns out in simulation, it's harder in the real world — especially when you've printed the pattern onto a sweater.
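The "universal" part is what makes the patch printable: one fixed pattern must lower the detector's confidence across many different images at once. A minimal sketch of that training loop, using a toy linear detector in place of a real model like YOLOv2 (every name and number here is an illustrative assumption, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a person detector: a linear scorer over a flattened
# 8x8 "image", squashed to a detection confidence in [0, 1].
D = 64
w = 0.1 * rng.normal(size=D)

def confidence(img):
    return 1.0 / (1.0 + np.exp(-img @ w))

PATCH = np.arange(16)  # pixel indices the patch overwrites

def apply_patch(img, patch):
    out = img.copy()
    out[PATCH] = patch
    return out

# A batch of different "images", each containing a person to hide.
imgs = rng.normal(loc=1.0, size=(32, D))
patch = rng.normal(size=16)

initial_conf = np.mean([confidence(apply_patch(x, patch)) for x in imgs])

# Universal patch training: the SAME patch is pasted into every image,
# and gradient descent minimizes the detector's mean confidence over
# the whole batch (a real attack backpropagates through the detector;
# here the model is linear, so the gradient has a closed form).
lr = 0.5
for _ in range(300):
    patched = np.stack([apply_patch(x, patch) for x in imgs])
    p = 1.0 / (1.0 + np.exp(-(patched @ w)))
    grad = ((p * (1 - p))[:, None] * w[PATCH]).mean(axis=0)
    patch -= lr * grad

final_conf = np.mean([confidence(apply_patch(x, patch)) for x in imgs])
```

Because the patch is shared across the batch, the optimizer can't tailor it to any one image — which is exactly what lets the finished pattern be printed once and worn anywhere.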