Unveiling Imperceptible Adversarial Perturbations for Top-$k$ Multi-Label Learning in the Presence of Unreliable Measures (arXiv:2309.00007v1 [cs.CV])

Adversarial learning has gained significant attention due to the success of deep neural networks. However, existing adversarial attacks in multi-label learning focus only on visual imperceptibility and overlook perceptibility under performance measures such as Precision@$k$ and mAP@$k$. In other words, when a well-trained multi-label classifier suddenly performs poorly on certain samples, it becomes evident to the victim that the decline stems from an attack rather than from a fault in the model itself. Consequently, an effective multi-label adversarial attack should not only deceive visual perception but also evade measure monitoring. This paper introduces the concept of measure imperceptibility and proposes a novel loss function for generating adversarial perturbations that achieve both visual and measure imperceptibility. Additionally, an efficient algorithm with a convex formulation is developed to optimize this loss. Extensive experiments on benchmark datasets, including PASCAL VOC 2012, MS COCO, and NUS-WIDE, demonstrate the superiority of the proposed method in attacking top-$k$ multi-label systems.
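For readers unfamiliar with the measures the abstract refers to, below is a minimal sketch (not taken from the paper) of how a monitoring pipeline might compute per-sample Precision@$k$ and truncated average precision, whose mean over samples gives mAP@$k$. The function names and the truncated-AP normalization are assumptions; exact definitions of mAP@$k$ vary slightly across benchmarks.

```python
import numpy as np

def precision_at_k(scores, labels, k):
    """Precision@k for one sample: the fraction of the k highest-scored
    classes that are true labels."""
    topk = np.argsort(scores)[::-1][:k]
    return labels[topk].sum() / k

def ap_at_k(scores, labels, k):
    """Average precision truncated at rank k for one sample; averaging
    this over all samples gives mAP@k."""
    topk = np.argsort(scores)[::-1][:k]
    hits, precisions = 0, []
    for rank, cls in enumerate(topk, start=1):
        if labels[cls] == 1:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

# Toy example: 5 classes, two of which are true labels.
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.7])  # classifier confidences
labels = np.array([1, 0, 1, 0, 0])            # ground-truth multi-labels
print(precision_at_k(scores, labels, k=3))    # 2 of the top 3 are correct -> 0.667
print(ap_at_k(scores, labels, k=3))           # (1/1 + 2/2) / 2 = 1.0
```

A measure-imperceptible perturbation, in the abstract's terminology, must achieve the attacker's goal while keeping statistics like these within their normal range, so that such monitoring raises no alarm.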