WIP: Towards a Certifiably Robust Defense for Multi-label Classifiers Against Adversarial Patches

Dennis Jacob, Chong Xiang, Prateek Mittal (Princeton University)

The advent of deep learning has brought vast improvements to computer vision systems and facilitated the development of self-driving vehicles. Nevertheless, these models remain susceptible to adversarial attacks. Of particular importance to the research community are patch attacks, which have been shown to be realizable in the physical world. While certifiable defenses against patch attacks have been developed for tasks such as single-label classification, no such defense exists for multi-label classification. In this work, we propose one: Multi-Label PatchCleanser, an extension of the current state-of-the-art (SOTA) certified defense for single-label classification. We find that our approach achieves non-trivial robustness on the MSCOCO 2014 validation dataset while maintaining high clean performance. Additionally, we leverage a key constraint between patch and object locations to develop a novel procedure that improves upon baseline robust performance.
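The abstract does not spell out the defense, but the single-label PatchCleanser method it extends is based on double masking: a set of masks is chosen so that at least one mask fully covers any possible patch, each mask is applied in a first round, and disagreeing masks are re-checked with a second mask before a final prediction is committed. The sketch below is an illustrative, assumption-laden adaptation that simply runs this double-masking procedure independently for each label of a multi-label model; the mask-set construction, the `predict_labels` interface, and the per-label treatment are all assumptions for illustration and may differ from the authors' Multi-Label PatchCleanser (in particular, the patch/object-location constraint mentioned in the abstract is not modeled here).

```python
# Illustrative sketch only: a per-label adaptation of PatchCleanser-style
# double masking for multi-label classification. Mask generation, the
# predict_labels interface, and the per-label loop are assumptions; the
# actual Multi-Label PatchCleanser procedure is not given in the abstract.
import numpy as np


def generate_masks(img_hw, mask_hw, stride):
    """Enumerate binary masks (0 = masked-out region) over a sliding grid.

    A certified defense additionally requires the mask set to satisfy a
    covering property w.r.t. the patch size; that is not enforced here.
    """
    H, W = img_hw
    mh, mw = mask_hw
    masks = []
    for y in range(0, H - mh + 1, stride):
        for x in range(0, W - mw + 1, stride):
            m = np.ones((H, W), dtype=np.float32)
            m[y:y + mh, x:x + mw] = 0.0
            masks.append(m)
    return masks


def double_mask_label(image, masks, predict_label):
    """PatchCleanser-style double masking for ONE binary label.

    predict_label(img) -> 0/1 prediction of the underlying model for this
    label on a (possibly masked) H x W x C image.
    """
    first_round = [predict_label(image * m[..., None]) for m in masks]
    majority = int(round(np.mean(first_round)))
    if all(p == majority for p in first_round):
        return majority  # unanimous first-round agreement
    # Second round: re-check every disagreeing mask with a second mask.
    for m1, p1 in zip(masks, first_round):
        if p1 == majority:
            continue
        second = [predict_label(image * (m1 * m2)[..., None]) for m2 in masks]
        if all(p == p1 for p in second):
            return p1  # the disagreer is consistent under double masking
    return majority


def multilabel_double_mask(image, masks, predict_labels, num_labels):
    """Run double masking independently per label.

    predict_labels(img) -> length-num_labels binary vector (assumed model API).
    """
    return np.array([
        double_mask_label(image, masks,
                          lambda img, k=k: predict_labels(img)[k])
        for k in range(num_labels)
    ])
```

Under these assumptions, the per-label loop inherits the single-label certification argument label by label: if the mask set covers every patch location, an adversarial patch is fully removed by at least one mask in each round, so a label whose double-masked predictions agree cannot have been flipped by the patch.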



Source: https://www.ndss-symposium.org/ndss-paper/auto-draft-533/