AISCC 2024 Accepted Papers

Below are the papers accepted for presentation at the upcoming Artificial Intelligence System with Confidential Computing (AISCC 2024) Workshop. Each paper will be allotted 8 minutes for presentation, and presentations will follow the order listed below. We encourage attendees to review the schedule in advance so they don't miss any presentations of interest.

Enhancing Security Event Detection on Twitter with Graph-based Tweet Embedding
Jian Cui (Indiana University Bloomington)

Differentially Private Dataset Condensation
Tianhang Zheng (University of Missouri-Kansas City), Baochun Li (University of Toronto)

Heterogeneous Graph Pre-training Based Model for Secure and Efficient Prediction of Default Risk Propagation among Bond Issuers
Xurui Li (Fudan University), Xin Shan (Bank of Shanghai), Wenhao Yin (Shanghai SAIC Finance Co., Ltd)

Towards a Certifiably Robust Defense for Multi-label Classifiers Against Adversarial Patches
Dennis Jacob, Chong Xiang, Prateek Mittal (Princeton University)

Auditing Artist Style Pirate in Text-to-image Generation Models
Linkang Du (Zhejiang University), Zheng Zhu (Zhejiang University), Min Chen (CISPA Helmholtz Center for Information Security), Shouling Ji (Zhejiang University), Peng Cheng (Zhejiang University), Jiming Chen (Zhejiang University), Zhikun Zhang (Stanford)

Opinion Manipulation of Controversial Topics Based on Adversarial Ranking Attacks
Zhuo Chen, Jiawei Liu, Haotan Liu (Wuhan University)

Strengthening Privacy in Robust Federated Learning through Secure Aggregation
Tianyue Chu, Devriş İşler (IMDEA Networks Institute & Universidad Carlos III de Madrid), Nikolaos Laoutaris (IMDEA Networks Institute)

Aligning Confidential Computing with Cloud-native ML Platforms
Angelo Ruocco, Chris Porter, Claudio Carvalho, Daniele Buono, Derren Dunn, Hubertus Franke, James Bottomley, Marcio Silva, Mengmei Ye, Niteesh Dubey, Tobin Feldman-Fitzthum (IBM Research)

Exploring the Influence of Prompts in LLMs for Security-Related Tasks
Weiheng Bai (University of Minnesota), Qiushi Wu (IBM Research), Kefu Wu, Kangjie Lu (University of Minnesota)

Facilitating Threat Modeling by Leveraging Large Language Models
Isra Elsharef, Zhen Zeng (University of Wisconsin-Milwaukee), Zhongshu Gu (IBM Research)

Benchmarking Transferable Adversarial Attacks
Zhibo Jin (The University of Sydney), Jiayu Zhang (Suzhou Yierqi), Zhiyu Zhu, Huaming Chen (The University of Sydney)

PANDORA: Jailbreak GPTs by Retrieval Augmented Generation Poisoning
Gelei Deng, Yi Liu (Nanyang Technological University), Yuekang Li (The University of New South Wales), Kailong Wang (Huazhong University of Science and Technology), Tianwei Zhang, Yang Liu (Nanyang Technological University)


Source: https://www.ndss-symposium.org/ndss2024/co-located-events/aiscc/accepted-papers/#Towards%20a%20Certifiably%20Robust%20Defense%20for%20Multi-label%20Classifiers%20Against%20Adversarial%20Patches