We’re excited to announce the launch of a new competition focusing on the security and privacy of machine learning (ML) systems. Machine learning has already become a key enabler in many products and services, and this trend is likely to continue. It is therefore critical to understand the security and privacy guarantees provided by state-of-the-art ML algorithms – indeed this is one of Microsoft’s Responsible AI Principles.
Fundamentally, ML models need data on which they can be trained. This training data can be drawn from a variety of sources, including both public and non-public data. In many domains, ML models achieve better performance if they are trained on specialized or domain-specific data. This specialized data is often not directly available to the users of the model (e.g., to protect the privacy of the data contributors or the intellectual property of the model owner). Ideally, having access to an ML model should not reveal which individual data records were used for training the model. However, recent work on membership inference has demonstrated that this is not always the case.
Membership inference is a widely studied class of threats against ML models. Given access to a model, the goal is to infer whether a given data record was used to train that model. Depending on the nature of the training data, a successful membership inference attack could have serious negative consequences. For example, a model for predicting the next word in a sentence might be trained on a large dataset of emails and documents from a company. If the model were vulnerable to membership inference, any user of the model could guess candidate sentences and use the model to test whether these were used for training the model, thus indicating that they appeared in the company’s emails or documents. Similarly, a model for classifying medical images might be trained on a dataset of real images from patients at a specific hospital. A successful membership inference attack could allow users of the model to test whether a specific person’s images were included in the training dataset, and thus learn that they were a patient at that hospital.
Importantly, membership inference itself may not be the attacker’s final goal. For example, the attacker may actually want to infer sensitive attributes about individual training data records (attribute inference) or even reconstruct records from the training data (reconstruction attacks). Note, though, that in these attacks the attacker is attempting to learn more information about the training data than in membership inference, where they only need to infer a single bit (member or non-member). Therefore, if we can show that a particular model is resilient against membership inference, this is a strong indication that the model is also resilient against these other, more devastating attacks.
Several different types of membership inference attacks of varying complexity have been demonstrated in the scientific literature. For example, in a simple case, the model might have overfitted to its training data, so that it outputs higher-confidence predictions when queried on training records than when queried on records it has not seen during training. Recognizing this, an attacker could simply query the model with the records of interest, establish a threshold on the model’s confidence, and infer that outputs with confidence above the threshold are likely members of the training data. In this setting, the attacker only needs the ability to query the model with specific inputs and observe the output. On the other hand, the attacker may have access to the internals of the model, e.g., because the model was deployed to edge devices, which might enable even more sophisticated attack strategies.
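To make the simple case concrete, here is a minimal sketch of such a confidence-threshold attack. The model object, its scikit-learn-style predict_proba interface, and the threshold value are illustrative assumptions, not part of any competition API:

```python
def confidence_threshold_attack(model, records, threshold=0.9):
    """Infer membership from the model's confidence in its top class.

    Exploits overfitting: training records tend to receive
    higher-confidence predictions than unseen records.
    """
    # Black-box access: query the model on each record of interest.
    probabilities = model.predict_proba(records)  # shape: (n_records, n_classes)

    # The model's confidence is its probability for the most likely class.
    confidences = probabilities.max(axis=1)

    # Records scored above the threshold are inferred to be members.
    return confidences > threshold
```

In practice, the threshold would be calibrated, for example on records the attacker knows to be non-members, rather than fixed a priori.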
MICO is a public competition that aims to bring together and compare state-of-the-art techniques for membership inference. The competition consists of four separate tasks: membership inference against classification models for images, text, and tabular data, as well as a special Differential Privacy (DP) distinguisher category spanning all three domains. For each task, we have trained 600 neural network models on different splits of a public dataset. For each model, we provide a set of challenge points drawn from the same dataset. Exactly half of the challenge points are members (i.e. they were used for training the model) and the other half are non-members. The goal for participants is to determine which of these challenge points are members and which are non-members. Participants have full access to all the models, which allows them to make unlimited arbitrary queries to each model and to inspect the parameters of the models. This represents the strongest possible attacker capabilities.
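As an illustration of what this white-box access makes possible, the sketch below scores each challenge point by the model’s per-example loss, since points seen during training typically incur lower loss. The PyTorch model interface and the tensor layout of the challenge points are assumptions for illustration; the actual data format is documented in the competition’s starting kits:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_based_scores(model, challenge_inputs, challenge_labels):
    """Return a membership score per challenge point.

    Lower loss suggests the point was seen during training, so scores
    are negated losses: higher score means 'more likely a member'.
    """
    model.eval()
    logits = model(challenge_inputs)  # white-box forward pass
    losses = F.cross_entropy(logits, challenge_labels, reduction="none")
    return -losses

# Hypothetical usage: since exactly half of the challenge points are
# members, thresholding at the median score is a natural baseline.
# scores = loss_based_scores(model, x_challenge, y_challenge)
# predicted_members = scores >= scores.median()
```

Because participants can also inspect model parameters, far more sophisticated strategies are possible, e.g., exploiting gradients or training shadow models; the above is only a baseline.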
All our models were trained on widely used public datasets, so there is no risk to any private or personal data. This competition has been reviewed in accordance with Microsoft’s open source and responsible AI guidelines.
Please visit the main MICO competition page on GitHub. From there you will find links to the four different tasks. These are hosted on the CodaLab platform, which we use for processing submissions and keeping track of scores. The GitHub repository also contains a “starting kit” notebook for each task, which demonstrates how to download the competition data, run a basic membership inference attack, and submit your results on CodaLab.
To make this competition accessible to the widest possible audience, each task will be scored separately. This means that you can participate in as many or as few tasks as you like, without affecting your performance on the scoreboard.
Although there is a significant body of scientific literature describing various membership inference attacks (and defenses), there is to date no common benchmark for evaluating and comparing these different techniques. One of our aims in this competition is to provide such a benchmark. This is a non-trivial undertaking, as our dataset consists of 2,400 trained models, totaling more than 400 GB in size, with an estimated training time of 600 GPU hours. We are fortunate to have the resources to create such a dataset, so our hope is that this will benefit the research community, even beyond this competition. After the competition has concluded, we plan to release the entire dataset, along with the challenge point labels and training scripts, for anyone to use.
More generally, we believe that public competitions, such as MICO, have an important role to play in defining best practices and even future standards for digital privacy. Public competitions are already well established in various fields. For example, organizations such as NIST use them in the evaluation and standardization of cryptographic algorithms. In machine learning, there is a thriving tradition of public competitions to advance state-of-the-art model performance on different tasks and datasets. We see similar value in using competitions to advance the science of trustworthy machine learning. Having a common benchmark for evaluating attacks is the first step towards this goal, and the second is to bring together, compare, and discuss the state-of-the-art approaches in this field. For these reasons, we welcome and encourage you to participate in MICO!
MICO is organized by Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Béguelin, and Andrew Paverd from Microsoft, and Ana-Maria Cretu from Imperial College London.