Christopher A. Choquette-Choo

I am a researcher in the CleverHans Lab at the Vector Institute, where I study new attacks and defenses in adversarial machine learning. I will be joining Google as an AI Resident on the Mobile Devices and On-Device models team, whose work includes federated learning and differential privacy.

I've worked on deep learning projects, namely DualDNN while at Intel Corp., differential privacy in collaboration with Google's TensorFlow Privacy, and AutoML while at Georgian Partners LP. Now, I research adversarial machine learning with Professor Nicolas Papernot. I graduated with honors from Engineering Science, with a major in Robotics, at the University of Toronto, where I held a full scholarship for leadership and academic achievement.

Research

I'm interested in machine learning and its applications, including adversarial ML, data privacy, deep learning, and natural language processing.

Label-Only Membership Inference Attacks
Christopher Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
arXiv pre-print 2020

We show that confidence masking -- defensive obfuscation of confidence vectors -- is not a viable defense against membership inference, by demonstrating that label-only attacks can bypass it and match typical confidence-vector attacks. In an extensive evaluation of defenses, we further show that differential privacy can defend against average- and worst-case membership inference attacks.

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher Choquette-Choo, Nicolas Papernot
arXiv pre-print 2019

How can we enable an IP owner to reliably claim ownership of a stolen model? We explore entangling watermarks with task data so that stolen models learn these watermarks as well. Our improved watermarks enable IP owners to claim ownership with 95% confidence in fewer than 10 queries to the stolen model.

Machine Unlearning
Lucas Bourtoule*, Varun Chandrasekaran*, Christopher Choquette-Choo*, Hengrui Jia*, Adelin Travers*, Baiwu Zhang*, David Lie, Nicolas Papernot
arXiv pre-print 2019
* Equal contribution. The names are ordered alphabetically.

How can we minimize the accuracy degradation and computational cost of true unlearning -- retraining a model from scratch? We define a stricter unlearning requirement and an approach that drastically reduces these costs in both uniform and distribution-aware settings.

A multi-label, dual-output deep neural network for automated bug triaging
Christopher A. Choquette-Choo, David Sheldon, Jonny Proppe, John Alphonso-Gibbs, Harsha Gupta
ICMLA, 2019  |  DOI: 10.1109/ICMLA.2019.00161

By leveraging a model's own knowledge of an analogous lower-dimensional solution space, we achieve higher accuracy in a higher-dimensional solution space.

Invited Talks
Adversarial Machine Learning: Ensuring Security and Privacy of ML Models and Sensitive Data.
Presented at the REWORK Responsible AI Summit 2019.
Professional Activities
Reviewer for 2021 IEEE Symposium on Security and Privacy
Reviewer for 2021 Journal of Machine Learning Research

This website is based on Jon Barron's.