Christopher A. Choquette-Choo

I am an AI Resident (2020) at Google. I research model interpretability, federated learning, and differential privacy.


Previously, I worked on adversarial machine learning research in the CleverHans Lab at the Vector Institute, where I explored privacy attacks and defenses for machine learning, confidential and private collaborative learning protocols, and protecting machine learning models from intellectual-property theft.

I also worked on DualDNN while at Intel Corp., and on differential privacy and AutoML while at Georgian Partners LP. I graduated with honors from the Engineering Science program at the University of Toronto, majoring in Robotics, where I held a full scholarship for leadership and academic achievement.

Email: choquette[dot]christopher[at]gmail[dot]com

Research

I'm broadly interested in machine learning, with a focus on its intersection with security and privacy. Specific areas include adversarial ML, data privacy, deep learning, and collaborative learning. See my Google Scholar for an up-to-date list of publications.

Proof-of-Learning: Definitions
Hengrui Jia^, Mohammad Yaghini^, Christopher A. Choquette-Choo*, Natalie Dullerud*, Anvith Thudi*, Varun Chandrasekaran, Nicolas Papernot
Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA, 2021
^,* Equal contribution. The names are ordered alphabetically.

How can we prove that a model owner actually trained their machine learning model? We define the problem of Proof-of-Learning (PoL) in machine learning and provide a mechanism that is robust to several spoofing attacks. This protocol enables model ownership verification and robustness against byzantine workers in a distributed learning setting.
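
A minimal sketch of the verification idea, under stated assumptions rather than the paper's exact protocol: the prover logs periodic weight checkpoints together with the data batches used between them, and a verifier re-executes a few randomly chosen training segments, accepting each one if the reproduced weights land close to the logged checkpoint. The helper `train_steps`, the tolerance, and the number of queried segments are illustrative assumptions.

    # Illustrative Proof-of-Learning-style spot check; not the paper's exact protocol.
    import numpy as np

    def verify_segment(w_start, w_end_claimed, batches, train_steps, tol=1e-3):
        """Re-execute one logged training segment and compare to the claimed endpoint.

        train_steps: a caller-supplied function (weights, batches) -> weights that
        reproduces the prover's training updates on the logged batches.
        """
        w_reproduced = train_steps(w_start, batches)
        # Accept if the reproduced weights fall within a small tolerance of the logged
        # checkpoint (exact reproduction is limited by hardware nondeterminism).
        return np.linalg.norm(w_reproduced - w_end_claimed) <= tol

    def verify_proof(checkpoints, batch_log, train_steps, num_queries=3, rng=None):
        """Spot-check a few randomly chosen segments of the logged training run."""
        rng = rng or np.random.default_rng(0)
        segment_ids = rng.choice(len(checkpoints) - 1, size=num_queries, replace=False)
        return all(
            verify_segment(checkpoints[i], checkpoints[i + 1], batch_log[i], train_steps)
            for i in segment_ids
        )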

CaPC Learning: Confidential and Private Collaborative Learning
Christopher A. Choquette-Choo*, Natalie Dullerud*, Adam Dziedzic*, Yunxiang Zhang*, Somesh Jha, Nicolas Papernot, Xiao Wang
9th International Conference on Learning Representations (ICLR), 2021
* Equal contribution. The names are ordered alphabetically.

We design a protocol for collaborative learning that ensures both the privacy of the training data and the confidentiality of the test data. Our protocol can improve model accuracy by several percentage points, especially on subpopulations where a model underperforms, with a modest privacy budget (epsilon) of less than 20. Unlike prior work, we enable collaborative learning among participants with heterogeneous models. And unlike differentially private federated learning, which requires on the order of 1 million participants, our protocol works in regimes of roughly 100 participants.
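
As a hedged sketch of just the differentially private aggregation step (the full CaPC protocol also relies on homomorphic encryption and secure multi-party computation, which are omitted here), one can think of the querying party receiving only a noisy aggregate of the answering parties' label votes; the parameter names below are illustrative.

    # Sketch of PATE-style noisy label aggregation; encryption/MPC pieces omitted.
    import numpy as np

    def noisy_label_aggregation(party_votes, num_classes, sigma=40.0, rng=None):
        """Aggregate one label vote per answering party with Gaussian noise.

        party_votes: integer array of shape (num_parties,), each party's predicted class.
        Returns the noisy-argmax label released to the querying party.
        """
        rng = rng or np.random.default_rng()
        histogram = np.bincount(party_votes, minlength=num_classes).astype(float)
        histogram += rng.normal(scale=sigma, size=num_classes)  # DP noise on vote counts
        return int(np.argmax(histogram))

    # Example: ~100 answering parties vote on one query; the querying party only
    # ever sees the aggregated noisy label, never any individual prediction.
    votes = np.random.default_rng(0).integers(0, 10, size=100)
    print(noisy_label_aggregation(votes, num_classes=10))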

Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
38th International Conference on Machine Learning (ICML), 2021

Which defenses properly defend against all membership inference threats? We show that confidence masking -- defensive obfuscation of confidence vectors -- is not a viable defense against membership inference. We do this by introducing three label-only attacks, which bypass this defense and match typical confidence-vector attacks. In an extensive evaluation of defenses, including the first evaluation of data augmentation and transfer learning as defenses, we further show that differential privacy can defend against average- and worst-case membership inference attacks.
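
A minimal sketch of one label-only signal, under illustrative assumptions: the attacker queries the target model only for predicted labels on augmented copies of a candidate point, and points the model remains correct on under many augmentations are more likely to be training members. The function names and the calibration of the threshold are assumptions, not the paper's exact attack.

    # Sketch of an augmentation-based label-only membership inference signal.
    def augmentation_robustness_score(predict_label, x, y, augmentations):
        """Count how many augmented copies of (x, y) the target still labels correctly.

        predict_label: black-box access to the target model, returning only a class label.
        augmentations: list of functions x -> augmented x (e.g., shifts, flips).
        """
        queries = [x] + [aug(x) for aug in augmentations]
        return sum(int(predict_label(q) == y) for q in queries)

    def infer_membership(predict_label, x, y, augmentations, threshold):
        """Predict 'member' if the model is unusually robust on this point.

        threshold would be calibrated, e.g., on shadow models or known non-members.
        """
        return augmentation_robustness_score(predict_label, x, y, augmentations) >= threshold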

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
Proceedings of the 30th USENIX Security Symposium, 2021

How can we enable an IP owner to reliably claim ownership of a stolen model? We explore entangling watermarks with task data so that stolen models learn the watermarks as well. Our improved watermarks enable IP owners to claim ownership with 95% confidence using fewer than 10 queries to the stolen model.
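
A hedged sketch of the ownership-claim step only: the defender queries a suspect model on a handful of watermarked inputs and tests whether its accuracy on the watermark labels exceeds chance. The entangling training step itself is not shown, and the function names are illustrative.

    # Sketch of an ownership hypothesis test over a few watermark queries.
    from scipy.stats import binomtest

    def claim_ownership(predict_label, watermark_inputs, watermark_labels,
                        num_classes, confidence=0.95):
        """Return True if the suspect model matches the watermark labels above chance."""
        hits = sum(int(predict_label(x) == y)
                   for x, y in zip(watermark_inputs, watermark_labels))
        n = len(watermark_inputs)
        # One-sided binomial test against random guessing (1 / num_classes).
        result = binomtest(hits, n, p=1.0 / num_classes, alternative="greater")
        return result.pvalue < (1.0 - confidence)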

Machine Unlearning
Lucas Bourtoule*, Varun Chandrasekaran*, Christopher A. Choquette-Choo*, Hengrui Jia*, Adelin Travers*, Baiwu Zhang*, David Lie, Nicolas Papernot
Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA, 2021
* Equal contribution. The names are ordered alphabetically.

How can we enable efficient, guaranteed retraining of machine learning models? We define requirements for machine unlearning and study a strict unlearning requirement whereby unlearning a datapoint is guaranteed to be equivalent to never having trained on it. To this end, we improve on the naive retrain-from-scratch approach to provide a better accuracy-efficiency tradeoff. We also study how a priori knowledge of the distribution of unlearning requests can further improve efficiency.
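
A minimal sketch of sharded training in the spirit of this approach (slicing within shards and prediction aggregation are omitted, and the names are illustrative): data points are assigned to disjoint shards with one model per shard, so unlearning a point only requires retraining the single shard that contained it rather than the whole ensemble.

    # Sketch of sharded training for efficient unlearning (assumptions, not the exact method).
    import numpy as np

    class ShardedEnsemble:
        def __init__(self, num_shards, train_fn, rng=None):
            self.num_shards = num_shards
            self.train_fn = train_fn            # (data_indices) -> trained model
            self.rng = rng or np.random.default_rng(0)
            self.assignment = {}                # data index -> shard id
            self.models = [None] * num_shards

        def fit(self, data_indices):
            for i in data_indices:
                self.assignment[i] = int(self.rng.integers(self.num_shards))
            for s in range(self.num_shards):
                self.models[s] = self.train_fn(self._shard_indices(s))

        def unlearn(self, index):
            """Remove one point: only the shard that contained it is retrained."""
            shard = self.assignment.pop(index)
            self.models[shard] = self.train_fn(self._shard_indices(shard))

        def _shard_indices(self, shard):
            return [i for i, s in self.assignment.items() if s == shard]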

A multi-label, dual-output deep neural network for automated bug triaging
Christopher A. Choquette-Choo, David Sheldon, Jonny Proppe, John Alphonso-Gibbs, Harsha Gupta
ICMLA, 2019  |  DOI: 10.1109/ICMLA.2019.00161

How can we triage bugs more effectively? By leveraging the model's own knowledge of an analogous lower-dimensional solution space (predicting the correct team assignment), we achieve higher accuracy on the higher-dimensional solution space (predicting the correct engineer assignment).
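
A minimal sketch of a dual-output architecture in this spirit (not the paper's exact multi-label model): a shared trunk feeds two heads, one predicting the team and one the engineer, trained jointly so the easier team task helps the harder engineer task. Layer sizes, loss weights, and names are illustrative assumptions.

    # Sketch of a dual-output (team + engineer) classifier with a shared trunk.
    import tensorflow as tf

    def build_dual_output_model(input_dim, num_teams, num_engineers):
        inputs = tf.keras.Input(shape=(input_dim,))
        shared = tf.keras.layers.Dense(256, activation="relu")(inputs)
        shared = tf.keras.layers.Dense(128, activation="relu")(shared)
        team_out = tf.keras.layers.Dense(num_teams, activation="softmax", name="team")(shared)
        engineer_out = tf.keras.layers.Dense(num_engineers, activation="softmax",
                                             name="engineer")(shared)
        model = tf.keras.Model(inputs, [team_out, engineer_out])
        model.compile(
            optimizer="adam",
            loss={"team": "sparse_categorical_crossentropy",
                  "engineer": "sparse_categorical_crossentropy"},
            # Weight the harder engineer head more heavily (illustrative choice).
            loss_weights={"team": 0.5, "engineer": 1.0},
            metrics={"team": "accuracy", "engineer": "accuracy"},
        )
        return model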


Research Talks

Adversarial Machine Learning: Ensuring Security and Privacy of ML Models and Sensitive Data.
Presented at the REWORK Responsible AI Summit 2019.
Available as a part of the Privacy and Security in Machine Learning package.

Professional Services

  • External Reviewer for 2022 USENIX Security Symposium
  • External Reviewer for 2022 IEEE Symposium on Security and Privacy
  • Reviewer for 2021 Journal of Machine Learning Research
  • External Reviewer for 2021 International Conference on Machine Learning (ICML)
  • External Reviewer for 2021 USENIX Security Symposium
  • External Reviewer for 2021 IEEE Symposium on Security and Privacy
  • Reviewer for the 2020 Machine Learning for the Developing World (ML4D) workshop at NeurIPS