Christopher A. Choquette-Choo

I am a researcher at Google DeepMind on the Brain Privacy and Security team. I focus on privacy-preserving and adversarial machine learning. I mainly work on differentially private (DP) algorithms and on studying memorization and harms in language models, but I have also worked on privacy auditing techniques, collaborative learning approaches, and methods for ownership verification.
Previously, I was an AI Resident at Google and a researcher in the CleverHans Lab at the Vector Institute. I graduated from the University of Toronto, where I held a full scholarship.
Email: choquette[dot]christopher[at]gmail[dot]com
Research
I'm broadly interested in machine learning, with a focus on its intersection with security and privacy. Specific areas include adversarial ML, data privacy, memorization in language models, deep learning, and collaborative learning. See my Google Scholar for an up-to-date list.
Technical Reports and Production Deployments
Christopher A. Choquette-Choo, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta
40th International Conference on Machine Learning (ICML), 2023 (conference; oral)
We achieve new state-of-the-art privacy-utility tradeoffs in DP ML. We outperform DP-SGD without any need for privacy amplification.
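To give a flavor of the underlying mechanism, here is a minimal NumPy sketch of the matrix-factorization idea (a toy illustration with a textbook square-root factorization and made-up dimensions, not the paper's optimized multi-epoch mechanism): instead of adding independent noise at every step as in DP-SGD, we factor the prefix-sum workload A = B C and add noise shaped by B.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 64, 10                    # number of steps and a toy gradient dimension
G = rng.normal(size=(T, d))      # stand-in for clipped per-step gradient sums
A = np.tril(np.ones((T, T)))     # workload: all prefix sums of gradients

# Square-root factorization A = C @ C with C lower-triangular Toeplitz,
# coefficients c_k = binom(2k, k) / 4^k; B = C is one well-known choice.
c = np.ones(T)
for k in range(1, T):
    c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
C = sum(c[k] * np.eye(T, k=-k) for k in range(T))
B = C

sigma = 1.0                                 # noise multiplier set by the privacy budget
sens_dpsgd = 1.0                            # identity encoder: unit-norm columns
sens_mf = np.linalg.norm(C, axis=0).max()   # sensitivity = max column norm of C

# DP-SGD corresponds to the trivial factorization C = I (independent noise);
# the MF mechanism instead releases B @ (C @ G + noise), i.e., correlated noise.
dpsgd = A @ (G + sigma * sens_dpsgd * rng.normal(size=(T, d)))
mf = B @ (C @ G + sigma * sens_mf * rng.normal(size=(T, d)))

exact = A @ G
print("DP-SGD-style error:", np.linalg.norm(dpsgd - exact))
print("MF mechanism error:", np.linalg.norm(mf - exact))  # typically smaller
```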
Enayat Ullah*, Christopher A. Choquette-Choo*, Peter Kairouz*, Sewoong Oh*
40th International Conference on Machine Learning (ICML), 2023 (conference)
We show how to achieve impressive compression rates in the online setting, i.e., without prior tuning. Our best algorithms can achieve close to the best-known compression rates.
Christopher A. Choquette-Choo, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Keith Rush, Abhradeep Guha Thakurta, Zheng Xu
Preprint
We show that banded MF-DP-FTRL subsumes DP-SGD. We outperform amplified DP-SGD across all privacy budgets and (often) the prior state-of-the-art, Multi-Epoch Matrix Factorization.
Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt
Preprint
We introduce attacks against multi-modal models that subvert alignment techniques. In doing so, we show that current NLP attacks are not sufficiently powerful to evaluate adversarial alignment.
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
Preprint
We show that modern large-scale datasets can be cost-effectively poisoned.
Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini
Preprint
We show that, without formal guarantees, distillation provides limited privacy benefits. We also show the first attack where a model can leak information about private data without ever having been queried on that private data.
Daphne Ippolito, Florian Tramèr*, Milad Nasr*, Chiyuan Zhang*, Matthew Jagielski*, Katherine Lee*, Christopher A. Choquette-Choo*, Nicholas Carlini
* Equal contribution. The names are ordered randomly.
Preprint
We show that models can paraphrase memorized content. Thus, models can evade filters designed specifically to prevent verbatim memorization, like those implemented for Copilot.
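As a toy illustration of why exact-match filters are brittle (my own simplified example, not the paper's methodology), consider a filter that blocks any output sharing a k-word window with the training data: a light paraphrase slips straight through.

```python
# Toy verbatim-memorization filter: reject outputs that share a k-word window
# with the training text. Example strings and k are made up for illustration.

def kgrams(text, k):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def verbatim_filter(output, training_text, k=5):
    """Return True (allowed) iff no k-word window of the output appears in training."""
    return len(kgrams(output, k) & kgrams(training_text, k)) == 0

training_text = "the quick brown fox jumps over the lazy dog near the old stone bridge"
verbatim      = "the quick brown fox jumps over the lazy dog"
paraphrase    = "a quick brown fox leaps over the lazy dog close to the old bridge"

print(verbatim_filter(verbatim, training_text))    # False: exact copy is blocked
print(verbatim_filter(paraphrase, training_text))  # True: paraphrase passes the filter
```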
Congyu Fang*, Hengrui Jia*, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot
* Equal contribution.
Proceedings of the 8th IEEE European Symposium on Security and Privacy (EuroS&P), 2023 (conference)
We break current Proof-of-Learning schemes. Though we design schemes that are robust to all current attacks, providing formal guarantees remains an open problem.
Adam Dziedzic, Christopher A. Choquette-Choo, Natalie Dullerud, Vinith Menon Suriyakumar, Ali Shahin Shamsabadi, Muhammad Ahmad Kaleem, Somesh Jha, Nicolas Papernot, Xiao Wang
Proceedings of the 23rd Privacy Enhancing Technologies Symposium (PETS), 2023 (conference)
Wei-Ning Chen*, Christopher A. Choquette-Choo*, Peter Kairouz*, Ananda Theertha Suresh*
* Equal contribution. The names are ordered alphabetically.
International Conference on Machine Learning (ICML), 2022 (conference)
Theory and Practice of Differential Privacy (TPDP) Workshop, 2022 (workshop)
We characterize the fundamental communication costs of Federated Learning (FL) under Secure Aggregation (SecAgg) and Differential Privacy (DP), two privacy-preserving mechanisms commonly used with FL. We prove optimality in worst-case settings, providing significant improvements over prior work, and show that further improvements are possible under additional assumptions, e.g., data sparsity. Extensive empirical evaluations support our claims, showing costs as low as 1.2 bits per parameter on Stack Overflow with less than a 4% relative decrease in test-time model accuracy.
Wei-Ning Chen*, Christopher A. Choquette-Choo*, Peter Kairouz*
* Equal contribution. The names are ordered alphabetically.
Privacy in Machine Learning Workshop at NeurIPS, 2021 (workshop)
We show that, in the worst case, differentially private federated learning with secure aggregation requires Ω(d) bits. Despite this, we discuss how to leverage near-sparsity to compress updates by more than 50x with modest noise multipliers of 0.4 by using sketching.
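Here is a rough sketch of why linear sketching pairs well with SecAgg (a toy construction with made-up dimensions and update distribution, not the papers' exact scheme): the compression map is linear, so the server can sum the clients' compressed updates and only ever decodes the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_clients = 2000, 100, 10             # model dim, sketch dim, #clients (toy values)
S = rng.normal(size=(k, d)) / np.sqrt(k)    # shared random sketch (public seed)

def near_sparse_update():
    """A near-sparse model update: a few large coordinates plus a small dense tail."""
    x = np.zeros(d)
    idx = rng.choice(d, 20, replace=False)
    x[idx] = rng.normal(size=20)
    return x + 1e-3 * rng.normal(size=d)

updates = [near_sparse_update() for _ in range(n_clients)]

# Each client uploads only S @ x_i (k numbers instead of d); SecAgg reveals the sum.
aggregated_sketch = sum(S @ x for x in updates)

# Linearity: the summed sketch equals the sketch of the summed update, so the
# server decodes only the aggregate and never sees an individual update.
assert np.allclose(aggregated_sketch, S @ sum(updates))

# Communication accounting: with 16-bit entries, bits per model parameter is
# k * 16 / d, e.g. 100 * 16 / 2000 = 0.8 here. (Decoding would use a sparse-
# recovery procedure, omitted in this toy example.)
print("compression:", d / k, "x;  bits/parameter:", k * 16 / d)
```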
Hengrui Jia^, Mohammad Yaghini^, Christopher A. Choquette-Choo*, Natalie Dullerud*, Anvith Thudi*, Varun Chandrasekaran, Nicolas Papernot
^, * Equal contribution. The names are ordered alphabetically.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA, 2021 (conference)
How can we prove that a machine learning model owner trained their model? We define the problem of Proof-of-Learning (PoL) in machine learning and provide a method for it that is robust to several spoofing attacks. This protocol enables model ownership verification and robustness against byzantine workers (in a distributed learning setting).
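A toy sketch of the verification idea (illustrative only; the actual protocol, threat model, and distance checks are in the paper): the prover logs periodic checkpoints together with the batches used, and the verifier replays a few randomly chosen segments to check that they reproduce the claimed trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(512, 10)), rng.normal(size=512)  # toy regression data

def sgd_step(w, batch, lr=0.1):
    xb, yb = X[batch], y[batch]
    grad = 2 * xb.T @ (xb @ w - yb) / len(batch)
    return w - lr * grad

# --- Prover: train while recording checkpoints and the exact batch order ---
def train_with_proof(steps=100, every=10):
    w = np.zeros(10)
    proof, batches = [(0, w.copy())], []
    for t in range(1, steps + 1):
        batch = rng.choice(len(X), 32, replace=False)
        batches.append(batch)
        w = sgd_step(w, batch)
        if t % every == 0:
            proof.append((t, w.copy()))
    return w, proof, batches

# --- Verifier: replay a few random segments and check they hit the checkpoints ---
def verify(proof, batches, n_segments=3, tol=1e-8):
    for i in rng.choice(len(proof) - 1, n_segments, replace=False):
        (t0, w), (t1, w_next) = proof[i], proof[i + 1]
        for t in range(t0, t1):
            w = sgd_step(w, batches[t])
        if np.linalg.norm(w - w_next) > tol:
            return False
    return True

w_final, proof, batches = train_with_proof()
print("proof verifies:", verify(proof, batches))  # True for an honest prover
```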
Christopher A. Choquette-Choo*, Natalie Dullerud*, Adam Dziedzic*, Yunxiang Zhang*, Somesh Jha, Nicolas Papernot, Xiao Wang
* Equal contribution. The names are ordered alphabetically.
9th International Conference on Learning Representations (ICLR), 2021 (conference)
We design a protocol for collaborative learning that ensures both the privacy of the training data and the confidentiality of the test data in a collaborative setup. Our protocol can provide improvements of several percentage points to models, especially on subpopulations that the model underperforms on, with a modest privacy budget of less than 20. Unlike prior work, we enable collaborative learning of heterogeneous models amongst participants. Unlike differentially private federated learning, which requires ~1 million participants, our protocol can work in regimes of ~100 participants.
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
38th International Conference on Machine Learning (ICML), 2021 (conference)
Which defenses properly defend against all membership inference threats? We show that confidence masking -- defensive obfuscation of confidence vectors -- is not a viable defense against membership inference. We do this by introducing three label-only attacks, which bypass this defense and match typical confidence-vector attacks. In an extensive evaluation of defenses, including the first evaluation of data augmentation and transfer learning as defenses, we further show that differential privacy can defend against average- and worst-case membership inference attacks.
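A minimal sketch of the data-augmentation flavor of these attacks (my own toy target model, augmentations, and threshold, not the paper's exact setup): query the target only for hard labels on several augmented copies of a point, and use how often it stays correctly classified as the membership score.

```python
import numpy as np

rng = np.random.default_rng(0)

def augmentations(x, n=16, noise=0.1):
    """Simple perturbation-based augmentations of a feature vector."""
    return [x + noise * rng.normal(size=x.shape) for _ in range(n)]

def label_only_score(predict_label, x, y_true, n=16):
    """Membership score: fraction of augmented queries still labeled y_true."""
    return np.mean([predict_label(xa) == y_true for xa in augmentations(x, n)])

def attack(predict_label, x, y_true, threshold=0.9):
    """Flag 'member' when the model is robustly correct under augmentation."""
    return label_only_score(predict_label, x, y_true) >= threshold

# Toy stand-in for the victim model: a nearest-centroid classifier.
centroids = rng.normal(size=(2, 5))
predict_label = lambda x: int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

x_example, y_example = centroids[1] + 0.01 * rng.normal(size=5), 1
print("member?", attack(predict_label, x_example, y_example))  # robustly correct -> flagged
```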
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
Proceedings of the 30th USENIX Security Symposium, 2021 (conference)
How can we enable an IP owner to reliably claim ownership of a stolen model? We explore entangling watermarks with task data to ensure that stolen models learn these watermarks as well. Our improved watermarks enable IP owners to claim ownership with 95% confidence in fewer than 10 queries to the stolen model.
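A back-of-the-envelope illustration of the ownership claim (my own simplified binomial test, not the paper's exact statistics): if a suspected stolen model returns the planted watermark label far above chance, only a handful of trigger queries suffice to reject the hypothesis of an independently trained model.

```python
from math import comb

def p_value(successes, queries, chance=0.1):
    """P(a model with per-query chance accuracy matches the watermark this often)."""
    return sum(comb(queries, k) * chance**k * (1 - chance)**(queries - k)
               for k in range(successes, queries + 1))

# Toy numbers: on a 10-class task (chance = 0.1), getting 6 of 8 trigger queries
# to return the planted label is already far below a 0.05 significance level.
print(p_value(successes=6, queries=8, chance=0.1))
```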
Lucas Bourtoule*, Varun Chandrasekaran*, Christopher A. Choquette-Choo*, Hengrui Jia*, Adelin Travers*, Baiwu Zhang*, David Lie, Nicolas Papernot
* Equal contribution. The names are ordered alphabetically.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA, 2021 (conference)
How can we enable efficient and guaranteed retraining of machine learning models? We define requirements for machine unlearning and study a stricter unlearning requirement whereby unlearning a datapoint is guaranteed to be equivalent to never having trained on it. To this end, we improve on the naive retraining-from-scratch approach to provide a better accuracy-efficiency tradeoff. We also study how a priori knowledge of the distribution of unlearning requests can further improve efficiency.
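A minimal sketch of the sharded-training idea that makes exact unlearning cheaper than full retraining (slicing, checkpointing, and aggregation details from the paper are omitted; the toy "constituent models" below are mine): each constituent sees only its own shard, so deleting a point only requires retraining the one constituent that saw it.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(600, 10)), rng.integers(0, 2, size=600)  # toy dataset
n_shards = 5
shards = np.array_split(rng.permutation(len(X)), n_shards)       # disjoint shards

def train_constituent(idx):
    """Toy constituent 'model': class-conditional mean difference on one shard."""
    Xs, ys = X[idx], y[idx]
    return Xs[ys == 1].mean(axis=0) - Xs[ys == 0].mean(axis=0)

models = [train_constituent(idx) for idx in shards]

def predict(x):
    """Aggregate constituent votes (majority over shards)."""
    votes = [int(x @ w > 0) for w in models]
    return int(np.mean(votes) > 0.5)

def unlearn(point_index):
    """Exact unlearning: retrain only the shard that contains the point."""
    for s, idx in enumerate(shards):
        if point_index in idx:
            shards[s] = idx[idx != point_index]
            models[s] = train_constituent(shards[s])
            return s

print("vote before unlearning:", predict(X[0]))
retrained = unlearn(42)
print("unlearned point 42 by retraining shard", retrained, "of", n_shards)
```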
Christopher A. Choquette-Choo, David Sheldon, Jonny Proppe, John Alphonso-Gibbs, Harsha Gupta
ICMLA, 2019 (conference) | DOI: 10.1109/ICMLA.2019.00161
How can we triage bugs more effectively? By utilizing a model's own knowledge of an analogous lower-dimensional solution space (predicting the correct team assignment), we can achieve higher accuracy on the higher-dimensional solution space (predicting the correct engineer assignment).
Research Talks
Privacy Considerations of Production Machine Learning.
Presented at MLOps World: New York Area Summit. Contact me for slides; unfortunately, the video is not publicly available.
Adversarial Machine Learning: Ensuring Security and Privacy of ML Models and Sensitive Data.
Presented at the REWORK Responsible AI Summit 2019. Available as part of the Privacy and Security in Machine Learning package.
Professional Services
- PC member for the 2024 IEEE Symposium on Security and Privacy (S&P).
- PC member for the 2023 Generative AI + Law (GenLaw) Workshop at NeurIPS.
- Reviewer for the 2023 Conference on Neural Information Processing Systems (NeurIPS).
- Reviewer for the 2023 International Conference on Machine Learning (ICML).
- Session Chair, DL: Robustness, at the 2022 International Conference on Machine Learning (ICML).
- Reviewer for the 2022 Conference on Neural Information Processing Systems (NeurIPS).
- Reviewer for Nature Machine Intelligence.
- Outstanding Reviewer for the 2022 International Conference on Machine Learning (ICML).
- Reviewer for the 2022 IEEE Transactions on Emerging Topics in Computing.
- External Reviewer for the 2022 USENIX Security Symposium.
- External Reviewer for the 2022 IEEE Symposium on Security and Privacy.
- Reviewer for the Journal of Machine Learning Research, 2021.
- External Reviewer for the 2021 International Conference on Machine Learning (ICML).
- External Reviewer for the 2021 USENIX Security Symposium.
- External Reviewer for the 2021 IEEE Symposium on Security and Privacy.
- Reviewer for the 2020 Machine Learning for the Developing World (ML4D) Workshop at NeurIPS.