Florian Tramèr
Assistant Professor of Computer Science, ETH Zurich
Verified email at inf.ethz.ch - Homepage
Title
Cited by
Year
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
Foundations and Trends® in Machine Learning 14 (1), 2019
Cited by 4081 · 2019
Ensemble Adversarial Training: Attacks and Defenses
F Tramèr, A Kurakin, N Papernot, I Goodfellow, D Boneh, P McDaniel
International Conference on Learning Representations (ICLR), 2018
Cited by 2735 · 2018
Stealing Machine Learning Models via Prediction APIs
F Tramèr, F Zhang, A Juels, MK Reiter, T Ristenpart
25th USENIX security symposium (USENIX Security 16), 601-618, 2016
Cited by 1773 · 2016
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 1680 · 2021
On evaluating adversarial robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
Cited by 797 · 2019
Extracting Training Data from Large Language Models
N Carlini, F Tramèr, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 783 · 2021
On adaptive attacks to adversarial example defenses
F Tramèr, N Carlini, W Brendel, A Madry
Conference on Neural Information Processing Systems (NeurIPS) 33, 2020
Cited by 677 · 2020
The space of transferable adversarial examples
F Tramèr, N Papernot, I Goodfellow, D Boneh, P McDaniel
arXiv preprint arXiv:1704.03453, 2017
Cited by 579 · 2017
Physical adversarial examples for object detectors
K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, F Tramèr, A Prakash, ...
12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018
Cited by 427 · 2018
Slalom: Fast, verifiable and private execution of neural networks in trusted hardware
F Tramèr, D Boneh
International Conference on Learning Representations (ICLR), 2019
Cited by 345 · 2019
Adversarial training and robustness for multiple perturbations
F Tramèr, D Boneh
Conference on Neural Information Processing Systems (NeurIPS) 32, 2019
Cited by 336 · 2019
Label-Only Membership Inference Attacks
CAC Choo, F Tramèr, N Carlini, N Papernot
International Conference on Machine Learning (ICML), 1964-1974, 2021
Cited by 278* · 2021
Sentinet: Detecting localized universal attacks against deep learning systems
E Chou, F Tramèr, G Pellegrino
2020 IEEE Security and Privacy Workshops (SPW), 48-54, 2020
Cited by 239 · 2020
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
arXiv preprint arXiv:1912.04977
Cited by 226*
Fairtest: Discovering unwarranted associations in data-driven applications
F Tramèr, V Atlidakis, R Geambasu, D Hsu, JP Hubaux, M Humbert, ...
IEEE European Symposium on Security and Privacy (EuroS&P), 401-416, 2017
Cited by 200* · 2017
Membership Inference Attacks From First Principles
N Carlini, S Chien, M Nasr, S Song, A Terzis, F Tramèr
43rd IEEE Symposium on Security and Privacy (S&P 2022), 2022
Cited by 183 · 2022
Quantifying memorization across neural language models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramèr, C Zhang
International Conference on Learning Representations (ICLR), 2023
Cited by 174 · 2023
Differentially Private Learning Needs Better Features (or Much More Data)
F Tramèr, D Boneh
International Conference on Learning Representations (ICLR), 2021
Cited by 165 · 2021
Large language models can be strong differentially private learners
X Li, F Tramèr, P Liang, T Hashimoto
International Conference on Learning Representations (ICLR), 2022
Cited by 132 · 2022
Government by algorithm: Artificial intelligence in federal administrative agencies
DF Engstrom, DE Ho, CM Sharkey, MF Cuéllar
NYU School of Law, Public Law Research Paper, 2020
Cited by 129 · 2020
Articles 1–20