Eric Wallace
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
T Shin, Y Razeghi, RL Logan IV, E Wallace, S Singh
EMNLP 2020, 2020
Cited by 1649 · 2020
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security 2021, 2020
Cited by 1633 · 2020
Calibrate Before Use: Improving Few-Shot Performance of Language Models
TZ Zhao*, E Wallace*, S Feng, D Klein, S Singh
ICML 2021, 2021
Cited by 1126 · 2021
Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022
Cited by 1074 · 2022
Universal Adversarial Triggers for Attacking and Analyzing NLP
E Wallace, S Feng, N Kandpal, M Gardner, S Singh
EMNLP 2019, 2019
Cited by 844 · 2019
InCoder: A Generative Model for Code Infilling and Synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
ICLR 2023, 2022
Cited by 529 · 2022
Extracting Training Data from Diffusion Models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
USENIX Security 2023, 2023
Cited by 475 · 2023
Evaluating Models' Local Decision Boundaries via Contrast Sets
M Gardner, Y Artzi, V Basmova, J Berant, B Bogin, S Chen, P Dasigi, ...
EMNLP Findings 2020, 2020
Cited by 474 · 2020
Pretrained Transformers Improve Out-of-Distribution Robustness
D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song
ACL 2020, 2020
Cited by 436 · 2020
Pathologies of Neural Models Make Interpretations Difficult
S Feng, E Wallace, A Grissom II, M Iyyer, P Rodriguez, J Boyd-Graber
EMNLP 2018, 2018
Cited by 364 · 2018
Large Language Models Struggle to Learn Long-Tail Knowledge
N Kandpal, H Deng, A Roberts, E Wallace, C Raffel
ICML 2023, 2022
Cited by 318 · 2022
Do NLP Models Know Numbers? Probing Numeracy in Embeddings
E Wallace*, Y Wang*, S Li, S Singh, M Gardner
EMNLP 2019, 2019
Cited by 299 · 2019
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020
Cited by 292 · 2020
Deduplicating Training Data Mitigates Privacy Risks in Language Models
N Kandpal, E Wallace, C Raffel
ICML 2022, 2022
Cited by 215 · 2022
Scalable Extraction of Training Data from (Production) Language Models
M Nasr, N Carlini, J Hayase, M Jagielski, AF Cooper, D Ippolito, ...
arXiv preprint arXiv:2311.17035, 2023
Cited by 190 · 2023
Koala: A Dialogue Model for Academic Research
X Geng*, A Gudibande*, H Liu*, E Wallace*, P Abbeel, S Levine, D Song
BAIR Blog, 2023
Cited by 188 · 2023
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
RL Logan IV, I Balažević, E Wallace, F Petroni, S Singh, S Riedel
ACL Findings 2022, 2021
Cited by 186 · 2021
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples
E Wallace, P Rodriguez, S Feng, I Yamada, J Boyd-Graber
TACL 2019, 2019
Cited by 173* · 2019
Concealed Data Poisoning Attacks on NLP Models
E Wallace*, TZ Zhao*, S Feng, S Singh
NAACL 2021, 2020
Cited by 167* · 2020
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
E Wallace, J Tuyls, J Wang, S Subramanian, M Gardner, S Singh
EMNLP Demo, 2019
Cited by 158 · 2019
Articles 1–20