Kevin Clark
Verified email at cs.stanford.edu - Homepage
Title
Cited by
Year
Electra: Pre-training text encoders as discriminators rather than generators
K Clark, MT Luong, QV Le, CD Manning
arXiv preprint arXiv:2003.10555, 2020
Cited by 3668 · 2020
What does BERT look at? An analysis of BERT's attention
K Clark, U Khandelwal, O Levy, CD Manning
arXiv preprint arXiv:1906.04341, 2019
Cited by 1583 · 2019
Deep reinforcement learning for mention-ranking coreference models
K Clark, CD Manning
arXiv preprint arXiv:1609.08667, 2016
Cited by 485 · 2016
Improving coreference resolution by learning entity-level distributed representations
K Clark, CD Manning
arXiv preprint arXiv:1606.01323, 2016
Cited by 429 · 2016
Semi-Supervised Sequence Modeling with Cross-View Training
K Clark, MT Luong, CD Manning, QV Le
arXiv preprint arXiv:1809.08370, 2018
Cited by 428 · 2018
Inducing domain-specific sentiment lexicons from unlabeled corpora
WL Hamilton, K Clark, J Leskovec, D Jurafsky
Proceedings of the conference on empirical methods in natural language …, 2016
Cited by 418 · 2016
Large-scale analysis of counseling conversations: An application of natural language processing to mental health
T Althoff, K Clark, J Leskovec
Transactions of the Association for Computational Linguistics 4, 463-476, 2016
Cited by 329 · 2016
Emergent linguistic structure in artificial neural networks trained by self-supervision
CD Manning, K Clark, J Hewitt, U Khandelwal, O Levy
Proceedings of the National Academy of Sciences 117 (48), 30046-30054, 2020
Cited by 312 · 2020
Towards expert-level medical question answering with large language models
K Singhal, T Tu, J Gottweis, R Sayres, E Wulczyn, L Hou, K Clark, S Pfohl, ...
arXiv preprint arXiv:2305.09617, 2023
Cited by 267 · 2023
Entity-centric coreference resolution with model stacking
K Clark, CD Manning
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
Cited by 259 · 2015
BAM! Born-again multi-task networks for natural language understanding
K Clark, MT Luong, U Khandelwal, CD Manning, QV Le
arXiv preprint arXiv:1907.04829, 2019
Cited by 211 · 2019
Sample efficient text summarization using a single pre-trained transformer
U Khandelwal, K Clark, D Jurafsky, L Kaiser
arXiv preprint arXiv:1905.08836, 2019
Cited by 95 · 2019
Pre-training transformers as energy-based cloze models
K Clark, MT Luong, QV Le, CD Manning
arXiv preprint arXiv:2012.08561, 2020
Cited by 77 · 2020
RevMiner: An extractive interface for navigating reviews on a smartphone
J Huang, O Etzioni, L Zettlemoyer, K Clark, C Lee
Proceedings of the 25th annual ACM symposium on User interface software and …, 2012
Cited by 69 · 2012
Text-to-Image Diffusion Models are Zero Shot Classifiers
K Clark, P Jaini
Advances in Neural Information Processing Systems 36, 2024
Cited by 32 · 2024
Directly fine-tuning diffusion models on differentiable rewards
K Clark, P Vicol, K Swersky, DJ Fleet
arXiv preprint arXiv:2309.17400, 2023
Cited by 21 · 2023
Intriguing properties of generative classifiers
P Jaini, K Clark, R Geirhos
arXiv preprint arXiv:2309.16779, 2023
Cited by 15 · 2023
Meta-learning fast weight language models
K Clark, K Guu, MW Chang, P Pasupat, G Hinton, M Norouzi
arXiv preprint arXiv:2212.02475, 2022
Cited by 6 · 2022
Stanford at TAC KBP 2017: Building a Trilingual Relational Knowledge Graph.
AT Chaganty, A Paranjape, J Bolton, M Lamm, J Lei, A See, K Clark, ...
TAC, 2017
Cited by 6 · 2017
Contrastive pre-training for language tasks
TM Luong, QV Le, KS Clark
US Patent 11,449,684, 2022
Cited by 5 · 2022
Articles 1–20