James A. Michaelov
Verified email at ucsd.edu - Homepage
Title · Cited by · Year
Do Large Language Models know what humans know?
S Trott, C Jones, T Chang, J Michaelov, B Bergen
Cognitive Science 47 (7), e13309, 2023
Cited by 48 · 2023
How well does surprisal explain N400 amplitude under different experimental conditions?
JA Michaelov, BK Bergen
Proceedings of the 24th Conference on Computational Natural Language …, 2020
Cited by 41 · 2020
So Cloze yet so Far: N400 amplitude is better predicted by distributional information than human predictability judgements
JA Michaelov, S Coulson, BK Bergen
IEEE Transactions on Cognitive and Developmental Systems, 2022
Cited by 39 · 2022
Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?
JA Michaelov, MD Bardolph, S Coulson, BK Bergen
Proceedings of the Annual Meeting of the Cognitive Science Society 43, 2021
Cited by 28 · 2021
Distributional Semantics Still Can't Account for Affordances
CR Jones, TA Chang, S Coulson, JA Michaelov, S Trott, B Bergen
Proceedings of the Annual Meeting of the Cognitive Science Society 44 (44), 2022
Cited by 17 · 2022
Strong Prediction: Language model surprisal explains multiple N400 effects
JA Michaelov, MD Bardolph, CK Van Petten, BK Bergen, S Coulson
Neurobiology of language, 1-29, 2024
Cited by 13 · 2024
Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers
JA Michaelov, BK Bergen
arXiv preprint arXiv:2212.08700, 2022
Cited by 7 · 2022
Collateral facilitation in humans and language models
JA Michaelov, BK Bergen
Proceedings of the 26th Conference on Computational Natural Language …, 2022
Cited by 7 · 2022
Measuring Sentence Information via Surprisal: Theoretical and Clinical Implications in Nonfluent Aphasia
N Rezaii, J Michaelov, S Josephy‐Hernandez, B Ren, D Hochberg, ...
Annals of Neurology 94 (4), 647-657, 2023
Cited by 4 · 2023
The Young and the Old: (t) Release in Elderspeak
J Michaelov
Lifespans and Styles 3 (1), 2-9, 2017
Cited by 4 · 2017
The more human-like the language model, the more surprisal is the best predictor of N400 amplitude
J Michaelov, B Bergen
NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems, 2022
Cited by 3 · 2022
Do language models make human-like predictions about the coreferents of Italian anaphoric zero pronouns?
JA Michaelov, BK Bergen
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 3 · 2022
Can Peanuts Fall in Love with Distributional Semantics?
JA Michaelov, S Coulson, BK Bergen
arXiv preprint arXiv:2301.08731, 2023
Cited by 2 · 2023
Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models
JA Michaelov, C Arnett, TA Chang, BK Bergen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Cited by 1 · 2023
Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models
C Arnett, TA Chang, JA Michaelov, BK Bergen
The 3rd Multilingual Representation Learning Workshop, 2023
Cited by 1 · 2023
Ignoring the alternatives: The N400 is sensitive to stimulus preactivation alone
JA Michaelov, BK Bergen
Cortex 168, 82-101, 2023
2023
Emergent inabilities? Inverse scaling over the course of pretraining
JA Michaelov, BK Bergen
Findings of the Association for Computational Linguistics: EMNLP 2023, 14607 …, 2023
2023
Can distributional semantics explain performance on the false belief task?
S Trott, CR Jones, T Chang, J Michaelov, B Bergen
OSF, 2022
2022
Are neural language models sensitive to false belief? A computational study.
S Trott, B Bergen, CR Jones, T Chang, J Michaelov
OSF, 2022
2022
Surprisal is a good predictor of the N400 effect, but not for semantic relations
J Michaelov, M Bardolph, S Coulson, B Bergen
Architectures and Mechanisms of Language Processing, 2020
2020
Articles 1–20