Byung-Doh Oh
Title
Cited by
Year
Why does surprisal from larger Transformer-based language models provide a poorer fit to human reading times?
BD Oh, W Schuler
Transactions of the Association for Computational Linguistics 11, 336-350, 2023
Cited by 119 · 2023
Comparison of structural parsers and neural language models as surprisal estimators
BD Oh, C Clark, W Schuler
Frontiers in Artificial Intelligence 5, 777963, 2022
Cited by 53 · 2022
Transformer-based language model surprisal predicts human reading times best with about two billion training tokens
BD Oh, W Schuler
Findings of the Association for Computational Linguistics: EMNLP 2023, 1915-1921, 2023
Cited by 32* · 2023
Entropy- and distance-based predictors from GPT-2 attention patterns predict reading times over and above GPT-2 surprisal
BD Oh, W Schuler
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 21 · 2022
Surprisal estimators for human reading times need character models
BD Oh, C Clark, W Schuler
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by 20 · 2021
Modeling morphological learning, typology, and change: What can the neural sequence-to-sequence framework contribute?
M Elsner, AD Sims, A Erdmann, A Hernandez, E Jaffe, L Jin, ...
Journal of Language Modelling 7 (1), 53-98, 2019
Cited by 19 · 2019
Frequency explains the inverse correlation of large language models' size, training data amount, and surprisal's fit to reading times
BD Oh, S Yue, W Schuler
Proceedings of the 18th Conference of the European Chapter of the …, 2024
Cited by 13 · 2024
Leading whitespaces of language models' subword vocabulary pose a confound for calculating word probabilities
BD Oh, W Schuler
Proceedings of the 2024 Conference on Empirical Methods in Natural Language …, 2024
Cited by 11 · 2024
Team Ohio State at CMCL 2021 shared task: Fine-tuned RoBERTa for eye-tracking data prediction
BD Oh
Proceedings of the Workshop on Cognitive Modeling and Computational …, 2021
Cited by 6 · 2021
Character-based PCFG induction for modeling the syntactic acquisition of morphologically rich languages
L Jin, BD Oh, W Schuler
Findings of the Association for Computational Linguistics: EMNLP 2021, 4367-4378, 2021
Cited by 5 · 2021
Contributions of propositional content and syntactic category information in sentence processing
BD Oh, W Schuler
Proceedings of the Workshop on Cognitive Modeling and Computational …, 2021
Cited by 5* · 2021
THOMAS: The hegemonic OSU morphological analyzer using seq2seq
BD Oh, P Maneriker, N Jiang
Proceedings of the 16th Workshop on Computational Research in Phonetics …, 2019
Cited by 5 · 2019
Token-wise decomposition of autoregressive language model hidden states for analyzing model predictions
BD Oh, W Schuler
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
Cited by 4 · 2023
Exploring English online research and comprehension strategies of Korean college students
BD Oh
PQDT-Global, 2018
Cited by 3 · 2018
Linear recency bias during training improves Transformers' fit to reading times
C Clark, BD Oh, W Schuler
arXiv preprint arXiv:2409.11250, 2024
Cited by 2 · 2024
Coreference-aware surprisal predicts brain response
E Jaffe, BD Oh, W Schuler
Findings of the Association for Computational Linguistics: EMNLP 2021, 3351-3356, 2021
Cited by 1 · 2021
Predicting L2 writing proficiency with computational indices based on N-grams
BD Oh
외국어교육연구 (Foreign Language Education Research) 21, 1-20, 2017
Cited by 1 · 2017
The impact of token granularity on the predictive power of language model surprisal
BD Oh, W Schuler
arXiv preprint arXiv:2412.11940, 2024
2024
Empirical shortcomings of Transformer-based large language models as expectation-based models of human sentence processing
BD Oh
The Ohio State University, 2024
2024
Articles 1–19