Andrew Kyle Lampinen
Research Scientist, DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 724 · 2022
Can language models learn from explanations in context?
AK Lampinen, I Dasgupta, SCY Chan, K Matthewson, MH Tessler, ...
arXiv preprint arXiv:2204.02329, 2022
Cited by 194 · 2022
Data distributional properties drive emergent in-context learning in transformers
S Chan, A Santoro, A Lampinen, J Wang, A Singh, P Richemond, ...
Advances in Neural Information Processing Systems 35, 18878-18891, 2022
Cited by 177 · 2022
Environmental drivers of systematicity and generalization in a situated agent
F Hill, A Lampinen, R Schneider, S Clark, M Botvinick, JL McClelland, ...
arXiv preprint arXiv:1910.00571, 2019
Cited by 124* · 2019
What shapes feature representations? Exploring datasets, architectures, and training
KL Hermann, AK Lampinen
Advances in Neural Information Processing Systems, 2020
Cited by 118 · 2020
Language models show human-like content effects on reasoning
I Dasgupta, AK Lampinen, SCY Chan, A Creswell, D Kumaran, ...
arXiv preprint arXiv:2207.07051, 2022
Cited by 117 · 2022
An analytic theory of generalization dynamics and transfer learning in deep linear networks
AK Lampinen, S Ganguli
7th International Conference on Learning Representations (ICLR 2019), 2018
Cited by 108 · 2018
Automated curricula through setter-solver interactions
S Racaniere, AK Lampinen, A Santoro, DP Reichert, V Firoiu, TP Lillicrap
8th International Conference on Learning Representations (ICLR 2020), 2019
Cited by 92* · 2019
Integration of new information in memory: new insights from a complementary learning systems perspective
JL McClelland, BL McNaughton, AK Lampinen
Philosophical Transactions of the Royal Society B 375 (1799), 20190637, 2020
Cited by 87 · 2020
Semantic exploration from language abstractions and pretrained representations
A Tam, N Rabinowitz, A Lampinen, NA Roy, S Chan, DJ Strouse, J Wang, ...
Advances in Neural Information Processing Systems 35, 25377-25389, 2022
Cited by 53 · 2022
Improving the replicability of psychological science through pedagogy
RXD Hawkins, EN Smith, C Au, JM Arias, R Catapano, E Hermann, M Keil, ...
Advances in Methods and Practices in Psychological Science 1 (1), 7-18, 2018
Cited by 52* · 2018
Symbolic behaviour in artificial intelligence
A Santoro, A Lampinen, K Mathewson, T Lillicrap, D Raposo
arXiv preprint arXiv:2102.03406, 2021
Cited by 42 · 2021
Towards mental time travel: a hierarchical memory for reinforcement learning agents
A Lampinen, S Chan, A Banino, F Hill
Advances in Neural Information Processing Systems 34, 28182-28195, 2021
Cited by 40 · 2021
Tell me why! Explanations support learning relational and causal structure
AK Lampinen, N Roy, I Dasgupta, SCY Chan, A Tam, J Mcclelland, C Yan, ...
International Conference on Machine Learning, 11868-11890, 2022
Cited by 32 · 2022
Symbol tuning improves in-context learning in language models
J Wei, L Hou, A Lampinen, X Chen, D Huang, Y Tay, X Chen, Y Lu, ...
arXiv preprint arXiv:2305.08298, 2023
Cited by 26 · 2023
Transformers generalize differently from information stored in context vs in weights
SCY Chan, I Dasgupta, J Kim, D Kumaran, AK Lampinen, F Hill
arXiv preprint arXiv:2210.05675, 2022
Cited by 26 · 2022
One-shot and few-shot learning of word embeddings
AK Lampinen, JL McClelland
arXiv preprint arXiv:1710.10280, 2017
Cited by 25 · 2017
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
AK Lampinen
arXiv preprint arXiv:2210.15303, 2022
Cited by 19 · 2022
Transforming task representations to perform novel tasks
AK Lampinen, JL McClelland
Proceedings of the National Academy of Sciences 117 (52), 32970-32981, 2020
Cited by 19 · 2020
Getting aligned on representational alignment
I Sucholutsky, L Muttenthaler, A Weller, A Peng, A Bobu, B Kim, BC Love, ...
arXiv preprint arXiv:2310.13018, 2023
Cited by 17 · 2023
Articles 1–20