Pierre Richemond
Google DeepMind
Bootstrap your own latent: A new approach to self-supervised learning
JB Grill, F Strub, F Altché, C Tallec, P Richemond, E Buchatskaya, ...
Advances in neural information processing systems 33, 21271-21284, 2020
Data distributional properties drive emergent in-context learning in transformers
S Chan, A Santoro, A Lampinen, J Wang, A Singh, P Richemond, ...
Advances in Neural Information Processing Systems 35, 18878-18891, 2022
BYOL works even without batch statistics
PH Richemond, JB Grill, F Altché, C Tallec, F Strub, A Brock, S Smith, ...
arXiv preprint arXiv:2010.10241, 2020
Continuous diffusion for categorical data
S Dieleman, L Sartran, A Roshannai, N Savinov, Y Ganin, PH Richemond, ...
arXiv preprint arXiv:2211.15089, 2022
Understanding self-predictive learning for reinforcement learning
Y Tang, ZD Guo, PH Richemond, BA Pires, Y Chandak, R Munos, ...
International Conference on Machine Learning, 33632-33656, 2023
On Wasserstein reinforcement learning and the Fokker-Planck equation
PH Richemond, B Maginnis
arXiv preprint arXiv:1712.07185, 2017
Categorical SDEs with simplex diffusion
PH Richemond, S Dieleman, A Doucet
arXiv preprint arXiv:2210.14784, 2022
Generalized Preference Optimization: A Unified Approach to Offline Alignment
Y Tang, ZD Guo, Z Zheng, D Calandriello, R Munos, M Rowland, ...
arXiv preprint arXiv:2402.05749, 2024
Zipfian environments for reinforcement learning
SCY Chan, AK Lampinen, PH Richemond, F Hill
Conference on Lifelong Learning Agents, 406-429, 2022
SemPPL: Predicting pseudo-labels for better contrastive representations
M Bošnjak, PH Richemond, N Tomasev, F Strub, JC Walker, F Hill, ...
arXiv preprint arXiv:2301.05158, 2023
Memory-efficient episodic control reinforcement learning with dynamic online k-means
A Agostinelli, K Arulkumaran, M Sarrico, P Richemond, AA Bharath
arXiv preprint arXiv:1911.09560, 2019
Sample-efficient reinforcement learning with maximum entropy mellowmax episodic control
M Sarrico, K Arulkumaran, A Agostinelli, P Richemond, AA Bharath
arXiv preprint arXiv:1911.09615, 2019
A short variational proof of equivalence between policy gradients and soft Q-learning
PH Richemond, B Maginnis
arXiv preprint arXiv:1712.08650, 2017
Biologically inspired architectures for sample-efficient deep reinforcement learning
PH Richemond, A Kolbeinsson, Y Guo
arXiv preprint arXiv:1911.11285, 2019
Combining learning rate decay and weight decay with complexity gradient descent - Part I
PH Richemond, Y Guo
arXiv preprint arXiv:1902.02881, 2019
Efficiently applying attention to sequential data with the Recurrent Discounted Attention unit
B Maginnis, PH Richemond
arXiv preprint arXiv:1705.08480, 2017
The edge of orthogonality: a simple view of what makes BYOL tick
PH Richemond, A Tam, Y Tang, F Strub, B Piot, F Hill
International Conference on Machine Learning, 29063-29081, 2023
Human alignment of large language models through online preference optimisation
D Calandriello, D Guo, R Munos, M Rowland, Y Tang, BA Pires, ...
arXiv preprint arXiv:2403.08635, 2024
Offline Regularised Reinforcement Learning for Large Language Models Alignment
PH Richemond, Y Tang, D Guo, D Calandriello, MG Azar, R Rafailov, ...
arXiv preprint arXiv:2405.19107, 2024