Denis Yarats
Cofounder and CTO, Perplexity AI
Verified email address at - Homepage
Cited by
Convolutional sequence to sequence learning
J Gehring, M Auli, D Grangier, D Yarats, YN Dauphin
ICML 2017, 2017
Image augmentation is all you need: Regularizing deep reinforcement learning from pixels
D Yarats, I Kostrikov, R Fergus
ICLR 2021, 2020
Deal or no deal? end-to-end learning for negotiation dialogues
M Lewis, D Yarats, YN Dauphin, D Parikh, D Batra
EMNLP 2017, 2017
Improving sample efficiency in model-free reinforcement learning from images
D Yarats, A Zhang, I Kostrikov, B Amos, J Pineau, R Fergus
AAAI 2021, 2019
Mastering visual continuous control: Improved data-augmented reinforcement learning
D Yarats, R Fergus, A Lazaric, L Pinto
ICLR 2022, 2021
Reinforcement learning with prototypical representations
D Yarats, R Fergus, A Lazaric, L Pinto
ICML 2021, 2021
Automatic data augmentation for generalization in deep reinforcement learning
R Raileanu, M Goldstein, D Yarats, I Kostrikov, R Fergus
NeurIPS 2021, 2020
Generalized inner loop meta-learning
E Grefenstette, B Amos, D Yarats, PM Htut, A Molchanov, F Meier, D Kiela, ...
arXiv 2019, 2019
Quasi-hyperbolic momentum and adam for deep learning
J Ma, D Yarats
ICLR 2019, 2018
URLB: Unsupervised Reinforcement Learning Benchmark
M Laskin, D Yarats, H Liu, K Lee, A Zhan, K Lu, C Cang, L Pinto, P Abbeel
NeurIPS 2021, 2021
Don't change the algorithm, change the data: Exploratory data for offline reinforcement learning
D Yarats, D Brandfonbrener, H Liu, M Laskin, P Abbeel, A Lazaric, L Pinto
arXiv preprint arXiv:2201.13425, 2022
On the adequacy of untuned warmup for adaptive optimization
J Ma, D Yarats
AAAI 2021, 2019
On the model-based stochastic value gradient for continuous reinforcement learning
B Amos, S Stanton, D Yarats, AG Wilson
L4DC 2021, 2020
Hierarchical decision making by generating and following natural language instructions
H Hu, D Yarats, Q Gong, Y Tian, M Lewis
NeurIPS 2019, 2019
The differentiable cross-entropy method
B Amos, D Yarats
ICML 2020, 2020
Hierarchical text generation and planning for strategic dialogue
D Yarats, M Lewis
ICML 2018, 2018
CIC: Contrastive intrinsic control for unsupervised skill discovery
M Laskin, H Liu, XB Peng, D Yarats, A Rajeswaran, P Abbeel
arXiv preprint arXiv:2202.00161, 2022
Soft Actor-Critic (SAC) implementation in PyTorch
D Yarats, I Kostrikov, 2020
Watch and match: Supercharging imitation with regularized optimal transport
S Haldar, V Mathur, D Yarats, L Pinto
Conference on Robot Learning, 32-43, 2023
Unsupervised reinforcement learning with contrastive intrinsic control
M Laskin, H Liu, XB Peng, D Yarats, A Rajeswaran, P Abbeel
Advances in Neural Information Processing Systems 35, 34478-34491, 2022