Fahim Dalvi
Qatar Computing Research Institute
Verified email at hbku.edu.qa - Homepage
Title
Cited by
Year
What do Neural Machine Translation Models Learn about Morphology?
Y Belinkov, N Durrani, F Dalvi, H Sajjad, J Glass
arXiv preprint arXiv:1704.03471, 2017
448 · 2017
Fighting the COVID-19 infodemic: modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society
F Alam, S Shaar, F Dalvi, H Sajjad, A Nikolov, H Mubarak, GDS Martino, ...
arXiv preprint arXiv:2005.00033, 2020
276* · 2020
Identifying and Controlling Important Neurons in Neural Machine Translation
A Bau, Y Belinkov, H Sajjad, N Durrani, F Dalvi, J Glass
arXiv preprint arXiv:1811.01157, 2018
197 · 2018
What is one grain of sand in the desert? Analyzing individual neurons in deep NLP models
F Dalvi, N Durrani, H Sajjad, Y Belinkov, A Bau, J Glass
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 6309-6317, 2019
190 · 2019
Findings of the IWSLT 2020 Evaluation Campaign
E Ansari, A Axelrod, N Bach, O Bojar, R Cattoni, F Dalvi, N Durrani, ...
Proceedings of the 17th International Conference on Spoken Language …, 2020
131 · 2020
Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks
Y Belinkov, L Màrquez, H Sajjad, N Durrani, F Dalvi, J Glass
Proceedings of the Eighth International Joint Conference on Natural Language …, 2017
126 · 2017
On the effect of dropping layers of pre-trained transformer models
H Sajjad, F Dalvi, N Durrani, P Nakov
Computer Speech & Language 77, 101429, 2023
112 · 2023
Poor man’s BERT: Smaller and faster transformer models
H Sajjad, F Dalvi, N Durrani, P Nakov
arXiv preprint arXiv:2004.03844, 2020
111 · 2020
Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation
F Dalvi, N Durrani, H Sajjad, S Vogel
Proceedings of the 2018 Conference of the North American Chapter of the …, 2018
105 · 2018
Analyzing Individual Neurons in Pre-trained Language Models
N Durrani, H Sajjad, F Dalvi, Y Belinkov
arXiv preprint arXiv:2010.02695, 2020
97 · 2020
Analyzing redundancy in pretrained transformer models
F Dalvi, H Sajjad, N Durrani, Y Belinkov
arXiv preprint arXiv:2004.04010, 2020
92 · 2020
Similarity Analysis of Contextual Word Representation Models
JM Wu, Y Belinkov, H Sajjad, N Durrani, F Dalvi, J Glass
arXiv preprint arXiv:2005.01172, 2020
80 · 2020
On the Linguistic Representational Power of Neural Machine Translation Models
Y Belinkov, N Durrani, F Dalvi, H Sajjad, J Glass
Computational Linguistics 46 (1), 1-52, 2020
75 · 2020
Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder
F Dalvi, N Durrani, H Sajjad, Y Belinkov, S Vogel
Proceedings of the Eighth International Joint Conference on Natural Language …, 2017
74 · 2017
Neuron-level interpretation of deep NLP models: A survey
H Sajjad, N Durrani, F Dalvi
Transactions of the Association for Computational Linguistics 10, 1285-1303, 2022
66 · 2022
NeuroX: A toolkit for analyzing individual neurons in neural networks
F Dalvi, A Nortonsmith, A Bau, Y Belinkov, H Sajjad, N Durrani, J Glass
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 9851-9852, 2019
66 · 2019
Discovering Latent Concepts Learned in BERT
F Dalvi, AR Khan, F Alam, N Durrani, J Xu, H Sajjad
International Conference on Learning Representations, 2021
65 · 2021
Neural Machine Translation Training in a Multi-Domain Scenario
H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel
arXiv preprint arXiv:1708.08712, 2017
55 · 2017
How transfer learning impacts linguistic knowledge in deep NLP models?
N Durrani, H Sajjad, F Dalvi
arXiv preprint arXiv:2105.15179, 2021
54 · 2021
One Size Does Not Fit All: Comparing NMT Representations of Different Granularities
N Durrani, F Dalvi, H Sajjad, Y Belinkov, P Nakov
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
51 · 2019
Articles 1–20