Mark Kurtz
Neural Magic
Verified email at neuralmagic.com
Title · Cited by · Year
Inducing and exploiting activation sparsity for fast inference on deep neural networks
M Kurtz, J Kopinsky, R Gelashvili, A Matveev, J Carr, M Goin, W Leiserson, ...
International Conference on Machine Learning, 5533-5543, 2020
Cited by 138 · 2020
The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models
E Kurtic, D Campos, T Nguyen, E Frantar, M Kurtz, B Fineran, M Goin, ...
arXiv preprint arXiv:2203.07259, 2022
Cited by 71 · 2022
How well do sparse ImageNet models transfer?
E Iofinova, A Peste, M Kurtz, D Alistarh
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 31 · 2022
Sparse*BERT: Sparse models are robust
D Campos, A Marques, T Nguyen, M Kurtz, C Zhai
arXiv preprint arXiv:2205.12452, 2022
Cited by 4 · 2022
System and method of accelerating execution of a neural network
A Matveev, D Alistarh, J Kopinsky, R Gelashvili, M Kurtz, N Shavit
US Patent 11,195,095, 2021
Cited by 4 · 2021
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
D Campos, A Marques, M Kurtz, CX Zhai
arXiv preprint arXiv:2303.17612, 2023
Cited by 1 · 2023
Sparse*BERT: Sparse models generalize to new tasks and domains
D Campos, A Marques, T Nguyen, M Kurtz, CX Zhai
arXiv preprint arXiv:2205.12452, 2022
Cited by 1 · 2022
System and method of training a neural network
M Kurtz, D Alistarh
US Patent App. 17/149,043, 2021
Cited by 1 · 2021
System and method of accelerating execution of a neural network
A Matveev, D Alistarh, J Kopinsky, R Gelashvili, M Kurtz, N Shavit
US Patent 11,797,855, 2023
Year 2023
Articles 1–9