Optimal rates for multi-pass stochastic gradient methods. J. Lin, L. Rosasco. Journal of Machine Learning Research, 18(1):3375–3421, 2017. Cited by 119.
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces. J. Lin, A. Rudi, L. Rosasco, V. Cevher. Applied and Computational Harmonic Analysis, 48(3):868–890, 2020. Cited by 89.
Generalization properties and implicit regularization for multiple passes SGM. J. Lin, R. Camoriano, L. Rosasco. International Conference on Machine Learning, 2340–2348, 2016. Cited by 78.
New bounds for restricted isometry constants with coherent tight frames. J. Lin, S. Li, Y. Shen. IEEE Transactions on Signal Processing, 61(3):611–621, 2013. Cited by 52.
Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO. J. Lin, S. Li. Applied and Computational Harmonic Analysis, 37(1):126–139, 2014. Cited by 45.
Iterative regularization for learning with convex loss functions. J. Lin, L. Rosasco, D.-X. Zhou. Journal of Machine Learning Research, 17(1):2718–2755, 2016. Cited by 43.
Compressed sensing with coherent tight frames via ℓq minimization. S. Li, J. Lin. Inverse Problems and Imaging, 8(3):761–777, 2014. Cited by 40.
Online learning algorithms can converge comparably fast as batch learning. J. Lin, D.-X. Zhou. IEEE Transactions on Neural Networks and Learning Systems, 29(6):2367–2378, 2017. Cited by 36.
Block sparse recovery via mixed ℓ2/ℓ1 minimization. J.-H. Lin, S. Li. Acta Mathematica Sinica, English Series, 29(7):1401–1412, 2013. Cited by 35.
Learning theory of randomized Kaczmarz algorithm. J. Lin, D.-X. Zhou. Journal of Machine Learning Research, 16(1):3341–3365, 2015. Cited by 34.
Optimal convergence for distributed learning with stochastic gradient methods and spectral algorithms. J. Lin, V. Cevher. Journal of Machine Learning Research, 21(1):5852–5914, 2020. Cited by 27.
Optimal distributed learning with multi-pass stochastic gradient methods. J. Lin, V. Cevher. International Conference on Machine Learning, 3092–3101, 2018. Cited by 26.
Restricted-isometry properties adapted to frames for nonconvex-analysis. J. Lin, S. Li. IEEE Transactions on Information Theory, 62(8):4733–4747, 2016. Cited by 24.
Compressed data separation with redundant dictionaries. J. Lin, S. Li, Y. Shen. IEEE Transactions on Information Theory, 59(7):4309–4315, 2013. Cited by 24.
Online pairwise learning algorithms with convex loss functions. J. Lin, Y. Lei, B. Zhang, D.-X. Zhou. Information Sciences, 406:57–70, 2017. Cited by 23.
Nonuniform support recovery from noisy random measurements by orthogonal matching pursuit. J. Lin, S. Li. Journal of Approximation Theory, 165(1):20–40, 2013. Cited by 16.
Convergence of projected Landweber iteration for matrix rank minimization. J. Lin, S. Li. Applied and Computational Harmonic Analysis, 36(2):316–325, 2014. Cited by 15.
Optimal convergence for distributed learning with stochastic gradient methods and spectral-regularization algorithms. J. Lin, V. Cevher. arXiv preprint arXiv:1801.07226, 2018. Cited by 14.
Modified Fejér sequences and applications. J. Lin, L. Rosasco, S. Villa, D.-X. Zhou. Computational Optimization and Applications, 71:95–113, 2018. Cited by 11.
Optimal rates for learning with Nyström stochastic gradient methods. J. Lin, L. Rosasco. arXiv preprint arXiv:1710.07797, 2017. Cited by 11.