Albert S. Berahas
Assistant Professor, University of Michigan
Verified email at umich.edu - Homepage
Title
Cited by
Year
A multi-batch L-BFGS method for machine learning
AS Berahas, J Nocedal, M Takáč
Advances in Neural Information Processing Systems, 1063-1071, 2016
73 · 2016
An Investigation of Newton-Sketch and Subsampled Newton Methods
AS Berahas, R Bollapragada, J Nocedal
Optimization Methods and Software 35 (4), 661-680, 2020
70 · 2020
Balancing communication and computation in distributed optimization
AS Berahas, R Bollapragada, NS Keskar, E Wei
IEEE Transactions on Automatic Control 64 (8), 3141-3155, 2018
48 · 2018
Quasi-Newton methods for deep learning: Forget the past, just sample
AS Berahas, M Jahani, P Richtárik, M Takáč
arXiv preprint arXiv:1901.09997, 2019
33 · 2019
adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs
NS Keskar, AS Berahas
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2016
31 · 2016
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
AS Berahas, L Cao, K Choromanski, K Scheinberg
Foundations of Computational Mathematics, 1-54, 2021
27 · 2021
Derivative-free optimization of noisy functions via quasi-Newton methods
AS Berahas, RH Byrd, J Nocedal
SIAM Journal on Optimization 29 (2), 965-993, 2019
27 · 2019
A robust multi-batch L-BFGS method for machine learning
AS Berahas, M Takáč
Optimization Methods and Software 35 (1), 191-219, 2020
20 · 2020
Sparse representation and least squares-based classification in face recognition
M Iliadis, L Spinoulas, AS Berahas, H Wang, AK Katsaggelos
2014 22nd European Signal Processing Conference (EUSIPCO), 526-530, 2014
11 · 2014
Global convergence rate analysis of a generic line search algorithm with noise
AS Berahas, L Cao, K Scheinberg
SIAM Journal on Optimization 31 (2), 1489-1518, 2021
6 · 2021
Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1
M Jahani, M Nazari, S Rusakov, AS Berahas, M Takáč
6th International Conference on Machine Learning, Optimization, and Data …, 2020
6 · 2020
Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization
AS Berahas, L Cao, K Choromanski, K Scheinberg
arXiv preprint arXiv:1905.13043, 2019
6 · 2019
Nested Distributed Gradient Methods with Adaptive Quantized Communication
AS Berahas, C Iakovidou, E Wei
58th IEEE Conference on Decision and Control (CDC), 1519-1525, 2019
4 · 2019
Multi-model robust error correction for face recognition
M Iliadis, L Spinoulas, AS Berahas, H Wang, AK Katsaggelos
2016 IEEE International Conference on Image Processing (ICIP), 3229-3233, 2016
2 · 2016
SONIA: A symmetric blockwise truncated optimization algorithm
M Jahani, M Nazari, R Tappenden, AS Berahas, M Takáč
International Conference on Artificial Intelligence and Statistics, 487-495, 2021
1 · 2021
Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
AS Berahas, FE Curtis, D Robinson, B Zhou
SIAM Journal on Optimization 31 (2), 1352-1379, 2021
1 · 2021
On the Convergence of Nested Decentralized Gradient Methods with Multiple Consensus and Gradient Steps
AS Berahas, R Bollapragada, E Wei
arXiv preprint arXiv:2006.01665, 2020
1 · 2020
Limited-memory BFGS with displacement aggregation
AS Berahas, FE Curtis, B Zhou
Mathematical Programming, 1-37, 2021
2021
Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations
Z Shi, NS Gulgec, AS Berahas, SN Pakzad, M Takáč
2020 19th IEEE International Conference on Machine Learning and Applications …, 2020
2020
An Investigation of Newton-Sketch and Subsampled Newton Methods: Supplementary Materials
AS Berahas, R Bollapragada, J Nocedal
2020
Articles 1–20