Anastasia Koloskova
PhD student, EPFL
Decentralized stochastic optimization and gossip algorithms with compressed communication
A Koloskova*, SU Stich*, M Jaggi
Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019
A unified theory of decentralized SGD with changing topology and local updates
A Koloskova*, N Loizou, S Boreiri, M Jaggi, SU Stich*
ICML 2020, 2020
Decentralized deep learning with arbitrary communication compression
A Koloskova*, T Lin*, SU Stich, M Jaggi
ICLR 2020, 2020
An improved analysis of gradient tracking for decentralized machine learning
A Koloskova, T Lin, SU Stich
Advances in Neural Information Processing Systems 34, 11422-11435, 2021
Consensus control for decentralized deep learning
L Kong, T Lin, A Koloskova, M Jaggi, SU Stich
ICML 2021, 2021
A linearly convergent algorithm for decentralized optimization: Sending less bits for free!
D Kovalev, A Koloskova, M Jaggi, P Richtárik, SU Stich
International Conference on Artificial Intelligence and Statistics, 4087-4095, 2021
Sharper convergence guarantees for asynchronous SGD for distributed and federated learning
A Koloskova, SU Stich, M Jaggi
NeurIPS 2022, 2022
RelaySum for decentralized deep learning on heterogeneous data
T Vogels*, L He*, A Koloskova, SP Karimireddy, T Lin, SU Stich, M Jaggi
Advances in Neural Information Processing Systems 34, 28004-28015, 2021
Decentralized local stochastic extra-gradient for variational inequalities
A Beznosikov, P Dvurechenskii, A Koloskova, V Samokhin, SU Stich, ...
Advances in Neural Information Processing Systems 35, 38116-38133, 2022
Efficient greedy coordinate descent for composite problems
SP Karimireddy*, A Koloskova*, SU Stich, M Jaggi
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Decentralized gradient tracking with local steps
Y Liu, T Lin, A Koloskova, SU Stich
Optimization Methods and Software, 1-28, 2024
Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
A Koloskova*, H Hendrikx*, SU Stich
ICML 2023, 2023
Data-heterogeneity-aware mixing for decentralized learning
Y Dandi, A Koloskova, M Jaggi, SU Stich
arXiv preprint arXiv:2204.06477, 2022
Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy
A Koloskova, R McKenna, Z Charles, J Rush, HB McMahan
Advances in Neural Information Processing Systems 36, 2023
On Convergence of Incremental Gradient for Non-Convex Smooth Functions
A Koloskova, N Doikov, SU Stich, M Jaggi
ICML 2024, 2024
Asynchronous SGD on Graphs: a Unified Framework for Asynchronous Decentralized and Federated Optimization
M Even, A Koloskova, L Massoulié
AISTATS 2024, 2024
Decentralized stochastic optimization with client sampling
Z Liu, A Koloskova, M Jaggi, T Lin
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
The Privacy Power of Correlated Noise in Decentralized Learning
Y Allouah, A Koloskova, AE Firdoussi, M Jaggi, R Guerraoui
ICML 2024, 2024
Optimization Algorithms for Decentralized, Distributed and Collaborative Machine Learning
A Koloskova
EPFL, 2024