Guannan Qu
Title · Cited by · Year
Harnessing smoothness to accelerate distributed optimization
G Qu, N Li
IEEE Transactions on Control of Network Systems 5 (3), 1245-1260, 2017
Cited by 760 · 2017
Reinforcement learning for selective key applications in power systems: Recent advances and future challenges
X Chen, G Qu, Y Tang, S Low, N Li
IEEE Transactions on Smart Grid 13 (4), 2935-2958, 2022
Cited by 322* · 2022
Accelerated distributed Nesterov gradient descent
G Qu, N Li
IEEE Transactions on Automatic Control 65 (6), 2566-2581, 2019
Cited by 294* · 2019
Optimal scheduling of battery charging station serving electric vehicles based on battery swapping
X Tan, G Qu, B Sun, N Li, DHK Tsang
IEEE Transactions on Smart Grid 10 (2), 1372-1384, 2017
Cited by 171 · 2017
On the exponential stability of primal-dual gradient dynamics
G Qu, N Li
IEEE Control Systems Letters 3 (1), 43-48, 2018
Cited by 157 · 2018
Optimal distributed feedback voltage control under limited reactive power
G Qu, N Li
IEEE Transactions on Power Systems 35 (1), 315-331, 2019
Cited by 148 · 2019
Real-time decentralized voltage control in distribution networks
N Li, G Qu, M Dahleh
2014 52nd Annual Allerton Conference on Communication, Control, and …, 2014
Cited by 145 · 2014
Scalable reinforcement learning for multiagent networked systems
G Qu, A Wierman, N Li
Operations Research 70 (6), 3601-3628, 2022
Cited by 134* · 2022
A random forest method for real-time price forecasting in New York electricity market
J Mei, D He, R Harley, T Habetler, G Qu
2014 IEEE PES General Meeting | Conference & Exposition, 1-5, 2014
Cited by 133 · 2014
Finite-time analysis of asynchronous stochastic approximation and Q-learning
G Qu, A Wierman
Conference on Learning Theory, 3185-3205, 2020
Cited by 131 · 2020
Online optimization with predictions and switching costs: Fast algorithms and the fundamental limit
Y Li, G Qu, N Li
IEEE Transactions on Automatic Control 66 (10), 4761-4768, 2020
Cited by 128* · 2020
Distributed greedy algorithm for multi-agent task assignment problem with submodular utility functions
G Qu, D Brown, N Li
Automatica 105, 206-215, 2019
Cited by 87* · 2019
Learning optimal power flow: Worst-case guarantees for neural networks
A Venzke, G Qu, S Low, S Chatzivasileiadis
2020 IEEE International Conference on Communications, Control, and Computing …, 2020
Cited by 82 · 2020
Scalable multi-agent reinforcement learning for networked systems with average reward
G Qu, Y Lin, A Wierman, N Li
Advances in Neural Information Processing Systems 33, 2074-2086, 2020
Cited by 80 · 2020
Distributed optimal voltage control with asynchronous and delayed communication
S Magnússon, G Qu, N Li
IEEE Transactions on Smart Grid 11 (4), 3469-3482, 2020
Cited by 74 · 2020
Multi-agent reinforcement learning in stochastic networked systems
Y Lin, G Qu, L Huang, A Wierman
Advances in Neural Information Processing Systems 34, 7825-7837, 2021
Cited by 65* · 2021
Stability constrained reinforcement learning for real-time voltage control
Y Shi, G Qu, S Low, A Anandkumar, A Wierman
2022 American Control Conference (ACC), 2715-2721, 2022
Cited by 47 · 2022
Perturbation-based regret analysis of predictive control in linear time varying systems
Y Lin, Y Hu, G Shi, H Sun, G Qu, A Wierman
Advances in Neural Information Processing Systems 34, 5174-5185, 2021
Cited by 41 · 2021
Voltage control using limited communication
S Magnússon, G Qu, C Fischione, N Li
IEEE Transactions on Control of Network Systems 6 (3), 993-1003, 2019
Cited by 38 · 2019
Robustness and consistency in linear quadratic control with untrusted predictions
T Li, R Yang, G Qu, G Shi, C Yu, A Wierman, S Low
Proceedings of the ACM on Measurement and Analysis of Computing Systems 6 (1 …, 2022
Cited by 35 · 2022