Tianxiang Gao (高天翔)
Department of Computer Science, Iowa State University
Verified email at iastate.edu
A global convergence theory for deep ReLU implicit networks via over-parameterization
T Gao, H Liu, J Liu, H Rajan, H Gao
International Conference on Learning Representations (ICLR 2022), 2022
Cited by: 16 · Year: 2022
Randomized Bregman coordinate descent methods for non-Lipschitz optimization
T Gao, S Lu, J Liu, C Chu
arXiv preprint arXiv:2001.05202, 2020
Cited by: 16 · Year: 2020
DID: Distributed incremental block coordinate descent for nonnegative matrix factorization
T Gao, C Chu
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
Cited by: 11 · Year: 2018
Minimum-volume-regularized weighted symmetric nonnegative matrix factorization for clustering
T Gao, S Olofsson, S Lu
2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP …, 2016
Cited by: 10 · Year: 2016
Hybrid classification approach of SMOTE and instance selection for imbalanced datasets
T Gao
Iowa State University, 2015
Cited by: 9 · Year: 2015
Leveraging two reference functions in block Bregman proximal gradient descent for non-convex and non-Lipschitz problems
T Gao, S Lu, J Liu, C Chu
arXiv preprint arXiv:1912.07527, 2019
Cited by: 4 · Year: 2019
On the convergence of randomized Bregman coordinate descent for non-Lipschitz composite problems
T Gao, S Lu, J Liu, C Chu
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by: 3 · Year: 2021
Gradient descent optimizes infinite-depth ReLU implicit networks with linear widths
T Gao, H Gao
arXiv preprint arXiv:2205.07463, 2022
Cited by: 2 · Year: 2022
Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models
T Gao, X Huo, H Liu, H Gao
Neural Information Processing Systems (NeurIPS 2023), 2023
Cited by: 1 · Year: 2023
On the optimization and generalization of overparameterized implicit neural networks
T Gao, H Gao
arXiv preprint arXiv:2209.15562, 2022
Cited by: 1 · Year: 2022
Infinitely Deep Residual Networks: Unveiling Wide Neural ODEs as Gaussian Processes
T Gao, X Huo, H Liu, H Gao
2023