Soufiane Hayou
Simons Institute for the Theory of Computing, UC Berkeley
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
On the impact of the activation function on deep neural networks training
S Hayou, A Doucet, J Rousseau
36th International Conference on Machine Learning (ICML 2019), 2019
193 | 2019
On the selection of initialization and activation function for deep neural networks
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1805.08266, 2018
82 | 2018
Robust Pruning at Initialization
S Hayou, JF Ton, A Doucet, YW Teh
International Conference on Learning Representations (ICLR 2021), 2021
34 | 2021
Stable ResNet
S Hayou, E Clerico, B He, G Deligiannidis, A Doucet, J Rousseau
24th International Conference on Artificial Intelligence and Statistics …, 2021
24 | 2021
Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1905.13654, 2019
23 | 2019
On the impact of the activation function on deep neural networks training
S Hayou, A Doucet, J Rousseau
Proceedings of the 36th International Conference on Machine Learning, Long …, 2019
14* | 2019
Training dynamics of deep networks using stochastic gradient descent via neural tangent kernel
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1905.13654, 2019
11 | 2019
Pruning untrained neural networks: Principles and analysis
S Hayou, JF Ton, A Doucet, YW Teh
arXiv e-prints, arXiv: 2002.08797, 2020
10 | 2020
On the infinite-depth limit of finite-width neural networks
S Hayou
Transactions on Machine Learning Research (arXiv:2210.00688), 2022
6 | 2022
Feature learning and signal propagation in deep neural networks
Y Lou, CE Mingard, S Hayou
International Conference on Machine Learning, 14248-14282, 2022
6 | 2022
Regularization in ResNet with Stochastic Depth
S Hayou, F Ayed
NeurIPS 2021, arXiv:2106.03091, 2021
5 | 2021
The curse of depth in kernel regime
S Hayou, A Doucet, J Rousseau
I (Still) Can't Believe It's Not Better! Workshop at NeurIPS 2021, 41-47, 2022
4 | 2022
Connecting Optimization and Generalization via Gradient Flow Path Length
F Liu, H Yang, S Hayou, Q Li
arXiv preprint arXiv:2202.10670, 2022
3 | 2022
Probabilistic fine-tuning of pruning masks and PAC-Bayes self-bounded learning
S Hayou, B He, GK Dziugaite
arXiv preprint arXiv:2110.11804, 2021
2 | 2021
On the selection of initialization and activation function for deep neural networks. CoRR abs/1805.08266 (2018)
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1805.08266, 2018
2 | 2018
Data pruning and neural scaling laws: fundamental limitations of score-based algorithms
F Ayed, S Hayou
arXiv preprint arXiv:2302.06960, 2023
1 | 2023
Width and Depth Limits Commute in Residual Networks
S Hayou, G Yang
ICML 2023 (arXiv preprint arXiv:2302.00453), 2023
1 | 2023
The curse of (non) convexity: The case of an Optimization-Inspired Data Pruning algorithm
F Ayed, S Hayou
NeurIPS ICBINB Workshop 2022, 2022
1 | 2022
On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks
S Hayou
arXiv preprint arXiv:2309.09171, 2023
— | 2023
From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality
F Liu, H Yang, S Hayou, Q Li
Transactions on Machine Learning Research, 2022
— | 2022
Articles 1–20