Y. Li, L. Zhu, X. Jia, Y. Jiang, S.-T. Xia, X. Cao. "Defending Against Model Stealing via Verifying Embedded External Features." Proceedings of the AAAI Conference on Artificial Intelligence 36 (2), 1464-1472, 2022. [47 citations]

L. Zhu, X. Liu, Y. Li, X. Yang, S.-T. Xia, R. Lu. "A Fine-Grained Differentially Private Federated Learning Against Leakage From Gradients." IEEE Internet of Things Journal 9 (13), 11500-11512, 2021. [21 citations]

Y. Gao, Y. Li, L. Zhu, D. Wu, Y. Jiang, S.-T. Xia. "Not All Samples Are Born Equal: Towards Effective Clean-Label Backdoor Attacks." Pattern Recognition 139, 109512, 2023. [19 citations]

Y. Li, L. Zhu, X. Jia, Y. Bai, Y. Jiang, S.-T. Xia, X. Cao. "MOVE: Effective and Harmless Ownership Verification via Embedded External Features." arXiv preprint arXiv:2208.02820, 2022. [6 citations]

X. Liu, L. Zhu, S.-T. Xia, Y. Jiang, X. Yang. "GDST: Global Distillation Self-Training for Semi-Supervised Federated Learning." 2021 IEEE Global Communications Conference (GLOBECOM), 1-6, 2021. [6 citations]

L. Zhu, Y. Li, X. Jia, Y. Jiang, S.-T. Xia, X. Cao. "Defending Against Model Stealing via Verifying Embedded External Features." ICML 2021 Workshop on Adversarial Machine Learning, 2021. [6 citations]

Y. Li, L. Zhu, Y. Bai, Y. Jiang, S.-T. Xia. "The Robust and Harmless Model Watermarking." In Digital Watermarking for Machine Learning Model: Techniques, Protocols and …, 2022.