Shuai Zhang
Assistant Professor, Data Science
2117 Guttenberg Information Technologies Center (GITC)
About Me
I am an assistant professor in the Department of Data Science at the New Jersey Institute of Technology (NJIT), where I started in Fall 2023. I earned my Ph.D. in Electrical and Computer Engineering from Rensselaer Polytechnic Institute under the supervision of Dr. Meng Wang. Previously, I was fortunate to work with the IBM Thomas J. Watson Research Center and the MIT-IBM Watson AI Lab.
Education
Ph.D.; Rensselaer Polytechnic Institute; Electrical and Computer Engineering; 2021
B.E.; University of Science and Technology of China; 2016
2025 Spring Courses
DS 726 - INDEPENDENT STUDY II
DS 675 - MACHINE LEARNING
DS 700B - MASTER'S PROJECT
DS 725 - INDEPENDENT STUDY I
DS 790A - DOCT DISSERTATION & RES
DS 792B - PRE-DOCTORAL RESEARCH
DS 701B - MASTER'S THESIS
Past Courses
CS 732: ADVANCED MACHINE LEARNING
DS 675: MACHINE LEARNING
Research Interests
My research focuses on the theoretical foundations of deep learning and on the design of principled, fast algorithms for better, safer, and more efficient AI applications. My current work centers on the theoretical foundations of foundation models and on parameter-efficient transfer learning.
Conference Papers
“Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks”
ICLR, 2023.
“How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis”
ICLR, 2022.
“Why lottery ticket wins? A theoretical perspective of sample complexity on sparse neural networks”
Advances in Neural Information Processing Systems (NeurIPS), 2021.
“Fast learning of graph neural networks with guaranteed generalizability: one-hidden-layer case”
International Conference on Machine Learning (ICML), 2020.