International Conference on Machine Learning, Artificial Intelligence and Data Science

KAIQI ZHAO

Biography

Dr. Kaiqi Zhao is a Tenure-Track Assistant Professor in the Computer Science and Engineering Department at Oakland University, where she also serves as the Director of the Efficient AI Lab. She earned her Ph.D. in Computer Science from Arizona State University in 2024, with a research focus on efficient AI and deep learning model compression. Her expertise spans deep learning, distributed computing, and AI systems optimization, with contributions to both academia and industry, including multiple applied scientist internships at Amazon Web Services. Dr. Zhao has authored peer-reviewed publications in top-tier conferences such as AISTATS, ICASSP, Interspeech, and CVPR, and her work has been integrated into real-world systems, including Amazon Alexa. She is an active reviewer for prestigious AI venues such as NeurIPS, ICML, ICLR, and AAAI, and has received several accolades, including the NSF Best Poster Award and Oakland University's Faculty Research Fellowship. Through her teaching, mentorship, and innovative research, Dr. Zhao continues to advance the frontier of efficient AI technologies.

Research Interests

Efficient AI, Deep Learning Model Compression (Knowledge Distillation, Pruning, Quantization), Distributed/Cloud/Edge Computing

Abstract

Deep learning models are increasingly employed by smart devices on the edge to support important applications such as real-time virtual assistants and privacy-preserving healthcare. However, deploying state-of-the-art (SOTA) deep learning models on such devices faces multiple serious challenges. First, large models are infeasible to deploy on resource-constrained edge devices, whereas small models cannot achieve SOTA accuracy. Second, it is difficult to customize models for diverse application requirements in accuracy and speed, and for the diverse capabilities of edge devices. This talk presents several novel solutions that comprehensively address these challenges through automated and improved model compression, advancing the state of Edge AI.
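To give a concrete flavor of one compression technique named in the research interests above, the sketch below shows generic magnitude-based weight pruning. This is a minimal, illustrative example and not Dr. Zhao's specific method; real compression pipelines typically prune structured groups of weights and fine-tune the model afterward to recover accuracy.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly `sparsity`
    fraction of entries become zero.

    Illustrative sketch only: not the speaker's method. Practical pruning
    operates per layer and is followed by fine-tuning.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest magnitude; everything at or below it is pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 75% of a random 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned = magnitude_prune(w, 0.75)
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```

The surviving large-magnitude weights keep most of the model's representational capacity, which is why magnitude pruning is a common baseline for shrinking models toward edge-device budgets.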