Alumnus Yingjie Lao has received an award from the National Science Foundation’s (NSF) Faculty Early Career Development (CAREER) Program. One of the most prestigious awards instituted by the NSF, it recognizes and supports faculty early in their careers who show the potential to “serve as academic role models in research and education and to lead advances in the mission of their department or organization.” Yingjie earned his doctoral degree in 2015 under the guidance of Professor Keshab Parhi.
Yingjie’s Project: Protection of Deep Learning Systems
As artificial intelligence (AI) approaches human-level performance, its successful deployment requires robust protection from adversarial attacks. While there has been significant progress on strengthening AI algorithms, there is a gap in the systematic study of hardware-oriented vulnerabilities and related countermeasures. In his project titled “Protecting Deep Learning Systems against Hardware-Oriented Vulnerabilities,” Yingjie’s goal is to explore novel hardware-oriented adversarial AI concepts and develop defense strategies against such vulnerabilities to protect next-generation AI systems.
The project has four key areas: exploring new adversarial attacks, featuring the design of an algorithm-hardware collaborative backdoor attack; developing methodologies that incorporate the hardware aspect into defenses against vulnerabilities in the untrusted semiconductor supply chain; developing novel signature-embedding frameworks to protect the integrity of deep neural network models in the untrusted model-building supply chain; and finally, developing model recovery strategies to mitigate hardware-oriented fault attacks in the untrusted user space.
Practical and Educational Outcomes
Upon successful completion of the project, Yingjie hopes to offer novel methodologies that ensure trust in AI systems from both the algorithm and hardware perspectives, and to meet future commercial and national defense needs. His intent is to accelerate advances in AI applications across diverse sectors, including healthcare, autonomous vehicles, and the Internet of Things (IoT), and to enable widespread deployment of AI on mobile and edge devices.
Yingjie plans to integrate new theories and techniques developed over the course of the project into undergraduate and graduate education. He also hopes to use them to raise public awareness and promote understanding of the importance of AI security.
Yingjie Lao’s dissertation, titled “Authentication and Obfuscation of Digital Signal Processing Integrated Circuits,” focuses on authentication- and obfuscation-based techniques for protecting hardware devices. He has developed and implemented several novel hardware security primitives, including reconfigurable Physical Unclonable Functions (PUFs), modified feed-forward PUFs, a two-arbiter PUF, and a Beat Frequency Detector (BFD)-based True Random Number Generator (TRNG). Currently, Yingjie is an assistant professor in Clemson University’s Holcombe Department of Electrical and Computer Engineering.