[2020/12/02] Our paper on efficiently training verifiably robust deep neural networks, “Adversarial Training and Provable Robustness: A Tale of Two Objectives”, has been accepted to AAAI'21.
[2020/08/13] An arXiv draft of “Adversarial Training and Provable Robustness: A Tale of Two Objectives” is available.
[2020/07/07] Our paper “Divide and Slide: Layer-Wise Refinement for Output Range Analysis of Deep Neural Networks” has been accepted to EMSOFT'20.
[2020/06/23] Our new tool paper “ReachNN*: A Tool for Reachability Analysis of Neural-Network Controlled Systems” will appear at ATVA'20.
[2019/08/09] Our new paper “Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems” will appear at ICCAD'19.
[2019/07/10] Our paper “ReachNN: Reachability Analysis of Neural-Network Controlled Systems” has been accepted to appear at EMSOFT’19.
[2019/03/20] Our paper “Safety-Guided Deep Reinforcement Learning via Online Gaussian Process Estimation” has been accepted at the ICLR'19 Workshop on SafeML.
[2017/09/01] I joined the Dependable Computing Lab as a Ph.D. student supervised by Prof. Wenchao Li.
[2017/07/01] I graduated from BIT with a Bachelor of Engineering degree with distinction.
[2016/11/01] I finished my research internship at the University of California, Irvine, with Prof. Mohammad Al Faruque.