CSE - Bridging AI and Cybersecurity for Malware Detection, AI Security, and Study of the Evolving Underground Ecosystem
Nowadays, society's overwhelming reliance on complex cyberspace makes its security more important than ever. By integrating expertise across the interdisciplinary areas of AI and cybersecurity, our work focuses on answering the following research questions:

(1) How can we advance AI-driven innovations to protect users against evolving malware attacks? Malware (i.e., malicious software) has been a major weapon for cyberthreat actors to launch various attacks, such as the ransomware attack that forced the Colonial Pipeline shutdown in May 2021. Through long-term collaboration with industry partners, we are addressing several key challenges in malware analysis and detection by developing: i) advanced static and dynamic analysis techniques for effective feature representations of binary executables on both PC and mobile platforms; ii) innovative models to abstract the complex ecosystem of application (app) development; and iii) novel yet effective AI-driven techniques for large-scale malware detection.

(2) How can we improve the resilience of machine learning models against adversarial attacks? As machine learning (ML) models are deployed in ever more applications, the incentive to defeat them increases. Spanning shallow learning and deep learning (including deep neural networks and graph neural networks), we conduct original research that develops novel attack-perception models based on diverse mixtures of experts, together with adaptive defenses that combine randomization techniques with regularization algorithms designed on a min-max optimization strategy, to improve the resilience of ML models against both poisoning and evasion attacks.

(3) How can we gain a deep understanding of the evolving underground ecosystem for effective intervention against cybercrimes?
Driven by considerable profits, cybercriminals have used various techniques to exploit the weak links of cyberspace. They are organized within the online underground ecosystem, i.e., a loose federation of specialists selling capabilities, services, and resources explicitly tailored to Internet abuse. Within this ecosystem, underground markets, emerging in the form of underground forums and dark-web marketplaces, have played a central role in enabling them to exchange knowledge and trade in illicit products and services.
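The min-max optimization strategy mentioned under question (2) can be illustrated with a minimal sketch: an inner maximization crafts worst-case (evasion) perturbations of the inputs, and an outer minimization trains the model on those perturbed inputs. The example below is a hypothetical toy, not our actual system: it uses synthetic 2-D data as a stand-in for malware features, logistic regression as the model, and a one-step gradient-sign (FGSM-style) attack as the inner maximizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data (a hypothetical stand-in for extracted malware features).
X = rng.normal(size=(200, 2)) + np.where(rng.random(200) < 0.5, 2.0, -2.0)[:, None]
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def loss_grad_w(w, X, y):
    # Gradient of the logistic loss with respect to the weights w.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def fgsm(w, X, y, eps):
    # Inner maximization: one-step gradient-sign perturbation of the inputs.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)  # d(loss)/dX for the logistic loss
    return X + eps * np.sign(grad_x)

w = np.zeros(2)
eps, lr = 0.5, 0.5
for _ in range(300):
    X_adv = fgsm(w, X, y, eps)          # max: worst-case perturbed inputs
    w -= lr * loss_grad_w(w, X_adv, y)  # min: update model on those inputs

acc_clean = np.mean((sigmoid(X @ w) > 0.5) == y)
acc_adv = np.mean((sigmoid(fgsm(w, X, y, eps) @ w) > 0.5) == y)
```

Because the model is trained directly on the attacker's best responses, it retains accuracy on perturbed inputs that would fool a conventionally trained classifier; the same min-max template underlies adversarial training for deep networks, with the one-step attack replaced by stronger iterative ones.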
In addition to the above topics, we aim to answer: "How can we enable trustworthy AI in response to emerging safety issues?" Besides its bright side, AI can also be abused to do harm (e.g., casualties caused by misleading traffic signals, or suicides encouraged by AI-generated conversations). Building upon our extensive research at the intersection of AI and cybersecurity, we will explore the new yet challenging direction of AI safety, aiming to prevent AI from doing harm.