Research Experience
- Attacks Meet Interpretability (AmI) Evaluation
- Intrusion Detection for Cellular Networks
- Landscape of Neural Networks and Their Robustness
Attacks Meet Interpretability (AmI) Evaluation
Penn State Prof. Mehnaz’s Lab
Advisor: Shagufta Mehnaz
January - August 2023
- Motivation: Model explanations, originally intended to enhance transparency in machine learning systems, can be exploited by attackers to mount more powerful attacks. I investigated how model explanations can be used both to generate and to detect adversarial examples
- Conducted experiments that revealed the limitations of a published defense that uses model explanations to detect adversarial samples (“Attacks Meet Interpretability: Attribute-Steered Detection of Adversarial Samples”)
- Discovered a significant flaw in Nicholas Carlini’s evaluation of the AmI model. (“Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples?”)
- Proposed and implemented a fair evaluation of AmI, sketched below; currently writing a report that covers our findings
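A minimal sketch of the kind of evaluation involved, assuming a generic PGD attack and a placeholder detector; the toy model and `Detector` stand-in are illustrative, not the actual AmI attribute-steered detector or VGG-Face models:

```python
# Illustrative sketch: craft PGD adversarial examples against a classifier and
# measure how many of them a detector flags as adversarial.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD: maximize cross-entropy within an eps-ball around x."""
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def evaluate(model, detector, x, y):
    """Report attack success rate and the fraction of adversarial inputs the detector flags."""
    x_adv = pgd_attack(model, x, y)
    fooled = (model(x_adv).argmax(dim=1) != y).float().mean().item()
    flagged = detector(x_adv).float().mean().item()  # detector returns 1 if the input looks adversarial
    return fooled, flagged

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    detector = lambda x: torch.zeros(x.shape[0])  # hypothetical placeholder detector
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    fooled, flagged = evaluate(model, detector, x, y)
    print(f"attack success rate: {fooled:.2%}, detected: {flagged:.2%}")
```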
Intrusion Detection for Cellular Networks
Penn State Systems and Network Security (SyNSec) Research Group
Advisor: Syed Rafiul Hussain
May 2021 - April 2023
- Motivation: The increasing scale and complexity of modern systems such as cellular networks make traditional rule-based defenses ineffective. I applied cutting-edge machine learning techniques to protect cellular networks, critical infrastructure of the digital world, from evolving attacks
- Designed and implemented a novel machine learning architecture in PyTorch that places an LSTM on top of BERT to capture relationships between segments of messages (i.e., inter-window context) for intrusion detection; see the sketch after this list
- Trained a BERT-based window encoder with contrastive learning to map each window to an embedding, and an LSTM-based message tagger that assigns a benign/attack label to each incoming message
- Achieved 95.64% test accuracy and 95.44% test F1 score on 13 common cellular network attacks
- Selected for Student Engagement Network Grant and Schreyer Honors College’s Summer Research Grant
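A minimal PyTorch sketch of the described architecture; the dimensions, vocabulary size, and randomly initialized BERT config are illustrative stand-ins for the project's actual setup, and the contrastive pretraining of the window encoder is omitted:

```python
# Illustrative sketch: a BERT-style encoder embeds each window of messages, and
# an LSTM over the window embeddings emits a benign/attack logit per position.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class WindowEncoder(nn.Module):
    """Encodes one window of message tokens into a fixed-size embedding (CLS vector)."""
    def __init__(self, vocab_size=1000, hidden=128):
        super().__init__()
        cfg = BertConfig(vocab_size=vocab_size, hidden_size=hidden,
                         num_hidden_layers=2, num_attention_heads=4,
                         intermediate_size=256)
        self.bert = BertModel(cfg)

    def forward(self, token_ids):                 # (batch, window_len)
        out = self.bert(input_ids=token_ids)
        return out.last_hidden_state[:, 0]        # (batch, hidden) CLS embedding

class MessageTagger(nn.Module):
    """LSTM over a sequence of window embeddings; emits a benign/attack logit per window."""
    def __init__(self, hidden=128, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, window_embs):               # (batch, num_windows, hidden)
        seq, _ = self.lstm(window_embs)
        return self.head(seq)                     # (batch, num_windows, num_classes)

if __name__ == "__main__":
    enc, tagger = WindowEncoder(), MessageTagger()
    # Toy batch: 2 traces, 5 windows each, 16 tokens per window.
    tokens = torch.randint(0, 1000, (2, 5, 16))
    embs = torch.stack([enc(tokens[:, i]) for i in range(tokens.shape[1])], dim=1)
    logits = tagger(embs)                         # (2, 5, 2) benign/attack logits
    print(logits.shape)
```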
Landscape of Neural Networks and Their Robustness
Penn State Center for Computational Mathematics and Applications
Advisors: Jinchao Xu and Jonathan Siegel
July 2020 - August 2021
- Motivation: To understand why adversarial examples can fool state-of-the-art neural networks, I studied neural networks as functions of their input space (e.g., pixels in an image) to evaluate how individual pixels influence the prediction for each class (e.g., digit 0, cat)
- Found that neural network functions (e.g., ResNet, VGG, DenseNet) become more sensitive to changes in input pixels as training progresses, which helps explain why even slight changes in pixel values can cause the network to produce very different outputs; a sensitivity probe is sketched after this list
- Selected for College of Engineering Equity REU and Multi-Campus Summer REU research scholarship
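A minimal sketch of a pixel-sensitivity probe, assuming a toy convolutional model in place of the ResNet/VGG/DenseNet models actually studied:

```python
# Illustrative sketch: measure the gradient of a class logit with respect to
# input pixels, i.e., how sensitive the prediction is to small pixel changes.
import torch
import torch.nn as nn

def pixel_sensitivity(model, x, target_class):
    """Return |d logit_target / d pixel| for each input pixel."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[:, target_class].sum()
    logit.backward()
    return x.grad.abs()          # same shape as x; larger = more sensitive pixel

if __name__ == "__main__":
    # Toy model on a random 32x32 RGB image.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    sens = pixel_sensitivity(model, x, target_class=0)
    # A scalar summary (e.g., mean gradient magnitude) can be tracked across
    # training checkpoints to see whether sensitivity grows during training.
    print(sens.mean().item())
```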