Tulasi Sai Charan Sharma Gaddam

Paper Publications

Research Paper Overview

Adversarial Vulnerability Analysis of Deep Neural Network-Based Intrusion Detection Systems Using FGSM and PGD Attacks

Author:

Tulasi Sai Charan Sharma Gaddam, Sacred Heart University, Connecticut, USA

Keywords:

Adversarial machine learning, intrusion detection, evasion attacks, data poisoning, model robustness, cybersecurity threats

Abstract (Short Summary)

Deep learning significantly improves Intrusion Detection Systems (IDS) by automatically recognizing complex attack patterns in network traffic, but these models are highly vulnerable to adversarial manipulation.
This paper analyzes how two powerful gradient-based attacks—Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD)—can degrade a deep neural network–based IDS on NSL‑KDD and CICIDS2017, and shows how increasing perturbation levels cause systematic drops in accuracy, precision, recall, and F1‑score.

Methodology

  • A deep neural network (DNN) IDS is trained on NSL‑KDD and CICIDS2017, using normalized and encoded network traffic features for binary intrusion detection.
  • Gradient-based adversarial examples are generated with FGSM and iterative PGD under a white‑box threat model, then evaluated using accuracy, precision, recall, F1‑score, ROC‑AUC, EER, and Attack Success Rate (ASR).
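The single-step FGSM perturbation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses a logistic-regression surrogate in place of the DNN (so the input gradient has a closed form), and the weight and feature values in the usage example below are hypothetical.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step on a logistic-regression surrogate model.

    x   : normalized feature vector (as in the paper's preprocessed traffic data)
    y   : true binary label (0 = benign, 1 = intrusion)
    w, b: surrogate model weights and bias (stand-ins for the DNN's parameters)
    eps : L-infinity perturbation budget
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    grad_x = (p - y) * w              # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)  # signed-gradient step, bounded by eps

# Hypothetical example: perturb one normalized traffic sample with eps = 0.1
x_adv = fgsm(np.array([0.2, 0.7, 0.5]), 1.0, np.array([1.0, -2.0, 0.5]), 0.0, 0.1)
```

By construction, every feature of `x_adv` differs from `x` by exactly ±eps, which is why the attack stays within the white-box threat model's perturbation budget.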

Key Findings

  • On clean data, the DNN IDS achieves 97.80% accuracy on NSL‑KDD and 98.10% on CICIDS2017, demonstrating strong baseline detection performance.
  • Under FGSM and PGD attacks with ε = 0.1, accuracy drops to as low as 75.30%–79.45% on NSL‑KDD and 78.40%–82.65% on CICIDS2017, with FGSM/PGD ASR reaching up to 22.50%.

  • Increasing the perturbation budget from ε = 0.01 to 0.20 causes a steep decline across all metrics, with F1‑score falling close to 62% at the highest perturbation level.
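The iterative PGD attack behind these ε-sweep results can be sketched as repeated FGSM steps with projection back into the ε-ball. Again this is a hedged illustration on a logistic-regression surrogate, not the paper's DNN code; the step size `alpha`, step count, and clipping to the normalized [0, 1] feature range are assumptions.

```python
import numpy as np

def pgd(x, y, w, b, eps, alpha=0.01, steps=20):
    """Projected Gradient Descent on a logistic-regression surrogate.

    Each iteration takes a small signed-gradient step of size alpha, then
    projects the result back into the L-infinity ball of radius eps around x
    and into the normalized feature range [0, 1].
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))           # sigmoid prediction
        grad_x = (p - y) * w                   # gradient of BCE loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep features normalized
    return x_adv
```

Sweeping `eps` over values such as 0.01 to 0.20 and re-evaluating accuracy, F1-score, and attack success rate at each budget reproduces the kind of degradation curve the paper reports.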

Security Impact

  • The results show that even high‑accuracy IDS models can be systematically bypassed by small, carefully designed adversarial changes to network traffic.
  • R2L and U2R classes are particularly vulnerable, indicating that behavior‑driven and privilege‑related features are highly sensitive to perturbations.

  • The work highlights the need to integrate adversarial defenses such as robust training, input preprocessing, ensembles, and standardized robustness evaluation into future IDS design.
This Master's project paper is ready for IEEE publication within 20 days.

Download The Paper

You can access the full paper, including the methodology, experiments, and detailed results, in PDF format below (IEEE journal publication).
