Abstract: The field now has the technology to use Generative Adversarial Networks (GANs) for impressive feats of data synthesis, simulation, and content generation in artificial intelligence. Doing so, however, brings us face to face with two severe challenges: limited interpretability and vulnerability to adversarial attacks. These constraints have likely limited the integration of GAN techniques in areas where public safety is paramount or trust is indispensable. This research aims to ......
Keywords: Generative Adversarial Networks (GANs), Explainable AI (XAI), Adversarial AI, Interpretability, Robustness, Deep Learning, Model Security, Trustworthy AI, Adversarial Defense, Latent Space Analysis