Enhancing Image Quality in Facial Recognition Systems with GAN-Based Reconstruction Techniques
DOI: https://doi.org/10.34148/teknika.v14i1.1180

Keywords: Facial Recognition Systems, Image Reconstruction, Generative Adversarial Networks (GANs), PSNR, SSIM

Abstract
Facial recognition systems are pivotal in modern applications such as security, healthcare, and public services, where accurate identification is crucial. However, environmental factors, transmission errors, and deliberate obfuscation often degrade facial image quality, leading to misidentification and service disruptions. This study employs Generative Adversarial Networks (GANs) to address these challenges by reconstructing corrupted or occluded facial images with high fidelity. The proposed methodology integrates advanced GAN architectures, multi-scale feature extraction, and contextual loss functions to enhance reconstruction quality. Six experimental modifications to the GAN model were implemented, incorporating additional residual blocks, enhanced loss functions combining adversarial, perceptual, and reconstruction losses, and skip connections for improved spatial consistency. Extensive testing was conducted using Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) to quantify reconstruction quality, alongside face detection validation using SFace. The final model achieved an average PSNR of 26.93 dB and an average SSIM of 0.90, with confidence levels exceeding 0.55 in face detection tests, demonstrating its ability to preserve identity and structural integrity under challenging conditions, including occlusion and noise. The results show that advanced GAN-based methods effectively restore degraded facial images, ensuring accurate face detection and robust identity preservation. This research contributes to facial image processing by offering practical solutions for applications requiring high-quality image reconstruction and reliable facial recognition.
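For readers unfamiliar with the two evaluation metrics, the sketch below shows how PSNR and SSIM are typically computed with plain NumPy. It is illustrative only: the SSIM here is a simplified single-window variant (global means, variances, and covariance with the standard stabilizing constants), whereas production toolchains usually apply a Gaussian sliding window; the paper's exact evaluation pipeline is not specified here.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified single-window SSIM (no Gaussian sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # standard stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

A perfect reconstruction yields infinite PSNR and SSIM of 1.0; degraded reconstructions fall below that, which is how thresholds like the paper's 26.93 dB / 0.90 averages are read.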

License
Copyright (c) 2025 Teknika

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.