Robust Deep Neural Networks
Explore 1 research publication tagged with this keyword
Publications Tagged with "Robust Deep Neural Networks"
1 publication found
2026
Robust and Structure-Aware Visual Representation Learning for Reliable Deep Neural Networks
This study presents a robust and structure-aware visual representation learning framework for medical image analysis, aiming to make deep neural networks more reliable, resilient, and interpretable. To move beyond accuracy-focused evaluation, the framework introduces edge-guided structural supervision, corruption-sensitive robustness assessment, and calibration-oriented reliability analysis. The structure-aware MobileNetV3 performs well on the Chest X-Ray dataset, with an accuracy of 0.8574, a high average confidence of 0.9284, and a controlled Expected Calibration Error (ECE) of 0.0710. The structure-aware ResNet-18 achieves an accuracy of 0.9071 with a low ECE of 0.0160, and DenseNet121 achieves an accuracy of 0.8894 with an ECE of 0.0319. A robustness study shows that performance trends remain consistent, with ROC-AUC values exceeding 0.92 even under multiple corruptions, including Gaussian noise and occlusion. Grad-CAM explainability analysis demonstrates anatomically directed attention on pulmonary regions, reinforcing the structural priors. To evaluate generalization beyond medical imaging, the framework is also tested on CIFAR-10, where the robust model achieves 71.92% clean accuracy, with substantially larger gains under noise and blur corruptions. The results indicate that structure-aware and reliability-driven learning promotes trustworthy model behavior, making the proposed framework suitable for real-world, safety-critical visual recognition systems.
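The calibration results above are reported as Expected Calibration Error (ECE). As a point of reference, the standard binned ECE can be sketched as follows; this is an illustrative implementation of the common definition, not code from the publication, and the bin count and toy data are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: partition predictions by confidence, then average the
    per-bin gap between accuracy and mean confidence, weighted by the
    fraction of samples falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by bin occupancy
    return ece

# Toy example: 80% confidence on every prediction, 8 of 10 correct,
# so accuracy matches confidence and the ECE is (near) zero.
conf = np.full(10, 0.8)
corr = np.array([1] * 8 + [0] * 2)
print(expected_calibration_error(conf, corr))
```

A well-calibrated model (like the ResNet-18 variant with ECE 0.0160) keeps this weighted gap small; an overconfident model scores high confidence in bins where accuracy is lower, inflating the ECE.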
