International Research Journal of Scientific Reports and Reviews
e-ISSN: 3108-1711

Dimpal Agrawal

Author Profile
Research Associate, Department of Computer Science, Kalp Laboratories, Mathura, Uttar Pradesh

Publications: 2 • Years Active: 1 • Collaborators: 5 • Citations: 71

Publications by Dimpal Agrawal

2 publications found • Active in 2026

2026 (2 publications)

Hallucination in Large Language Models: Characterization, Detection, and Mitigation Approaches

with Meenal Vardar, Mayank Sharma, Ankur Vashistha
3/3/2026

Hallucination in large language models is a significant barrier to preserving factual accuracy and dependability in AI-generated outputs. Using a benchmark Kaggle dataset, this work provides a comprehensive evaluation of both advanced transformer-based architectures and traditional machine learning classifiers for hallucination identification. The authors compared fine-tuned transformer models, including DistilBERT, RoBERTa, and DeBERTa, against baseline models such as Random Forest, SVM, and Logistic Regression. The results show that transformer-based models were more robust and better at understanding context, although conventional models such as Random Forest still achieved a high overall accuracy of 94.10%. DistilBERT struck a strong balance between precision and readability. Confusion matrix analysis demonstrated that the models reduced false alarms on non-hallucinated outputs, and ROC-AUC scores confirmed the transformers' precision and their ability to identify subtle semantic discrepancies. Further analysis provides supporting evidence that deeper context modeling yields real gains in detection reliability, as reflected in reduced hallucination rates and error-frequency assessments. In conclusion, this research shows that combining traditional and modern approaches is beneficial and that fine-tuning transformer models holds promise for reducing hallucinations; it represents an early step toward more trustworthy, human-aligned AI models.
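
As a rough illustration of the baseline side of the comparison this abstract describes, the sketch below trains a TF-IDF + Logistic Regression detector and reports accuracy, a confusion matrix, and ROC-AUC. It is a minimal sketch under stated assumptions: the tiny inline corpus and its labels are synthetic placeholders, not the Kaggle benchmark used in the paper, and the paper's transformer models are not reproduced here.

    # Minimal baseline sketch: TF-IDF + Logistic Regression for hallucination
    # detection, mirroring the simplest of the paper's traditional baselines.
    # The inline corpus is a synthetic placeholder (label 1 = hallucinated),
    # NOT the Kaggle benchmark evaluated in the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split

    texts = [
        "The Eiffel Tower is located in Paris, France.",
        "The Eiffel Tower was completed on the Moon in 1889.",
        "Water boils at 100 degrees Celsius at sea level.",
        "Water boils at 10 degrees Celsius at sea level.",
        "Python was created by Guido van Rossum.",
        "Python was created by Alan Turing in 1843.",
        "The Pacific is the largest ocean on Earth.",
        "The Pacific Ocean is smaller than Lake Geneva.",
    ]
    labels = [0, 1, 0, 1, 0, 1, 0, 1]

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.25, stratify=labels, random_state=0
    )

    # Word and bigram TF-IDF features feed a linear classifier.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(X_train), y_train)

    X_test_vec = vectorizer.transform(X_test)
    pred = clf.predict(X_test_vec)
    prob = clf.predict_proba(X_test_vec)[:, 1]  # P(hallucinated)

    print("accuracy:", accuracy_score(y_test, pred))
    print("confusion matrix:\n", confusion_matrix(y_test, pred))
    print("ROC-AUC:", roc_auc_score(y_test, prob))

A transformer counterpart (for example, a fine-tuned DistilBERT sequence classifier) would replace the TF-IDF features with learned contextual representations; the evaluation metrics stay the same.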

Robust and Structure-Aware Visual Representation Learning for Reliable Deep Neural Networks

with Mayank Sharma, Kirti Sharma
3/3/2026
pp. 139-154

This study presents a robust, structure-aware visual representation learning framework for medical image analysis, aiming to make deep neural networks more reliable, resilient, and interpretable. To move beyond accuracy-focused evaluation, it introduces edge-guided structural supervision, corruption-sensitive robustness assessment, and calibration-oriented reliability analysis. On the Chest X-Ray dataset, the structure-aware MobileNetV3 performs well, with an accuracy of 0.8574, a high average confidence of 0.9284, and a controlled Expected Calibration Error (ECE) of 0.0710; the structure-aware ResNet-18 achieves an accuracy of 0.9071 with a low ECE of 0.0160, and DenseNet121 reaches an accuracy of 0.8894 with an ECE of 0.0319. A robustness study shows that performance trends remain consistent, with ROC-AUC values exceeding 0.92 even under multiple perturbations, including Gaussian noise and occlusion. Grad-CAM explainability analysis demonstrates anatomically grounded attention on pulmonary regions, reinforcing the structural priors. To evaluate generalization beyond medical imaging, the framework is also tested on CIFAR-10, where the robust model reaches 71.92% clean accuracy and its advantage grows substantially under noise and blur corruptions. The results indicate that structure-aware, reliability-driven learning encourages trustworthy model behavior, making the proposed framework suitable for real-world, safety-critical visual recognition systems.
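
Since this abstract leans on Expected Calibration Error, the sketch below shows the standard equal-width-binned estimator: predictions are grouped into confidence bins, and ECE is the bin-size-weighted mean gap between each bin's accuracy and its average confidence. The binning scheme and the toy data are assumptions for illustration; the abstract does not specify the paper's exact implementation.

    # Minimal sketch of Expected Calibration Error (ECE) with equal-width
    # confidence bins. The binning scheme and the toy data are assumptions;
    # the abstract reports ECE values without specifying the implementation.
    import numpy as np

    def expected_calibration_error(confidences, predictions, labels, n_bins=15):
        """ECE = sum over bins of (bin weight) * |bin accuracy - bin confidence|."""
        confidences = np.asarray(confidences, dtype=float)
        correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
        bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            # Assign each sample to one bin by its top-class confidence.
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap  # in_bin.mean() is |bin| / N
        return ece

    # Toy usage: an overconfident classifier (~94% mean confidence, ~85%
    # accuracy) should show a visible calibration gap, with ECE near 0.09.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    predictions = np.where(rng.random(1000) < 0.85, labels, 1 - labels)
    confidences = rng.uniform(0.90, 0.99, size=1000)
    print(f"ECE: {expected_calibration_error(confidences, predictions, labels):.4f}")

A well-calibrated model drives this gap toward zero, which is why the low ECE values reported for the structure-aware ResNet-18 (0.0160) and DenseNet121 (0.0319) are presented as reliability evidence alongside raw accuracy.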
