e-ISSN: 3108-1711

International Research Journal of Scientific Reports and Reviews

HALLUCINATION IN LARGE LANGUAGE MODELS: CHARACTERIZATION, DETECTION, AND MITIGATION APPROACHES

Published in July-Dec 2025 (Vol. 1, Issue 1, 2025)

Abstract

Hallucination in large language models is a significant barrier to preserving factual accuracy and dependability in AI-generated outputs. Using a benchmark Kaggle dataset, this work provides a comprehensive evaluation of both advanced transformer-based architectures and traditional machine learning classifiers for hallucination detection. Fine-tuned transformer models (DistilBERT, RoBERTa, and DeBERTa) are compared against baseline classifiers (Random Forest, SVM, and Logistic Regression). The results show that the transformer-based models were more robust and better at capturing context, although a conventional model, Random Forest, still achieved a high overall accuracy of 94.10%. DistilBERT struck a strong balance between precision and efficiency. Confusion matrix analysis showed that the models reduced false alarms on non-hallucinated outputs, and the ROC-AUC scores confirmed the transformers' precision and their ability to identify subtle semantic discrepancies. These results provide supporting evidence that deeper context modeling yields real gains in detection reliability, as reflected in reduced hallucinations and lower error rates. In conclusion, this research shows that combining traditional and modern approaches is beneficial and that fine-tuning transformer models holds promise for reducing hallucinations, offering an early step toward more trustworthy, human-aligned AI models.
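The ROC-AUC metric cited in the abstract has a simple pairwise interpretation: it is the probability that a randomly chosen positive (hallucinated) example receives a higher classifier score than a randomly chosen negative (non-hallucinated) one. A minimal pure-Python sketch of this computation, using illustrative labels and scores rather than the paper's actual data:

```python
# Illustrative sketch (not the authors' code): ROC-AUC as the fraction of
# positive/negative pairs ranked correctly by the classifier's scores.
def roc_auc(labels, scores):
    """Compute ROC-AUC for binary labels (1 = hallucination) by pairwise ranking.

    A positive scored above a negative counts as a win; a tie counts as half.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with made-up scores: 3 of 4 pairs are ordered correctly.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A score of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why high ROC-AUC values support the claim that the transformer models separate hallucinated from faithful outputs reliably.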

Authors (4)

Meenal Vardar

Research Scholar, Department o...


Mayank Sharma

Research Associate, Department...


Dimpal Agrawal

Research Associate, Department...


Ankur Vashistha

Founder, Mukti Ecosmart techno...


Download Article

PDF

Best for printing and citation

File size: 0.7 MB
Format: PDF


Article Information

Article ID:
IRJSRR110003
Paper ID:
IRJSRR-01-000003
Published Date:
2026-03-03


How to Cite

Vardar, M., Sharma, M., Agrawal, D., & Vashistha, A. (2026). Hallucination in large language models: Characterization, detection, and mitigation approaches. International Research Journal of Scientific Reports and Reviews, 1(1), xx-xx. https://irjsrr.com/articles/1
