This paper investigates the causes, implications, and mitigation strategies of AI hallucinations, with a focus on generative AI systems and large language models (LLMs). Synthesizing insights from recent academic research and industry findings, we conduct a systematic review of the current literature to identify key patterns in how hallucinations emerge and to examine the growing concern about their impact as AI becomes more embedded in decision-making systems. We identify core contributors, including data quality issues, model complexity, lack of grounding, and limitations inherent in the generative process. We then examine the risks across legal, business, and user-facing domains, highlighting consequences such as misinformation, erosion of trust, and productivity loss. To address these challenges, we survey mitigation techniques including data curation, retrieval-augmented generation (RAG), prompt engineering, fine-tuning, multi-model systems, and human-in-the-loop oversight. Our analysis draws on a wide range of academic and industry sources, offering both theoretical understanding and practical insights for AI practitioners. This is a review paper; all results are drawn from the cited references.
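As a concrete illustration of the retrieval-augmented generation (RAG) technique surveyed above, the following minimal Python sketch shows how retrieved passages can be prepended to a prompt so that the model is asked to answer only from grounded context. The toy corpus, overlap-based retriever, and prompt template are illustrative assumptions for exposition, not the implementation evaluated in the paper.

```python
# Minimal RAG sketch: retrieve supporting passages, then build a grounded prompt.
# Corpus, scoring rule, and prompt wording are illustrative assumptions.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple term overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved passages so the model is constrained to cited context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below; say 'unknown' if it is not covered.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    corpus = [
        "RAG grounds model outputs in retrieved documents.",
        "Prompt engineering constrains the model's response format.",
        "Human-in-the-loop review catches residual errors.",
    ]
    query = "How does RAG reduce hallucinations?"
    prompt = build_grounded_prompt(query, retrieve(query, corpus))
    print(prompt)  # In practice, this grounded prompt would be sent to an LLM.
```

The design intent is that constraining generation to retrieved evidence reduces the model's reliance on parametric memory, which is one of the hallucination pathways discussed in the review.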
@article{s1462025ijcatr14061003,
  author  = "Satyadhar Joshi",
  title   = "Comprehensive Review of AI Hallucinations: Impacts and Mitigation Strategies for Financial and Business Applications",
  journal = "International Journal of Computer Applications Technology and Research (IJCATR)",
  volume  = "14",
  number  = "6",
  pages   = "38--50",
  year    = "2025"
}