Indian banks accelerating AI adoption for financial crime compliance: Report

Dec 09, 2025

New Delhi [India], December 9 : Indian banks are rapidly integrating machine learning models into Financial Crime Compliance (FCC) operations as rising fraud and regulatory scrutiny render traditional rule-based systems inadequate, KPMG said in a report.
The report highlighted that legacy manual and threshold-based methods are "progressively losing effectiveness" against sophisticated financial crime.
This is prompting financial institutions to shift to AI-driven frameworks for Anti-Money Laundering (AML), fraud detection and customer risk assessment, it said.
Notably, the KPMG report also noted that the shift towards AI is being accelerated by regulatory expectations, including RBI's FREE-AI framework and SEBI's guidelines, which call for responsible and explainable AI systems.
It added that financial institutions are moving from pilot implementations to "full-scale machine learning integration" across the customer lifecycle.
The report further cited RBI Innovation Hub's MuleHunter.AI tool, noting that over 15 Indian banks now use it and that one major bank achieved 95% accuracy in detecting mule accounts.
On the use of AI to tackle fraud globally, the report, citing the World Economic Forum, said the global financial services industry had already spent USD 35 billion on AI adoption through 2023, with investment projected to reach USD 97 billion by 2027.
The report said that rule-based FCC systems generate high false-positive rates, lack adaptability to emerging laundering typologies, and cannot scale with rising transaction volumes.
In contrast, machine learning models enable real-time monitoring, anomaly detection, behavioural analytics and automated drafting of Suspicious Activity Reports using natural language processing.
KPMG also noted increasing regulatory focus on model risk management, emphasising the need for independent validation to address opacity, bias, data quality issues, and vulnerability to adversarial manipulation.
The report warned that AI-driven systems, if not properly stress-tested, could amplify systemic risks.