
AI Can Uncover Novel Fraud, Even in Real-Time Payments


One of the primary concerns regarding faster payments is the increased risk of fraud, yet artificial intelligence could help alleviate these issues.

A study by the Bank for International Settlements (BIS) and the Bank of England assessed AI’s capability to detect advanced fraudulent activities carried out by cybercriminals.

The experiments took place in a simulated environment based on data from millions of bank accounts and transactions, designed to mimic real-time retail payments.

Dubbed Project Hertha, the study revealed that AI models are effective fraud detection tools, excelling at identifying novel patterns of financial crime. According to BIS, AI was 26% more effective in detecting suspicious activity compared to traditional fraud defenses.

Furthermore, AI analytics helped financial institutions uncover 12% more fraudulent accounts than they would have otherwise identified.
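To make the idea of detecting unusual activity concrete, the sketch below flags a payment whose amount deviates sharply from an account's transaction history. This is a toy illustration only: the z-score approach, the threshold, and the sample data are assumptions for demonstration, not details of the models used in Project Hertha.

```python
import statistics

def flag_anomaly(history, new_amount, z_threshold=3.0):
    """Flag a transaction far outside an account's typical amounts.

    A deliberately simple stand-in for the richer models production
    fraud systems use; the 3.0 z-score cutoff is an arbitrary
    illustrative choice.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    z_score = abs(new_amount - mean) / stdev
    return z_score > z_threshold

# Invented history of small retail payments, then one large outlier.
history = [12.50, 9.99, 15.00, 11.25, 13.40, 10.00, 14.75, 12.00]
print(flag_anomaly(history, 12.00))   # consistent with history -> False
print(flag_anomaly(history, 950.00))  # far outside it -> True
```

Real systems score many features at once (merchant, device, velocity, counterparties), but the core intuition is the same: learn what "normal" looks like for an account and surface sharp deviations.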

A Powerful Evolution

The potency of AI in fraud protection was highlighted by separate data from FIS, showing that 78% of respondents reported that artificial intelligence has improved their company’s fraud detection and risk management strategies.

Nearly half of the business and tech leaders surveyed said they plan to increase their investment in AI over the next two years, with many indicating they intend to delegate more complex tasks to it.

One of the most significant evolutions of artificial intelligence is agentic AI, where AI agents can handle many tasks autonomously. While these AI agents have the potential to be a robust tool against fraud, many experts increasingly view them as a double-edged sword.

According to research from SailPoint, 96% of tech professionals consider AI agents a growing security threat. However, nearly all respondents said they plan to expand their use of agentic AI in the coming year.

A Supplement, Not a Solution

As organizations integrate AI into their processes, cybercriminals have already deployed both generative and agentic AI at scale, using them for various fraud efforts including deepfakes and ransomware attacks. One reason for the significant advantage gained by cybercriminals is that they are not constrained by concerns over privacy or reputation.

Despite Project Hertha’s findings, there remains a risk that AI models could make mistakes—either missing instances of fraud or generating false positives.
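The trade-off between missed fraud (false negatives) and false alarms (false positives) can be made concrete with precision and recall. The counts below are hypothetical, chosen only to show how the two metrics pull against each other when a model's alerting threshold is tuned.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: what share of alerts were real fraud.
    Recall: what share of real fraud was caught."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 80 frauds caught, 20 legitimate payments
# wrongly flagged, 10 frauds missed.
p, r = precision_recall(80, 20, 10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.89
```

Loosening the threshold to catch the 10 missed frauds would raise recall but typically flag more legitimate payments, lowering precision, which is why models that "miss instances of fraud or generate false positives" need human-designed defenses alongside them.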

These limitations led BIS to conclude that AI tools should be seen as supplements to existing fraud defenses, not complete solutions. Organizations cannot rely solely on AI; they will need to keep innovating new approaches to close the substantial head start cybercriminals already hold.
