One of the primary concerns with faster payments is the heightened risk of fraud, but artificial intelligence could offer solutions to these challenges.
A study by the Bank for International Settlements (BIS) and the Bank of England evaluated AI's ability to detect sophisticated fraudulent activity. The research, known as Project Hertha, drew on data from millions of bank accounts and transactions in a simulated environment modeled on a real-time retail payment system.
According to BIS, the AI models outperformed traditional methods in fraud detection, proving 26% more effective at identifying suspicious activity.
AI analytics also helped identify an additional 12% of fraudulent accounts that would not otherwise have been detected.
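The report does not spell out how the models work, but the general pattern of scoring transactions and flagging outliers for human review can be sketched in a few lines. The example below is purely illustrative: it uses synthetic data and an off-the-shelf anomaly detector, not anything from Project Hertha.

```python
# Illustrative only: a toy anomaly detector over synthetic transaction features.
# This does NOT reflect Project Hertha's actual methodology.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: amount and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.5, size=5000),  # typical amounts
    rng.normal(loc=14, scale=4, size=5000) % 24,    # daytime-skewed hours
])

# A handful of synthetic outliers: unusually large, late-night transfers.
fraud = np.column_stack([
    rng.lognormal(mean=7.0, sigma=0.3, size=25),
    rng.normal(loc=3, scale=1, size=25) % 24,
])

X = np.vstack([normal, fraud])

# Unsupervised anomaly scoring: flag the most atypical transactions for review.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X) == -1  # -1 marks a suspected anomaly

print(f"Flagged {flags.sum()} of {len(X)} transactions for review")
print(f"Of the 25 injected outliers, {flags[-25:].sum()} were flagged")
```

A production system would use far richer features and labeled outcomes; the sketch only shows the flag-and-review pattern that AI-assisted fraud detection builds on.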
A Significant Evolution
The power of AI in fraud protection is further highlighted by data from FIS, which indicates that 78% of respondents reported improved fraud detection and risk management strategies due to the adoption of artificial intelligence. Nearly half of the surveyed business and tech leaders expressed plans to increase their investment in AI over the next two years, with many indicating a desire to delegate more complex tasks to it.
The advancement of agentic AI, where AI agents can handle numerous tasks autonomously, is particularly significant. However, this technology is viewed as a double-edged sword by many experts who recognize its potential benefits but also the associated risks. According to research from SailPoint, 96% of tech professionals consider agentic AI a growing security threat, yet nearly all respondents intend to expand their use of such agents in the coming year.
A Complementary Tool
As organizations work to integrate AI into their operations, cybercriminals have already begun using both generative and agentic AI at scale, deploying these tools for fraudulent activities ranging from deepfakes to ransomware attacks. Their key advantage is that they operate free of the privacy and reputational constraints legitimate organizations face.
While Project Hertha provides evidence that AI can be an effective tool, the potential for error, whether missed fraud or false positives, remains a concern. BIS therefore advises that AI tools be treated as a complement to, rather than a replacement for, existing fraud controls. Organizations must continue to innovate and adopt new strategies to keep pace with cybercriminals who already hold a significant early advantage.
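The tradeoff BIS highlights, between missed fraud and false positives, can be illustrated with a toy threshold sweep. The numbers below are entirely hypothetical; the point is simply that loosening an alert threshold catches more fraud at the cost of more false positives, and tightening it does the reverse.

```python
# A minimal sketch of the error tradeoff: alert threshold vs. missed fraud and
# false positives. All scores and counts here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model risk scores: 1,000 legitimate and 20 fraudulent transactions.
legit_scores = rng.beta(2, 8, size=1000)  # mostly low risk scores
fraud_scores = rng.beta(8, 2, size=20)    # mostly high risk scores

scores = np.concatenate([legit_scores, fraud_scores])
labels = np.concatenate([np.zeros(1000), np.ones(20)])

for threshold in (0.9, 0.7, 0.5):
    alerts = scores >= threshold
    false_positives = int((alerts & (labels == 0)).sum())
    missed_fraud = int((~alerts & (labels == 1)).sum())
    print(f"threshold={threshold}: false positives={false_positives}, "
          f"missed fraud={missed_fraud}")
```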