One of the primary concerns regarding faster payments is the heightened risk of fraud, which can be mitigated through the use of artificial intelligence (AI).
A study conducted by the Bank for International Settlements (BIS) and the Bank of England examined AI’s ability to detect sophisticated fraudulent activity by cybercriminals.
The experiments were performed using simulated data from millions of bank accounts and transactions, aiming to reflect real-time retail payments scenarios.
Dubbed Project Hertha, the study found that AI models are effective fraud detection tools. According to BIS, the models flagged 26% more suspicious activities than traditional fraud defenses did.
Furthermore, AI analytics helped financial institutions identify 12% more fraudulent accounts than they would have detected otherwise.
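Project Hertha’s models and data are not public, but the general idea of scoring payment transactions for anomalies can be sketched. The snippet below is a minimal illustration rather than the study’s method: it generates synthetic transaction features and flags outliers with an off-the-shelf isolation forest, and every feature, distribution, and threshold in it is a hypothetical assumption.

```python
# Minimal illustration of anomaly-based fraud screening on synthetic payment
# data. This is NOT Project Hertha's methodology; the features, fraud pattern,
# and model choice are all hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" retail payments: amount, hour of day, payee account age (days)
n_normal, n_fraud = 50_000, 500
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=1.0, size=n_normal),  # typical amounts
    rng.integers(7, 23, size=n_normal),                 # daytime activity
    rng.integers(90, 3_650, size=n_normal),             # established payees
])

# Synthetic fraud: larger amounts, odd hours, newly created payee accounts
fraud = np.column_stack([
    rng.lognormal(mean=6.0, sigma=0.8, size=n_fraud),
    rng.integers(0, 6, size=n_fraud),
    rng.integers(0, 30, size=n_fraud),
])

X = np.vstack([normal, fraud])
y = np.concatenate([np.zeros(n_normal), np.ones(n_fraud)])

# Unsupervised outlier detector; the contamination rate is a tunable assumption
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X) == -1  # -1 means "suspicious"

print(f"Flagged {flags.sum()} of {len(X)} transactions")
print(f"Caught {int((flags & (y == 1)).sum())} of {n_fraud} synthetic frauds")
print(f"False positives: {int((flags & (y == 0)).sum())}")
```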
Examination of the Impact
The robustness of AI in fraud protection was further highlighted by findings from FIS. Seventy-eight percent of respondents reported that artificial intelligence had improved their company’s fraud detection and risk management strategies.
Nearly half of business and tech leaders surveyed planned to increase their investment in AI over the next two years, with many indicating a willingness to delegate more complex tasks to it.
One notable evolution of AI is agentic AI, where AI agents can handle numerous tasks autonomously. While these AI agents present a potent tool for combating fraud, experts increasingly view them as a double-edged sword.
Research from SailPoint indicated that 96% of tech professionals considered AI agents a growing security threat. Despite this concern, nearly all respondents expected to expand their use of agentic AI in the coming year.
AI as a Supplement
As organizations move toward incorporating AI, cybercriminals have already begun using both generative and agentic AI at scale, employing these technologies in fraudulent activities that range from creating deepfakes to launching ransomware attacks.
One significant advantage cybercriminals enjoy is that they are unconstrained by privacy or reputational concerns.
While Project Hertha demonstrated AI’s potential as a powerful tool, its limitations, including missed fraud cases and false positives, require organizations to adopt a multifaceted approach.
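The tradeoff behind those limitations, catching more fraud versus raising more false alarms, can be made concrete with a toy evaluation. The sketch below assumes hypothetical fraud scores from an unspecified model; it only illustrates how lowering the alert threshold reduces missed fraud while increasing false positives.

```python
# Illustrative only: how an alert threshold trades missed fraud against
# false positives. Scores and labels are synthetic, not from any real model.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical fraud scores: fraudulent payments tend to score higher,
# but the distributions overlap, so no threshold is perfect.
n_legit, n_fraud = 99_000, 1_000
scores = np.concatenate([
    rng.beta(2, 8, size=n_legit),   # legitimate payments
    rng.beta(6, 3, size=n_fraud),   # fraudulent payments
])
labels = np.concatenate([np.zeros(n_legit), np.ones(n_fraud)])

for threshold in (0.8, 0.6, 0.4):
    alerts = scores >= threshold
    caught = int((alerts & (labels == 1)).sum())         # true positives
    false_alarms = int((alerts & (labels == 0)).sum())   # false positives
    missed = n_fraud - caught                            # false negatives
    print(f"threshold={threshold:.1f}: caught {caught}/{n_fraud} frauds, "
          f"missed {missed}, false alarms {false_alarms}")
```

Because no threshold eliminates both error types, alerts still need human review and complementary controls, which is the multifaceted approach described above.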
BIS concluded that AI tools should be seen as supplements to existing fraud defenses rather than complete solutions. Organizations cannot rely on AI alone and must continue developing new strategies to counter cybercriminals, who are building a substantial lead.