Concerns About Faster Payments and Fraud Prevention
One of the primary concerns about faster payments is the risk of increased fraud, and artificial intelligence may offer a way to address it.
A study conducted by the Bank for International Settlements (BIS) in collaboration with the Bank of England, known as Project Hertha, examined how effectively AI can detect sophisticated fraudulent activity carried out by cybercriminals.
The research, based on a simulated environment using data from millions of bank accounts and transactions, aimed to mimic real-time retail payment scenarios.
According to the findings of Project Hertha, AI models demonstrated significant value in fraud detection. BIS reported that AI was 26% more effective than traditional methods at identifying suspicious activities.
Additionally, AI analytics helped financial institutions discover 12% more fraudulent accounts than they would have without this technology.
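The studies above report outcomes rather than methods, but the core idea behind AI fraud screening, scoring each transaction against an account's expected behaviour, can be sketched in a few lines. Everything below is an illustrative assumption (the feature choices, the z-score heuristic, the sample values), not the analytics BIS actually used in Project Hertha.

```python
# A toy behavioural anomaly score: how far does a new transaction sit
# from the account's own history, feature by feature?
from statistics import mean, stdev

def anomaly_score(history, txn):
    """Sum of absolute z-scores of txn's features against the history.

    history: list of feature tuples from past transactions
    txn: feature tuple for the transaction being screened
    """
    score = 0.0
    for i in range(len(txn)):
        feature = [h[i] for h in history]
        mu, sigma = mean(feature), stdev(feature)
        if sigma == 0:
            continue  # constant feature carries no signal here
        score += abs(txn[i] - mu) / sigma
    return score

# Features per transaction: (amount, hour_of_day) -- illustrative only
history = [(42.0, 12), (38.5, 13), (51.0, 18), (45.0, 11), (40.0, 14)]

routine = (44.0, 15)   # resembles past behaviour -> low score
unusual = (950.0, 3)   # large amount at 3 a.m. -> high score

assert anomaly_score(history, unusual) > anomaly_score(history, routine)
```

A production system would learn such thresholds from labelled data across millions of accounts rather than hand-picking features, which is also where the false-positive trade-off discussed later in this piece arises.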
A Powerful Evolution
The potential of AI in fraud protection was further highlighted by data from FIS, which indicated that 78% of respondents noted an improvement in their company’s fraud detection and risk management strategies due to artificial intelligence.
Nearly half of the surveyed business and tech leaders expressed a willingness to invest more in AI over the next two years, with many planning to delegate complex tasks to it.
A significant evolution in AI is agentic AI, in which autonomous AI agents perform tasks independently. While such agents can mount a formidable defense against fraud, experts often regard them as a double-edged sword.
Research from SailPoint found that 96% of tech professionals view AI agents as growing security threats, yet nearly all respondents intended to expand their use of agentic AI in the coming year.
A Supplement, Not a Solution
As organizations increasingly integrate AI into their systems, cybercriminals have already begun using both generative and agentic AI at scale, employing these technologies for fraudulent activities ranging from deepfakes to ransomware attacks.
One of the primary reasons cybercriminals hold such an advantage is that they operate without the privacy and reputational constraints that bind legitimate institutions.
While Project Hertha demonstrated AI’s potential as a powerful tool, it also revealed limitations. There is a risk that AI models might miss instances of fraud or generate false positives.
Given these limitations, BIS concluded that AI tools should be treated as supplements to, not substitutes for, fraud defense strategies. Organizations cannot rely on AI alone and must develop new approaches to stay ahead of cybercriminals who already have a significant head start.