The UK’s financial services regulator has warned banks that it will be closely monitoring the steps they take to mitigate the growing risk of AI-powered fraud.
Financial Conduct Authority (FCA) CEO Nikhil Rathi said in a speech delivered at the offices of The Economist this week: “We will remain super vigilant on how firms mitigate cyber-risks and fraud given the likelihood that these will rise.”
He cited the case of finance campaigner Martin Lewis, who last week warned his followers on social media about a viral deepfake video generated by AI technology in which a likeness of him was used to promote an investment fraud scheme.
“There are other risks too, involving cyber fraud, cyber-attacks and identity fraud increasing in scale, sophistication and effectiveness,” Rathi continued.
“This means that as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate at the same time. We will take a robust line on this – full support for beneficial innovation alongside proportionate protections.”
However, the FCA boss also acknowledged that AI technology itself could be used to tackle crime, including fraud and money laundering, “more quickly and accurately and at scale.”
His speech was prompted by new calls from Prime Minister Rishi Sunak for the UK to become a global hub for AI regulation.
Suid Adeyanju, CEO of RiverSafe, argued that clear communication from the FCA about what it expects from financial services companies will be crucial in the months and years ahead.
“The tidal wave of AI-enabled cyber-attacks and online scams adds a greater level of complexity, so it’s vital that financial services firms beef up their cyber credentials and capabilities to identify and neutralize these threats before they can get a foothold,” he added.
Daniel McLoughlin, field CTO at OneSpan, argued that organizations are on the back foot because many of the security tools they use aren’t designed to tackle advanced AI threats.
“Traditionally, security has been focused on securing end-to-end processes, but this needs to change. Security instead needs to be focused on securing and authenticating the actual interactions that occur digitally between, and among, people and companies,” he claimed.
“This risk-based approach to securing the interaction, by balancing the level of security required with the user experience, will be key in protecting us all.”