As in many parts of the economy, digital technologies are playing an increasingly significant role in the financial services industry. These include automation technologies such as artificial intelligence (AI) and machine learning (ML), which enhance the efficiency of banking operations and create a more frictionless customer experience.
However, the shift to digital banking and financial transactions, particularly during COVID-19, has afforded cyber-criminals new opportunities to strike against a sector that was already a hugely tempting target. This has caused a growing convergence between financial crime and cybercrime in recent years. Infosecurity recently spoke to Martin Rehak, founder and CEO of Resistant AI and lecturer and senior researcher at Czech Technical University in Prague, to discuss the evolution of cyber-attacks targeting this sector and steps financial services organizations can take to mitigate these threats.
To what extent are we seeing financial crime and cybercrime converging in recent years?
Cybercrime is the biggest feeder into financial crime thanks to the technology element, which opens up vulnerabilities for exploitation. However, the approach has recently changed.
It used to be a much simpler scenario: a fraudster hacks into a PC or laptop, installs a tracker on the browser and when the user logs onto their online banking, the fraudster can access their bank account to launder or steal money. This example illustrates the convergence of financial crime and cybercrime at its most basic level – but it’s no longer the driving threat.
The critical problem organizations face today when it comes to cybercrime is essentially process hacking at scale. Rather than attacking the security systems, criminals are attacking the automation and AI systems – the processes – that companies rely upon to conduct business online. Fraudsters have organized into specialist roles that can industrially generate new forged documentation and identities, load them into the financial system – literally to the tune of thousands per hour – and then sell them to others who will commit the crime. No one needs to meet in person – all transactions are handled online.
This hacking of processes blurs the line between fraud, money laundering and cybercrime. Therefore, the teams traditionally responsible for fraud risk and compliance need to start thinking more like cybersecurity experts – start thinking like hackers – and sharing data and intelligence on the risk landscape, people, processes and technology to stop the enemy in their tracks.
Could you give examples of cyber-attacks combining cyber techniques, fraud and money laundering? Are these particularly difficult to detect and stop?
Let’s use insurance fraud as an example, where today someone can obtain 50 stolen IDs or easily create 50 fake ones digitally. The fraudster takes out policies in those 50 names, then falsifies accidents for those people and submits the claims to the insurance firm. Twenty-five of those claims might be rejected, but the other 25 might succeed, with the payouts going straight to the criminals. Even if the crime subsequently gets reported and investigated, authorities will be looking for people who don’t exist or investigating innocent people whose identities were stolen.
The industrial scale of the problem becomes apparent when you discover that some of those identities were used by a completely different criminal to create accounts on a crypto exchange and launder the proceeds of a ransomware attack.
There is a huge increase in fraud being carried out in someone else’s name, at a scale the police are not equipped to deal with. This can only really be addressed by the financial crime-fighting teams inside banks and fintech organizations.
The difficulty in detecting and stopping these attacks lies primarily in the methods deployed. Traditional approaches that rely solely on human intervention are too slow and too time-consuming to challenge digital fraud. Trusting simple automation or rule-based AI to catch up merely created the current frenzy of process hacking we see today. Even smarter AI systems cannot stem the tide if they fail to look at incoming customers holistically.
What’s needed is a kind of real-time identity forensics that can help confirm if a new customer is truly who they say they are and if an existing customer has been compromised and is no longer behaving as expected.
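To make the behavioral side of such identity forensics concrete, here is a drastically simplified sketch that flags an account whose recent transactions deviate sharply from the customer's historical baseline. The function name, the sample amounts and the threshold are all illustrative assumptions for this example, not a description of any real product:

```python
import statistics

def behavior_anomaly_score(history, recent):
    """Z-score of the recent average transaction amount against the
    customer's historical baseline (illustrative only)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return 0.0
    return abs(statistics.mean(recent) - mean) / stdev

# Hypothetical customer: small, regular payments, then a burst of large ones
# after a possible account takeover.
baseline = [20.0, 25.0, 22.0, 30.0, 18.0, 27.0]
suspicious = [950.0, 1200.0, 880.0]

score = behavior_anomaly_score(baseline, suspicious)
FLAG_THRESHOLD = 3.0  # assumed cut-off; real systems tune this per segment
if score > FLAG_THRESHOLD:
    print("Account flagged for review")
```

A production system would of course look at far more than amounts (merchants, timing, devices, counterparties), but the principle of comparing live behavior against a learned per-customer baseline is the same.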
To what extent is the growth of AI and ML in financial services creating new fraud challenges and threats?
If nothing else, COVID-19 helped shine a spotlight on the vulnerabilities of today’s digital and mobile customer platforms that are capable of executing rapid and instant payment transactions, leaving little time to undertake customer authentication or transaction verification. Similarly, the difficulties of know your customer (KYC) and customer onboarding in the digital era are exposing financial services organizations – and the customers they serve – to a significantly increased risk of cybercrime and financial fraud.
The rapid expansion and automation of financial services to minimize customer friction has created new challenges regarding verification and risk management policies and practices. Evaluating if a digital interaction is authentic now depends on referencing a huge amount of data from multiple sources – everything from geolocation and session behaviors to data from merchants, bureaus and customer profiles.
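One common way to reference many heterogeneous signals at once is to normalize each to a 0-1 risk value and combine them into a single score. The signal names, weights and threshold below are invented for illustration; real deployments derive them from data rather than hand-picking them:

```python
# Illustrative weights for combining heterogeneous risk signals
# (all names and values are assumptions made for this sketch).
RISK_WEIGHTS = {
    "geolocation_mismatch": 0.30,  # login country differs from profile
    "new_device": 0.15,            # device never seen for this customer
    "session_speed": 0.25,         # form filled implausibly fast (bot-like)
    "bureau_mismatch": 0.30,       # applicant data conflicts with bureau records
}

def composite_risk(signals):
    """Weighted sum of normalized (0-1) risk signals."""
    return sum(RISK_WEIGHTS[name] * value for name, value in signals.items())

interaction = {
    "geolocation_mismatch": 1.0,
    "new_device": 1.0,
    "session_speed": 0.8,
    "bureau_mismatch": 0.0,
}

score = composite_risk(interaction)  # 0.30 + 0.15 + 0.20 = 0.65
if score >= 0.5:  # assumed review threshold
    print("Escalate to manual verification")
```

The point is that no single signal is decisive; a geolocation mismatch alone might be a holiday, but combined with a new device and bot-like session behavior it crosses the line into manual review.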
In addition, today’s financial fraudsters are becoming experts at targeting these complex digital environments and are using innovations such as blockchain and instant payments against banks and their customers.
How should organizations be looking to tackle these kinds of threats?
The role of AI and ML is clear: they are the only scaling factor capable of supervising modern financial systems effectively in real time. They bring together state-of-the-art document and customer behavior evaluation to uncover synthetic identities, account takeover attempts, money laundering and other emerging types of fraud plaguing financial services, many of which have cybercrime origins. Through a continuously refined interplay of algorithms, methods and capabilities, data is used to learn the behavioral patterns associated with attacks, which means threats can be mitigated.
At the same time, it should be noted that the criminals themselves are also using AI and ML to support their activities, so it is a cat-and-mouse game that financial services organizations must constantly evolve to win.
Today’s AI-powered real-time identity forensics are capable of detecting advanced financial crime, fraud and manipulation, and are adept at joining the dots to uncover previously unidentified vulnerabilities in the underlying systems they protect so that future exploitation can be deterred.