I am your bank.
I am your police.
I am your lover.

The Nature and Scale of the Problem

Scams are a natural evolution of the criminal modus operandi: as financial institutions deploy more protective technology, criminals respond by going low-tech. Social engineering replaces exploit kits. Impersonation scams are no longer endemic to one region; they cost society financially and psychologically while eroding trust in institutions. In the UK, impersonation scams were the second most costly scam type in 2023, costing consumers and businesses £72m in the first half of the year. In Norway, DNB reported earlier this year that safe-account fraud, another name for an impersonation scam, was the fastest-growing fraud type in the region. In Australia, consumers lost a staggering $92 million to impersonation scams last year, accounting for more than 70% of reports made to Scamwatch. With PSD3 soon coming into effect, European financial institutions will be required to reimburse customers in cases where the bank has been spoofed. The article below considers a five-layered approach to turning the tide on impersonation scams.
A Typical Impersonation Scam

These scams move geographically, like a bulge in a carpet: stomp the bulge down and it will pop up somewhere else, often in an unexpected location. Although no two fraud events are exactly alike, impersonation scams typically follow the same broad sequence of steps.
Multi-layered Defences Against Impersonation Scams

To pre-empt these low-tech scams, there are five layers of defence available:
Layer 1: Call Intelligence

Identifying when a user is on a call while in a banking session is essential to detecting vishing (voice phishing). ThreatFabric and industry data show that users are on a live call in only 1% of genuine banking sessions. Even so, alerting on every one of these sessions would introduce unnecessary noise, so additional data is required to separate the signal from the noise. Two further signals deliver high predictive power when detecting vishing.
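As an illustration of the device-side signal this layer relies on, the sketch below reads the Android audio mode to distinguish an ordinary cellular call from a VoIP call running alongside the banking session. It is a minimal, hypothetical example: the class name and the choice of AudioManager.mode as the sole input are assumptions, not a description of ThreatFabric's SDK.

```kotlin
import android.content.Context
import android.media.AudioManager

// Minimal sketch: infer whether the user is on a call while the banking app
// is in the foreground. Class and method names are illustrative only.
class CallStateSignal(private val context: Context) {

    private val audio: AudioManager
        get() = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager

    // True when audio routing indicates an ordinary (cellular) phone call.
    fun isOnCellularCall(): Boolean = audio.mode == AudioManager.MODE_IN_CALL

    // True when the device is in a VoIP/communication session
    // (e.g. WhatsApp or another over-the-top calling app).
    fun isOnVoipCall(): Boolean = audio.mode == AudioManager.MODE_IN_COMMUNICATION
}
```

Either flag on its own says little – as noted above, around 1% of genuine sessions involve a live call – so the value comes from correlating it with the signals in the layers that follow.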
Layer 2: Remote Access & Screen-Sharing Detection

Accurately detecting the presence of a remote access tool (RAT) requires a multi-layered approach. Many commercial RATs (e.g. AnyDesk and TeamViewer) open network ports and can therefore be detected with port scanning technology. However, several ThreatFabric customers report a wave of RAT fraud carried out with tools such as Microsoft Quick Assist, which do not open network ports. These RATs can be detected using sophisticated device indicators, which monitor for anomalies in screen size, resolution and colour depth, as well as behavioural indicators, which monitor for frequent mouse overshoots, changes in hit zones and a variety of other behaviours specific to remote sessions. A ThreatFabric customer recently reported that fraudsters were using developer tools to inject content into banking websites to convince users that they really were on the phone with the bank. ThreatFabric has developed a fragment scanning tool to detect modifications like this – an additional risk signal that a bank impersonation is in flight.
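The sketch below shows two of the simpler device-side heuristics described above, assuming an Android banking app: checking whether a well-known remote-access app is installed, and checking whether a display other than the built-in one is attached, which can indicate the screen is being mirrored or cast. The package names are illustrative assumptions that would need verification, and on Android 11+ they must also be declared in the app manifest's <queries> element to be visible to the app.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.hardware.display.DisplayManager
import android.view.Display

// Minimal sketch of two remote-access / screen-sharing heuristics.
class RemoteAccessSignals(private val context: Context) {

    // Hypothetical watch-list; verify package names before relying on them.
    private val knownRemoteAccessPackages = listOf(
        "com.teamviewer.quicksupport.market", // TeamViewer QuickSupport (assumed)
        "com.anydesk.anydeskandroid"          // AnyDesk (assumed)
    )

    // True if any watched remote-access app is installed and visible to us.
    fun remoteAccessAppInstalled(): Boolean = knownRemoteAccessPackages.any { pkg ->
        try {
            context.packageManager.getPackageInfo(pkg, 0)
            true
        } catch (e: PackageManager.NameNotFoundException) {
            false
        }
    }

    // Rough proxy for screen mirroring: a display other than the built-in
    // one is attached while the banking session is active.
    fun screenPossiblyShared(): Boolean {
        val dm = context.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
        return dm.displays.any { it.displayId != Display.DEFAULT_DISPLAY }
    }
}
```

Port scanning for desktop RATs and the screen-size and colour-depth anomaly checks mentioned above would sit alongside these as further inputs rather than being replaced by them.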
Layer 3: Behavioural Biometrics

When a RAT is not used in the fraud, there are very few technical indicators to help identify it early in the kill chain. Here, behavioural biometrics technology, which analyses keystroke, navigation, mouse-movement, touch and device-rotation behaviour, is paramount to detecting coercion or hesitancy. Statistically significant deviations from a user's historical behaviour are a powerful indicator of social engineering, especially when correlated with device risks such as a live VoIP call and/or an active RAT. However, not all behavioural biometrics technology is built equal, and banks should take care when evaluating providers. Next-generation behavioural biometrics uses sophisticated feature engineering and deep-learning techniques to minimise the false positives that cripple the return on investment of first-generation tools.
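To make the idea of statistically significant deviations from historical behaviour concrete, the sketch below scores a handful of session features against per-user baselines using a simple z-score. The feature names, the three-standard-deviation threshold and the scoring method are illustrative assumptions; production systems use far richer feature engineering and learned models, as noted above.

```kotlin
import kotlin.math.abs

// Per-user historical statistics for one behavioural feature.
data class FeatureBaseline(val mean: Double, val stdDev: Double)

// Flags session features that deviate strongly from the user's history.
class BehaviouralDeviationScorer(
    private val baselines: Map<String, FeatureBaseline>,
    private val zThreshold: Double = 3.0 // "statistically significant" (assumed)
) {
    fun deviatingFeatures(session: Map<String, Double>): List<String> =
        session.mapNotNull { (name, value) ->
            val base = baselines[name] ?: return@mapNotNull null
            if (base.stdDev <= 0.0) return@mapNotNull null
            val z = abs(value - base.mean) / base.stdDev
            if (z > zThreshold) name else null
        }
}

fun main() {
    val baselines = mapOf(
        "keyDwellMs" to FeatureBaseline(mean = 95.0, stdDev = 12.0),
        "interKeyMs" to FeatureBaseline(mean = 180.0, stdDev = 40.0),
        "touchSpeedPxPerMs" to FeatureBaseline(mean = 1.4, stdDev = 0.3)
    )
    val session = mapOf(
        "keyDwellMs" to 160.0, // hesitant typing while being coached
        "interKeyMs" to 420.0, // long pauses between keystrokes
        "touchSpeedPxPerMs" to 1.5
    )
    println(BehaviouralDeviationScorer(baselines).deviatingFeatures(session))
    // prints [keyDwellMs, interKeyMs]
}
```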
Layer 4: Real-Time Payment Blocking

The value of detecting early warning signals of impersonation fraud through the layers above is limited without a decisioning engine that can correlate those signals in real time and intervene – for example by stepping up authentication or holding a suspicious payment – before the funds leave the account.
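A decisioning engine in this layer correlates the signals from layers 1–3 and decides, in real time, what should happen to a payment. The sketch below is a deliberately simple rule-based illustration; the signal names, rules and actions are assumptions rather than a description of any production engine, which would typically combine rules with a scoring model and case-management workflow.

```kotlin
// Possible real-time outcomes for a payment under review.
enum class PaymentAction { ALLOW, STEP_UP_AUTH, HOLD_AND_CALL_BACK }

// Correlated risk signals gathered from the earlier layers (illustrative).
data class SessionRisk(
    val onLiveCall: Boolean,
    val remoteAccessDetected: Boolean,
    val behaviouralDeviation: Boolean,
    val payeeIsNew: Boolean
)

// Decide what to do with a payment given the correlated session risks.
fun decide(risk: SessionRisk): PaymentAction = when {
    // Strongest impersonation pattern: live call + remote access + new payee.
    risk.onLiveCall && risk.remoteAccessDetected && risk.payeeIsNew ->
        PaymentAction.HOLD_AND_CALL_BACK
    // Coercion indicators without a RAT still warrant extra friction.
    risk.onLiveCall && risk.behaviouralDeviation ->
        PaymentAction.STEP_UP_AUTH
    risk.remoteAccessDetected ->
        PaymentAction.STEP_UP_AUTH
    else -> PaymentAction.ALLOW
}

fun main() {
    val risk = SessionRisk(
        onLiveCall = true,
        remoteAccessDetected = true,
        behaviouralDeviation = true,
        payeeIsNew = true
    )
    println(decide(risk)) // HOLD_AND_CALL_BACK
}
```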
Layer 5: Consumer Education

Better technology will only ever be part of the solution. If victims are truly convinced by the story they are being told, they can be coached into circumventing additional security measures and ignoring warning messages. Targeted fraud-awareness campaigns, alongside fraud education in schools and universities, have the potential to significantly shift the scales in the fight against social engineering. Dr Nicola Harding’s study on the effectiveness of interventions on university campuses demonstrated this potential: following a 45-minute educational video, student money-mule recruitment dropped from 66% to 6%. The study tackled the inherent bias of survey data by simulating money-mule recruitment via social media and measuring student responses in the wild – a powerful example of creative prevention through education.
Conclusion

Complex problems warrant multi-layered solutions, and impersonation fraud is no different. We can take motivation and learnings from countries that are fighting back, such as the Netherlands: ThreatFabric’s home market recently announced that bank impersonation fraud had fallen by 45% in 2023 thanks to interventions by banks and law enforcement. With a multi-layered approach to technology, process design and consumer education, we can collectively turn the tide on impersonation fraud.

Please get in touch with [email protected] if you would like to learn more about our market-leading device intelligence and behavioural biometrics solution, Fraud Risk Suite, which is helping banks detect a range of digital fraud, including impersonation scams.