A crypto phishing campaign has been identified in which a threat actor employs AI-generated content to create 17,000 phishing lure sites impersonating more than 30 major cryptocurrency brands, including Coinbase, Crypto.com, MetaMask and Trezor.
By compromising login credentials and two-factor authentication (2FA) codes, attackers can gain unauthorized access to users’ crypto accounts, leading to potential financial losses and further exploitation.
The IT security firm Netcraft monitored the campaign for more than a year, noting that the threat it poses is multifaceted.
In some cases the hackers were able to obtain seed recovery phrases, which allow attackers to take over victims’ wallets completely, rendering them irrecoverable.
The attack flow starts with AI-generated content on lure sites hosted on GitBook, an otherwise legitimate developer documentation platform.
Links to these initial lure sites are often distributed via website comments, and a large proportion of the lure pages themselves contain no directly malicious content.
The sites include call-to-action (CTA) links that redirect users to phishing domains. These domains use UUIDs (universally unique identifiers) to track user visits.
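The visit-tracking mechanism described above can be sketched in a few lines. This is a minimal illustration of the general technique, not the attacker's actual code; the domain and the `id` query parameter are hypothetical.

```python
import uuid

# Illustrative sketch: each generated call-to-action link embeds a fresh
# UUID, letting the operator correlate a click on a lure page with the
# later visit that arrives at the phishing domain.
def make_tracking_link(base_url: str) -> str:
    visit_id = uuid.uuid4()  # universally unique identifier per visitor
    return f"{base_url}?id={visit_id}"

link = make_tracking_link("https://example.com/landing")
```

Because each UUID is generated per click, the operator can tell which distribution channel produced a given visit and detect repeat scans of the same link.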
The sites are registered with access keys and hosted on Amazon Web Services (AWS), ensuring reliable uptime and performance.
The report noted the use of AI to generate realistic and convincing phishing content enables attackers to create a vast number of phishing sites quickly and efficiently.
This content mimics legitimate crypto brand websites, making it difficult for users to distinguish between real and fake sites.
The second layer, hidden using cloaking, uses a network of redirects via traffic distribution systems that eventually lead to cryptocurrency-themed phishing websites.
These are designed to harvest wallet credentials, seed phrases and two-factor codes.
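The cloaking behavior described above, showing benign content to scanners while routing real visitors onward, can be illustrated with a short sketch. This is not the actual attacker code; the user-agent markers and return values are hypothetical placeholders for what a real traffic distribution system would use.

```python
# Illustrative sketch of cloaking: traffic that looks like a security
# scanner or crawler is served harmless content, while ordinary browser
# traffic is sent on to the next hop in the redirect chain.
KNOWN_CRAWLER_MARKERS = ("bot", "crawler", "spider", "curl")  # hypothetical list

def choose_response(user_agent: str) -> str:
    ua = user_agent.lower()
    if any(marker in ua for marker in KNOWN_CRAWLER_MARKERS):
        return "benign-page"       # cloak: show harmless content to scanners
    return "redirect-to-next-hop"  # real visitors continue toward the phishing site
```

Real traffic distribution systems typically combine many more signals (IP reputation, geolocation, referrer headers), but the decision structure is the same.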
“Many of the lures we tracked used generative AI to create unique content for thousands of pages that span a wide range of impersonated brands,” said Robert Duncan, vice president of product strategy at Netcraft.
He explained that using large language models (LLMs) to create this content is simple and cheap to automate, and produces convincing results faster, and at far greater volume, than a human could achieve.
“For both individuals and enterprise networks, using trusted tools powered by leading threat intelligence to protect end users is critical,” Duncan said.
The report also noted examples where the LLM left erroneous artifacts in the final text. These do not appear to have been caught by the threat actor, suggesting a high degree of automation in generating the lures.
“One LLM output even included a warning about the risks of phishing attacks,” the report said.
Duncan said that, as always, it is important to be cautious of unexpected links when browsing. Cryptocurrency users should also take additional precautions, such as using hardware wallets and separating hot and cold wallets.
For brands and enterprises impersonated in phishing campaigns like this, dedicated, scalable and automated cybercrime detection and countermeasure services provide early visibility of attacks.
“As we discovered, blocked and took down this campaign, we saw changing attacker behavior,” Duncan said. “Under pressure, the lure sites moved to webflow.io with less sophisticated lures.”
In previous research, Netcraft observed over $45 million in cryptocurrency payments transferred to scammers hidden in peer-to-peer messaging platform scams.
Duncan said this attack follows a recent trend of crypto industry threats, from crypto drainers, pig butchering and fake investment platforms to crypto donation scams exploiting the Trump 2024 election campaign and YouTube channel hijacking.
From his perspective, increasing use of generative AI by criminals is inevitable and brings with it a reduced barrier to entry for threat actors.
Although tools to scale and automate attacks are nothing new, free-to-use LLMs take things to the next level, making convincing attacks trivial to create without requiring in-depth web development or internet infrastructure knowledge.
“Expect to see more from where this came from,” Duncan cautioned.