AI-Powered Deepfake Scams Wreak Havoc on Businesses
September 10, 2024 | Source: securityboulevard.com

More than half (53%) of businesses in the U.S. and UK have been targeted by financial scams using deepfake technology, with 43% of those companies falling victim, according to a Medius survey of 1,533 finance professionals.

The results indicated the growing threat posed by deepfakes — AI-generated images, videos and audio that are convincingly fraudulent — and highlighted the increasing sophistication of cybercriminals leveraging AI to exploit vulnerabilities in business operations.

According to the survey results, 85% of respondents consider these scams an existential threat to their organization’s financial security.

Stephen Kowski, field CTO at SlashNext Email Security+, explained that emerging trends include real-time voice cloning in vishing attacks and the integration of deepfakes into multi-channel phishing campaigns.

“We’re also seeing an increase in hybrid attacks that combine deepfakes with other social engineering techniques,” he said.


Invest in Threat Intelligence

To stay ahead, Kowski said companies should invest in threat intelligence platforms that provide early warnings and actionable insights on evolving deepfake tactics.

He added organizations can improve employee awareness through regular, interactive training sessions that showcase real-world examples of deepfake scams.

“Implementing simulated phishing and social engineering exercises that include deepfake elements can help employees recognize subtle signs of manipulation,” he explained.

He noted that continuous education on emerging AI-powered threats and fostering a culture of healthy skepticism are essential for building resilience against these attacks.

MFA, Zero-Trust Essential Security Practices

Darren Guccione, CEO and co-founder at Keeper Security, said the rapid evolution of AI-powered threats like deepfakes highlights the urgent need for companies to update their cybersecurity practices.

According to Keeper Security’s recent report, 84% of IT leaders globally recognize that phishing and smishing have become harder to detect due to AI-powered tools.

“This underscores the importance of comprehensive, ongoing employee training tailored to identifying deepfakes and other AI-driven attacks,” Guccione said.

He agreed that regular simulations and updates on emerging threats are essential to help employees recognize and mitigate these risks effectively.

“To effectively assess their vulnerabilities to deepfake scams, businesses should implement stringent security protocols, starting with a zero-trust architecture,” he added.

This approach ensures that no user or system is inherently trusted, requiring continuous verification of every interaction across the network.
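As a minimal sketch of that deny-by-default posture (the token format, signing key and helper names below are illustrative assumptions, not taken from Guccione's comments), every request can be re-verified rather than trusted after an initial login:

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"rotate-me-regularly"  # placeholder secret for the sketch
MAX_TOKEN_AGE = 300  # seconds; forces frequent re-verification

def sign(user_id: str, issued_at: int) -> str:
    """Issue a short-lived, HMAC-signed token for one user."""
    msg = f"{user_id}:{issued_at}".encode()
    digest = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{issued_at}:{digest}"

def verify_request(token: str) -> bool:
    """Zero-trust check: fail closed, verify signature and freshness on every call."""
    try:
        user_id, issued_at, digest = token.rsplit(":", 2)
        msg = f"{user_id}:{issued_at}".encode()
        expected = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, digest):
            return False
        return time.time() - int(issued_at) <= MAX_TOKEN_AGE
    except ValueError:
        return False  # malformed token: deny by default

token = sign("alice", int(time.time()))
print(verify_request(token))             # True while the token is fresh
print(verify_request(token + "tamper"))  # False: signature mismatch
```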

“In addition, businesses must invest in advanced cybersecurity technologies, such as MFA, advanced encryption and real-time threat detection systems,” Guccione said.
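For the MFA piece, a hedged sketch using the pyotp library shows the basic enroll-and-verify flow a finance team might put in front of high-risk actions such as payment approvals; the surrounding workflow, account name and threshold for "high risk" are assumptions for illustration only:

```python
import pyotp

# Enrollment: generate a per-user secret and share it with an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="ap-clerk@example.com",
                                                 issuer_name="FinanceApp"))

# Verification: a deepfaked voice or video call cannot supply this second factor.
def approve_payment(submitted_code: str) -> bool:
    """Only release the payment if the one-time code is valid right now."""
    return totp.verify(submitted_code, valid_window=1)  # tolerate one step of clock drift

print(approve_payment(totp.now()))  # True: current code accepted
print(approve_payment("000000"))    # Almost certainly False: stale or guessed code
```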

Regular security audits, penetration testing, and continuous monitoring are essential for identifying potential weaknesses and ensuring that defenses remain strong against evolving threats like deepfakes.

From the perspective of Nicole Carignan, vice president of strategic cyber AI at Darktrace, attackers' use of generative AI to produce deepfake audio, imagery and video is on the rise.

“Attackers are increasingly using deepfakes to start sophisticated social engineering attacks,” she cautioned.

She added that while the use of AI for deepfake generation is now very real, the risk of image and media manipulation is not new.

The challenge now is that AI speeds up the production of higher-quality attacks, increasing both the scale and the effectiveness of targeted campaigns.

“Since increasingly sophisticated deepfakes are getting harder to detect, it is imperative to turn to AI-augmented tools for detection, as humans alone cannot be the last line of defense,” Carignan said.

Nick France, CTO at Sectigo, explained that perfectly written phishing emails, audio messages with the correct tone and now even fully fake video can be created easily and used to socially engineer a way into companies to steal money, valuable data or intellectual property.

“Employees may still assume today that live audio or video cannot be faked, and act on requests they are given seemingly by colleagues or leaders without question,” he said.

He recommended that security teams treat deepfakes as another threat to their organizations and update their practices and training accordingly.

“AI technology can be valuable on the defensive security side, with AI tools that can detect these deepfakes and alert security teams before damage is done,” France added.
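As a hedged sketch of how such a tool might feed an alerting workflow, the detector below is a stand-in for whatever vendor API or model an organization actually deploys; the score_audio_deepfake placeholder and the alert threshold are assumptions for illustration, not a real detection API:

```python
from dataclasses import dataclass

ALERT_THRESHOLD = 0.8  # assumed cut-off; tune against your own false-positive tolerance

@dataclass
class Detection:
    source: str   # e.g., "voicemail", "video-call recording"
    score: float  # 0.0 = likely authentic, 1.0 = likely synthetic

def score_audio_deepfake(audio_path: str) -> float:
    """Placeholder for a real deepfake-detection model or vendor API call."""
    raise NotImplementedError("plug in your detection service here")

def triage(detections: list[Detection]) -> list[Detection]:
    """Route high-scoring media to the security team before any payment or data release."""
    return [d for d in detections if d.score >= ALERT_THRESHOLD]

if __name__ == "__main__":
    sample = [Detection("voicemail from 'CFO'", 0.93),
              Detection("weekly all-hands recording", 0.12)]
    for alert in triage(sample):
        print(f"ALERT: review {alert.source} (synthetic-likelihood {alert.score:.2f})")
```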

Source: https://securityboulevard.com/2024/09/ai-powered-deepfake-scams-wreak-havoc-on-businesses/