IBM: ChatGPT Can Write Convincing Phishing Emails
2023-10-25 | securityboulevard.com

OpenAI’s widely popular ChatGPT can write phishing emails that are almost as convincing as those created by humans, and it can write them far faster, according to research from IBM that is sure to ramp up corporate worries about generative AI chatbots.

Big Blue’s X-Force security team ran an A/B experiment with a healthcare company in which half of the 1,600 employees received a phishing message crafted by humans and the other half a phish generated with ChatGPT. Researchers found that employees were slightly more likely to click on the human-crafted phishing email and slightly more likely to report the AI-generated message to security as suspicious.

However, it only took the security researchers five minutes – and five prompts – to create a highly convincing phishing email with ChatGPT, something that typically takes the X-Force team about 16 hours. That would save bad actors using generative AI models about two days of work, according to Stephanie Carruthers, global head of innovation and delivery for X-Force and chief people hacker for X-Force Red.

And while humans may have – narrowly – won this round, the future is less certain and corporations need to adapt their security practices, Carruthers wrote in a report about the experiment.

“AI is constantly improving,” she wrote. “As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day. … While even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.”


The Dark Side of Generative AI

ChatGPT famously took the industry by storm when OpenAI released it in late November 2022, making it at the time the fastest-growing web app ever and convincing other IT vendors to accelerate development of their own generative AI-based tools, such as Google’s Bard.

Enterprises quickly began to embrace large language models (LLMs), examining where they could be used to make their businesses faster and more efficient.

They also voiced security concerns, including worries that developers could accidentally leak privileged information by using ChatGPT in their coding efforts. In addition, cybersecurity vendors saw that threat groups were experimenting with ChatGPT about as aggressively as corporations and were creating their own generative AI chatbots, such as WormGPT and FraudGPT.

The concerns reached all the way up to C-level executives and boards of directors.

In a blog post in June, Jade Hill, director of content and communications at Abnormal Security, wrote about the promise of ChatGPT, but cautioned that “generative AI can also be weaponized by cybercriminals for phishing attacks and social engineering scams.”

OpenAI and similar generative AI companies are putting safeguards in place to make it harder to use their tools for malicious activities, but those safeguards aren’t foolproof, as IBM demonstrated.

“Through a systematic process of experimentation and refinement, a collection of only five prompts was designed to instruct ChatGPT to generate phishing emails tailored to specific industry sectors,” Carruthers wrote.

Creating AI-Based Phishing Emails

The X-Force researchers first had ChatGPT identify the top concerns of employees in the target industry, then prompted the chatbot to select social engineering and marketing techniques for the email, choices designed to make it more likely an employee would open it. They also had ChatGPT choose who the purported sender should be, such as someone inside the company or a vendor.

The LLM then created a phishing email for the healthcare industry – a key target of such attacks – pitching job advancement opportunities from an internal HR manager.

“I have nearly a decade of social engineering experience, crafted hundreds of phishing emails and even I found the AI-generated phishing emails to be fairly persuasive,” Carruthers wrote, adding that two of the three organizations initially participating in the experiment backed out after reviewing the phishing emails created by ChatGPT and by humans, worried that the attacks would be too successful.

In the end, the research found that 14% of employees clicked on the phishing email created by humans, while 11% clicked on the message generated by ChatGPT. Carruthers attributed the human edge to three factors: emotional intelligence, personalization, and a short, succinct subject line.

“Humans understand emotions in ways that AI can only dream of,” she wrote. “We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link.”

She also noted that the human-written phishing email included not only the recipient’s name in the introduction but also a reference to a legitimate organization. In addition, the subject line of the human-written email was “Employee Wellness Survey,” while that of the AI-generated phish was “Unlock your Future: Limited Advancements at Company X.”

The “extremely lengthy subject lines … potentially [caused] suspicion even before employees opened the mail,” Carruthers wrote.

In fact, 59% of people reported the AI-created phishing email to security, while 52% did so for the human-written phish.
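Whether gaps of a few percentage points are meaningful at this scale is worth checking. Below is a minimal two-proportion z-test in Python applied to the click and report rates above; the even 800/800 split per arm is an assumption inferred from the 1,600-employee experiment, and the helper function is illustrative rather than anything from the IBM report.

```python
# Rough two-proportion z-test for the click and report rates above.
# Arm sizes are an assumption: the article says 1,600 employees were
# split in half, so each arm is treated as n = 800.
from math import erf, sqrt

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Click rates: 14% for the human-written phish vs. 11% for the AI-generated one
print(two_proportion_z(0.14, 0.11, 800, 800))
# Report rates: 59% for the AI-generated phish vs. 52% for the human-written one
print(two_proportion_z(0.59, 0.52, 800, 800))
```

Under that assumed split, the click-rate gap comes out around z ≈ 1.8 (p ≈ 0.07) and the reporting gap around z ≈ 2.8 (p ≈ 0.005); since the real arm sizes weren’t published, treat these only as ballpark figures.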

Caution is Needed

Such statistics shouldn’t give enterprises a false sense of security, she added. While X-Force has not yet seen wide use of generative AI in current campaigns, tools like WormGPT are for sale on the dark web, an indication that attackers are already testing AI for phishing attacks.

“While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks,” Carruthers wrote.

Given that, organizations and their employees need to step up their protections against phishing campaigns, including contacting the sender when in doubt about a message’s legitimacy, revamping social engineering training programs to cover threats such as vishing, strengthening identity and access management controls, and continuously adapting and innovating around security to match the rapid evolution of threat groups.
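As one concrete aid to the “contact the sender” advice, checking whether the purported sender’s domain publishes a DMARC policy is a quick authenticity signal before trusting a message. Here is a minimal sketch, assuming the third-party dnspython package; the dmarc_policy helper and the example.com domain are illustrative, not from the article.

```python
# Sketch: check whether a purported sender's domain publishes a DMARC policy.
# Assumes the third-party dnspython package (pip install dnspython);
# "example.com" is an illustrative domain, not one from the article.
import dns.resolver
from typing import Optional

def dmarc_policy(domain: str) -> Optional[str]:
    """Return the domain's DMARC TXT record, or None if it has none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt  # e.g. "v=DMARC1; p=reject; ..."
    return None

if __name__ == "__main__":
    print(dmarc_policy("example.com"))
```

A missing or permissive policy doesn’t prove a message is malicious, but a strict one (p=reject) makes a spoofed From address much less likely.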

In addition, organizations need to dispel the idea that phishing emails – many of which are written by cybercriminals whose primary language isn’t English – are replete with bad grammar and spelling errors. The use of generative AI chatbots means the language in phishing emails written with such tools will be grammatically correct.

“Instead, we should train [employees] to be vigilant about the length and complexity of email content. Longer emails, often a hallmark of AI-generated text, can be a warning sign,” she wrote.
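That advice maps naturally onto a simple screening heuristic. The sketch below flags messages purely by subject and body length; the thresholds and the length_flags helper are illustrative assumptions, not values from the IBM report.

```python
# Sketch of the length-based heuristic described above. The thresholds
# and function are illustrative assumptions, not values from the IBM report.
SUBJECT_CHAR_LIMIT = 50   # the AI phish's long subject line drew suspicion
BODY_WORD_LIMIT = 250     # very long bodies can hint at generated text

def length_flags(subject: str, body: str) -> list[str]:
    """Return human-readable warning flags based purely on message length."""
    flags = []
    if len(subject) > SUBJECT_CHAR_LIMIT:
        flags.append(f"long subject ({len(subject)} chars)")
    word_count = len(body.split())
    if word_count > BODY_WORD_LIMIT:
        flags.append(f"long body ({word_count} words)")
    return flags

# The AI-generated subject line from the experiment trips the subject check:
print(length_flags("Unlock your Future: Limited Advancements at Company X", "short body"))
```

A check like this would only ever be one weak signal among many; the point, per Carruthers, is to train people to treat unusual length and complexity as a cue for closer scrutiny.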



Source: https://securityboulevard.com/2023/10/ibm-chatgpt-generated-can-write-convincing-phishing-emails/