Protecting Against FraudGPT
2023-10-31 21:00:25 Author: securityboulevard.com

In the last year, the world has been both amazed and shocked in equal measure following the release of ChatGPT.

Suddenly a platform is available that has more in-depth knowledge of the internet than any other technology that preceded it. The AI can be trained to rap like Eminem, write in the style of world-famous poets and translate content with such accuracy that it appears to be fluent in every global dialect. The release of such a powerful tool has opened up a world of opportunities for its users.

However, as history has proven, when a technology captures the attention of the law-abiding world, it also captures the attention of those with malicious intent, and this has spurred the launch of the first piece of generative AI designed purely for malicious purposes–FraudGPT.

FraudGPT

FraudGPT first reached the dark web in July 2023, designed specifically to support the cybercrime community. Unlike ChatGPT, FraudGPT is trained only on malicious content, so it is an expert in the world of cybercrime and hacking. The AI is currently being advertised on the dark web on a subscription model of either $200 per month or $2,000 annually, offering its users an array of services, including writing malicious code, finding vulnerabilities in systems, creating phishing emails, creating phishing pages plus helping criminals find the most hacked/spoofed websites.

From an enterprise standpoint, FraudGPT is every CEO’s worst nightmare because it provides attackers with a ready-made tool to create highly realistic phishing scams.

Phishing Emails

One of the biggest hurdles most criminals encounter when crafting phishing emails is making them look realistic. They often contain spelling mistakes, incorrect imagery or the wrong brand colors, making them much easier to recognize as fake. With FraudGPT, all of these issues are removed.


The platform is intelligent: attackers can prompt it to learn about an organization and then create mirror copies of that organization's communications, or even spoof its website. With these capabilities at attackers' fingertips, FraudGPT is a very dangerous tool.

Unlike traditional phishing scams, spoofed websites and scam emails would be created with such accuracy that they would be almost impossible to detect as fake. There would be no typos, the fonts would be accurate and the communication would follow a similar style to previous mailers. Without these typical red flags, targeted victims will trust the emails or fake sites, seeing no harm in handing over the information being requested.
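When typos and styling errors disappear as red flags, signals outside the email body become more important, such as whether the sender's domain is a near-miss of a trusted one. The sketch below illustrates this one check; the trusted-domain list, function names and edit-distance threshold are all hypothetical examples, not a production filter.

```python
# Illustrative sketch: flag sender domains that closely resemble a trusted
# domain without matching it exactly -- a signal that survives even when
# AI-generated phishing removes typos and styling mistakes from the body.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"example.com", "example.co.uk"}  # hypothetical corporate domains

def looks_spoofed(sender_domain: str, max_edits: int = 2) -> bool:
    """True if the domain is a near-miss of a trusted domain, not an exact match."""
    if sender_domain in TRUSTED:
        return False
    return any(levenshtein(sender_domain, t) <= max_edits for t in TRUSTED)

print(looks_spoofed("examp1e.com"))   # lookalike of example.com -> True
print(looks_spoofed("example.com"))   # exact trusted match -> False
print(looks_spoofed("unrelated.org")) # too different to be a lookalike -> False
```

A real mail gateway would combine many such signals (SPF/DKIM/DMARC results, link reputation, display-name mismatches); the point here is only that domain-level checks do not depend on the attacker making writing mistakes.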

So, what can businesses do to protect against this new trend in AI-generated phishing?

As with all phishing scams, attacks generated via FraudGPT are a means of extracting information. Criminals will use FraudGPT to create realistic emails or spoofed websites, but the end goal is to obtain something of value from the victim, and in most cases, that means their corporate credentials and passwords.

Criminals understand that stealing one valid set of employee credentials puts them on a pathway to execute a data breach or launch a ransomware attack, so tricking an employee into handing them over is almost always their number one objective.

Make it Harder for Criminals

This means that to improve corporate defenses against AI-generated phishing scams, organizations must make it harder for criminals to reach and steal employee credentials.

One of the best ways to improve security is to remove credentials from the hands of employees by using single sign-on (SSO) solutions and enterprise password managers (EPM). These security products take passwords away from the workforce, enabling employees to access all the applications they need to perform their roles without ever seeing, knowing or entering passwords. This makes it impossible for employees to be tricked into handing passwords over to phishers, even when they are targeted with highly realistic scams.

As FraudGPT threats continue to arise, it is essential that organizations take steps to protect their users. The primary goal for phishing is typically to steal passwords from employees, so the safest way to remediate this threat is to remove passwords from the hands of the workforce, eliminating the phishing risk altogether.

This means even when sophisticated FraudGPT phishing scams do reach employee inboxes, they don’t have the ability to hand over their passwords because they simply don’t know them.


Article source: https://securityboulevard.com/2023/10/protecting-against-fraudgpt/