The use of generative AI by hackers is making cyberthreats more frequent, more sophisticated, and more difficult to detect, and many enterprises say they are unprepared for them, according to cybersecurity firm Keeper Security.
However, while organizations will need to improve their defenses, many of the cybersecurity practices they already have in place remain foundational, even as they battle emerging AI threats, according to the company, which specializes in software designed to protect passwords, secrets, and remote connections.
According to a report released by Keeper Security this week, 95% of IT and cybersecurity professionals surveyed said that cyberattacks are more sophisticated than ever – with the two most serious emerging threats being AI-powered attacks and deepfakes – and that they aren’t ready to push back against them.
The rising tide of these more complex attacks is taking a toll, according to the report, which is based on a survey of more than 800 global IT and cybersecurity executives. About 92% of respondents said they’re seeing a year-over-year increase in cyberattacks by what Keeper called “creative and relentless” cybercriminals. The attacks also come at a cost: 73% of those surveyed said they’d been hit with an attack that caused them to lose money.
Within enterprises, the top targets of attacks are IT services groups, at 58%, followed by financial operations, supply chain management, data analysis and reporting, and R&D. Companies in the manufacturing and hospitality and travel industries are among the most frequently hacked, with attacks happening weekly, while attacks in financial services occur monthly.
All three industries reported malware among the most common attack types; respondents in hospitality also cited ransomware, while those in the other two industries said phishing is common as well.
Unsurprisingly, enterprises are seeing AI playing a significant role in phishing attacks, something that organizations and cybersecurity vendors have been seeing since soon after OpenAI released ChatGPT almost two years ago. Using generative AI tools, threat actors can smooth out the glitches in their phishing emails – such as poor grammar and syntax, awkward phrasing, and typos – that served as red flags to individuals receiving them.
About 84% of those surveyed by Keeper said phishing and smishing – text-based phishing – have become more difficult to detect due to hackers’ use of AI tools, and 42% said AI-powered phishing attacks are their top AI security concern.
In addition, 51% said they are seeing an increase in phishing attacks. Other attacks on the rise include malware (49% noted this), ransomware (44%), and password attacks (31%).
“Today, an overwhelming 67% of companies struggle to combat phishing attacks,” the report’s authors wrote. “The explosion in AI tools has intensified this problem by increasing the believability of phishing scams and enabling cybercriminals to deploy them at scale.”
In this environment, organizations are turning to AI tools to help protect them. According to market research firm Statista, the global AI cybersecurity market was at $24.3 billion last year but is expected to shoot up to $133.8 billion by 2030.
“Firms should use AI and ML [machine learning] at the forefront to mitigate attacks,” Harry Keir Hughes, principal consultant at Infosys Knowledge Institute, wrote in a blog post earlier this year. “For example, cybersecurity professionals look at logs and events – all incidents that happen on a day-in, day-out basis – to triage the most dangerous events for the firm. With AI, they can identify significant logs and act.”
He added that “AI can automate threat hunting and enhance security execution, including threat and malware detection, vulnerability detection, patch deployment, and security countermeasures and controls.”
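The triage workflow Hughes describes – sifting day-to-day logs to surface the most dangerous events – can be sketched with a simple rarity-based scorer. This is a hypothetical illustration of the idea, not tooling from Keeper or Infosys; the event fields and scoring heuristic are assumptions, and a production system would use trained models rather than raw frequency.

```python
from collections import Counter

def triage_events(events, top_n=3):
    """Rank log events so the rarest (source, action) pairs surface first.

    A stand-in for the ML scoring an AI-driven security tool would do:
    rare combinations are treated as more suspicious than routine ones.
    """
    counts = Counter((e["source"], e["action"]) for e in events)
    total = len(events)
    # Score = inverse frequency: rarer event types score closer to 1.
    scored = [
        {**e, "score": 1 - counts[(e["source"], e["action"])] / total}
        for e in events
    ]
    return sorted(scored, key=lambda e: e["score"], reverse=True)[:top_n]

logs = [
    {"source": "web", "action": "login_ok"},
    {"source": "web", "action": "login_ok"},
    {"source": "web", "action": "login_ok"},
    {"source": "vpn", "action": "login_fail"},
    {"source": "db",  "action": "bulk_export"},
]
for event in triage_events(logs, top_n=2):
    print(event["source"], event["action"], round(event["score"], 2))
# The two one-off events (VPN login failure, bulk database export)
# outrank the three routine web logins.
```

An analyst, or an automated playbook, would then act only on the short ranked list instead of the full log stream.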
At the same time, Keeper Security researchers said organizations also should continue leaning on cybersecurity practices they already have in place, such as data encryption to protect sensitive information from unauthorized access. According to the survey, 51% of respondents plan to increase data protection to push back against AI threats.
In addition, 45% said they planned to ramp up employee training and awareness, which for years has been a key tool in combating phishing attacks by teaching workers how to spot both phishing and smishing attempts. Companies will have to hone that training to deal with modern AI-generated bogus messages.
Forty-one percent said they will invest more in advanced threat detection systems to give them early warning of AI-driven threats.