Despite 80% of IT leaders expressing confidence that their organization won’t fall for phishing attacks, nearly two-thirds admitted they’ve clicked on phishing links themselves.
This overconfidence is coupled with concerning behaviors, as 36% of IT leaders have disabled security measures on their systems, undermining organizational defenses.
These were among the chief results of an Arctic Wolf and Sapio Research survey of more than 1,500 senior IT and security decision-makers and end-users.
The report also revealed the consequences of security failures are steep, with 27% of IT leaders having seen employees terminated for falling victim to scams.
However, poor security hygiene persists, with 68% of IT and cybersecurity leaders acknowledging they reuse system passwords, a significant vulnerability in today’s threat landscape.
Mika Aalto, co-founder and CEO at Hoxhunt, explained that learning occurs at the boundaries of knowledge, so it’s important that IT and business leaders are challenged with training that’s automatically optimized to what’s called the zone of proximal development.
“The targeted, personalized training needs to be based on threat reporting behavior, because that’s the ideal result of a phishing attack,” he said.
If behavior-based training feels too easy, users will lose interest, and if it’s too hard or too negative, they’ll actively avoid participating.
“Your phishing training program is the best opportunity to build a strong security culture from the top down based on psychological safety and transparency,” he said.
Aalto suggested institutionalizing threat reporting as a core cultural value of the company, with incentives, financial or otherwise, and a detection and response platform that is fueled by human threat intelligence.
“The financial impact of catching and neutralizing phishing attacks before their damage can spread is a clear business outcome that can transform security culture,” he said.
Stephen Kowski, field CTO at SlashNext Email Security+, said open communication channels are key to bridging the gap between IT leaders’ confidence and end users’ comfort.
“Regular feedback sessions, anonymous reporting options, and clear, jargon-free security communications can help demystify cybersecurity for non-technical staff,” he said.
Recognizing and rewarding employees who report potential threats can further encourage a proactive security culture.
“Creating a culture where employees feel comfortable reporting suspicious messages without fear of repercussion is crucial for maintaining a strong defense against evolving threats,” Kowski added.
Adam Marrè, CISO of Arctic Wolf, cautioned there is a line between effectiveness and tediousness in security, and it is on IT and business leaders to find it for their own organization.
“The best security hygiene around passwords and credentials will come from measures that have as few workarounds as possible,” he explained.
That means requiring a password manager, multi-factor authentication or a VPN for all employees, even if some deem those measures inconvenient.
“Cyber risk is business risk, so taking an extra minute to log in to an internal system is a relatively small price to pay for greater protection,” Marrè added.
The survey also highlights the slow adoption of AI policies, with only 60% of IT leaders reporting their organization has such a policy in place.
However, just 29% of end users are even aware of these policies, indicating a communication gap that could expose organizations to further risks.
“AI is continuing to change and evolve, meaning that our policies and regulations surrounding it as a tool need to be constantly updated,” Marrè said.
He noted those constant updates can be difficult to communicate as the changes are taking place at an exponential rate.
However, a company page that is updated regularly, or email updates disseminated in near-real time as policies change, could be effective avenues for keeping everyone in the loop.
“Another piece of this is having IT teams evaluate what AI tools are necessary for the company to permit versus which ones to ban in order to limit the risk of an AI-related incident,” Marrè added.