AI Will Fuel Rise in Ransomware, UK Cyber Agency Says
2024-01-26 | Source: securityboulevard.com

The accelerating innovation of generative AI will increase the risks of ransomware and other cyberthreats over the next two years as bad actors integrate the technologies into their nefarious operations, according to a report this week from the UK’s top cybersecurity agency.

The National Cyber Security Centre (NCSC) warned that the volume and impact of cyberattacks will increase as AI technologies evolve: phishing and other social engineering threats will become more effective and harder to detect, cybercriminals will find it easier to build malware and target their victims, and less-skilled hackers will be able to launch more damaging attacks.

And as other security agencies around the world and cybersecurity vendors have found, threat groups already are incorporating generative AI capabilities into their arsenals.

“Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding,” the agency wrote in the report, The Near-Term Impact of AI on the Cyber Threat. “This trend will almost certainly continue to 2025 and beyond.”

AI Makes Launching Attacks Easier

The study’s authors wrote that phishing, whether it delivers malware or steals credentials, is key to bad actors gaining initial access to targeted networks to launch ransomware and other attacks, and that “it is therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat in the near term.”

“Ransomware continues to be a national security threat,” James Babbage, director general for threats at the UK’s National Crime Agency, said in a statement. “The threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals. AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods.”

Babbage added that “fraud and child sexual abuse are also particularly likely to be affected.”

Warnings from Other Places

The NCSC report echoes warnings from other organizations. The U.S. Department of Homeland Security (DHS) called out the threat of AI in the wrong hands in its 2024 threat assessment, released in November 2023.

“The proliferation of accessible artificial intelligence (AI) tools likely will bolster our adversaries’ tactics,” the report said. “Nation-states seeking to undermine trust in our government institutions, social cohesion, and democratic processes are using AI to create more believable mis-, dis-, and malinformation campaigns, while cyber actors use AI to develop new tools and accesses that allow them to compromise more victims and enable larger-scale, faster, efficient, and more evasive cyber attacks.”

Researchers at cybersecurity firm Malwarebytes noted that while generative AI will continue to enhance security tools, “AI can be used for good or malicious purposes. Threat actors can use some of the same AI tools designed to help humanity to commit fraud, scams, and other cybercrimes.”

The threats from generative AI and large language models (LLMs) include everything from automated malware and privacy risks to attacks that are larger, faster, more complex, and more sophisticated, they wrote.

In its Cybersecurity Forecast 2024 report, Google also discussed threat actors’ use of AI, as well as how security pros will use it to strengthen defenses.

Threat Groups Will Need to Adapt to AI

In its report, the NCSC took an expansive view of the multiple dangers that generative AI presents in the hands of bad actors. It will enhance their social engineering schemes, including smoothing out the translation, spelling, and grammatical mistakes that have made lure documents in phishing attacks easier to spot.

“The time between release of security updates to fix newly identified vulnerabilities and threat actors exploiting unpatched software is already reducing,” the report said. “This has exacerbated the challenge for network managers to patch known vulnerabilities before they can be exploited. AI is highly likely to accelerate this challenge as reconnaissance to identify vulnerable devices becomes quicker and more precise.”

That said, the impact of the technology on cyberattacks between now and 2025 will depend on how quickly attackers can adapt to AI, from accumulating the data needed to train LLMs and developing the necessary expertise to raising the funds.

AI will improve the techniques used to develop malware, research vulnerabilities, and move laterally through compromised networks, though for now these tasks still rely on human expertise. AI can generate malware that evades detection by security filters, but only if it is trained on high-quality exploit data. There is a “realistic possibility” that nation-states hold caches of malware large enough to train AI models to do this, the authors wrote.

Data and Other Resources Are Crucial

For now, quality training data is the key to using AI effectively in cyberattacks. Tasks such as automated reconnaissance of targets, social engineering, and malware deployment are difficult to scale, and all of them come back to data. However, as more data is successfully exfiltrated, the data feeding AI models will improve, which in turn will improve cyberattack operations.

“Expertise, equipment, time and financial resourcing are currently crucial to harness more advanced uses of AI in cyber operations,” they wrote. “Only those who invest in AI, have the resources and expertise, and have access to quality data will benefit from its use in sophisticated cyber attacks to 2025. Highly capable state actors are almost certainly best placed amongst cyber threat actors to harness the potential of AI in advanced cyber operations.”

Other nation-states and established threat groups will have enough training data and resources to see some capability gains over the next two years.

Another realistic possibility is that the factors that may limit these players now will become less important as more sophisticated generative AI models come onto the scene and their use increases. The report also points to publicly available AI models that eliminate the need for threat actors to create their own AI-based tools.

“Less-skilled cyber actors will almost certainly benefit from significant capability uplifts in this type of operation to 2025,” the authors wrote. “Commoditisation of cyber crime capability, for example ‘as-a-service’ business models, makes it almost certain that capable groups will monetise AI-enabled cyber tools, making improved capability available to anyone willing to pay.”

NCSC CEO Lindy Cameron urged organizations to strengthen their defenses against AI-powered attacks, pointing to the agency’s security guidance and secure-by-design efforts as ways to bolster their resilience.

“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” Cameron said in a statement. “The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.”
