DeFi hacks are worse than ever, Chainalysis and PeckShield report


But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

Some of these groups have so much cash — or bitcoin, rather — that they could now potentially compete with legit security firms for talent in AI and machine learning, according to Hyppönen, the chief research officer at cybersecurity firm WithSecure.

Ransomware gang Conti pulled in $182 million in ransom payments during 2021, according to blockchain data platform Chainalysis. Leaks of Conti’s chats suggest that the group may have invested some of its take in pricey “zero day” vulnerabilities and the hiring of penetration testers.

“We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they will start hiring ML and AI experts to automate their malware campaigns,” Hyppönen told Protocol.

“It’s not a far reach to see that they will have the capability to offer double or triple salaries to AI/ML experts in exchange for them to go to the dark side,” he said. “I do think it’s going to happen in the near future — if I would have to guess, in the next 12 to 24 months.”

If this happens, Hyppönen said, “it would be one of the biggest challenges we’re likely to face in the near future.”

AI for scaling up ransomware

While doom-and-gloom cybersecurity predictions are abundant, Hyppönen is not just any prognosticator: he has spent three decades on matters of cybercrime. He has been with his current company, known until recently as F-Secure, since 1991, and has been researching and battling cybercriminals since the field's earliest days.

In his view, the arrival of AI and machine learning on the attacker side would distinctly change the game. He's not alone in thinking so.

When it comes to ransomware, for instance, automating large portions of the process could mean an even greater acceleration in attacks, said Mark Driver, a research vice president at Gartner.

Currently, ransomware attacks are often highly tailored to the individual target, making them difficult to scale, Driver said. Even so, the number of ransomware attacks doubled year-over-year in 2021, SonicWall has reported, and ransomware has been getting more successful as well. The percentage of affected organizations that agreed to pay a ransom shot up to 58% in 2021, from 34% the year before, Proofpoint has reported.

However, if attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals.

“It’s not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely,” Driver said. Ultimately, “it’s terrifying.”

The prediction that AI is coming to cybercrime in a big way is not brand new, but it has yet to manifest, Hyppönen said. Most likely, that's because cybercriminals have so far lacked the means to compete with deep-pocketed enterprise tech vendors for the necessary talent.

The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti’s $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.

Hyppönen estimated that fewer than a dozen ransomware groups might have the capacity to invest in hiring AI talent in the next few years, primarily gangs affiliated with Russia.

‘We would definitely not miss it’

If cybercrime groups hire AI talent with some of their windfall, Hyppönen believes the first thing they'll do is automate the most manually intensive parts of a ransomware campaign. The actual execution of a ransomware attack remains difficult, he said.

“How do you get it on 10,000 computers? How do you find a way inside corporate networks? How do you bypass the different safeguards? How do you keep changing the operation, dynamically, to actually make sure you’re successful?” Hyppönen said. “All of that is manual.”

Monitoring systems, changing the malware code, recompiling it and registering new domain names to avoid defenses — things it takes humans a long time to do — would all be fairly simple to do with automation. “All of this is done in an instant by machines,” Hyppönen said.

That means it should be very obvious when AI-powered automation comes to ransomware, according to Hyppönen.

“This would be such a big shift, such a big change,” he said. “We would definitely not miss it.”

But would the ransomware groups really decide to go to all this trouble? Allie Mellen, an analyst at Forrester, said she’s not as sure. Given how successful ransomware groups are already, Mellen said it’s unclear why they would bother to take this route.

“They’re having no problem with the approaches that they’re taking right now,” she said. “If it ain’t broke, don’t fix it.”

Others see a higher likelihood of AI playing a role in attacks such as ransomware. Like defenders, ransomware gangs clearly have a penchant for evolving their techniques to try to stay ahead of the other side, said Ed Bowen, managing director for the AI Center of Excellence at Deloitte.

“I’m expecting it — I expect them to be using AI to improve their ability to get at this infrastructure,” Bowen said. “I think that’s inevitable.”

Lower barrier to entry

While AI talent is in extremely short supply right now, that will start to change in coming years as a wave of people graduate from university and research programs in the field, Bowen noted.

The barriers to entry in the AI field are also falling as tools become more accessible to users, Hyppönen said.

“Today, all security companies rely heavily on machine learning — so we know exactly how hard it is to hire experts in this field. Especially people who have expertise both in cybersecurity and in machine learning. So these are hard people to recruit,” he told Protocol. “However, it’s becoming easier to become an expert, especially if you don’t need to be a world-class expert.”

That dynamic could increase the pool of candidates for cybercrime organizations who are, simultaneously, richer and “more powerful than ever before,” Hyppönen said.

Should this future come to pass, it would have massive implications for cyber defenders, since the likely result is a greater volume of attacks aimed at a broader range of targets.

Among other things, this would likely mean that the security industry would itself be looking to compete harder than ever for AI talent, if only to try to stay ahead of automated ransomware and other AI-powered threats.

Between attackers and defenders, “you’re always leapfrogging each other” on technical capabilities, Driver said. “It’s a war of trying to get ahead of the other side.”


