IT security experts from the PSW GROUP explain how hackers use AI for cyber attacks. […]
AI is already part of everyday life: science benefits from it, as do healthcare, the automotive industry, marketing and robotics. Cybersecurity has also recognized its benefits: artificial intelligence is opening up new ways to defend against cyber risks. In the fight against cyber threats, humans and AI now form a team.
“And that’s a good thing, because the threat landscape changes almost daily,” says Patrycja Schrenk, Managing Director of the PSW GROUP. “Artificial intelligence has long since arrived on the side of the cybercriminals. Just as AI plays a major role in detecting threats and defending against attacks, cybercriminals are increasingly using it as a weapon.”
The IT security expert adds: “Just as learning algorithms can recognize behavior patterns in attacks and act specifically against them, criminals use the same artificial intelligence for penetration techniques and for analyzing and imitating behavior, making their attacks more targeted, faster, better coordinated and, above all, more efficient.”
Although not every cybercriminal has the knowledge to program their own AI-based systems, third parties on the darknet now offer such systems as “AI as a Service”. Patrycja Schrenk’s team has compiled typical ways criminals put artificial intelligence to use.
If hackers want to find vulnerabilities through which they can penetrate systems or smuggle in malware, artificial intelligence makes the search easy: AI can automatically probe many interfaces of a victim’s systems for weaknesses. When cybercriminals discover a vulnerability this way, the AI can determine whether it is suitable as a gateway for malicious code or for paralyzing the system. Thanks to artificial intelligence, attacks can even adapt dynamically.
Patrycja Schrenk explains: “If manufacturers respond with security patches, for example, the intelligent malicious code automatically adapts so that it can continue to do damage. Machine learning ensures that the malicious code keeps learning and can adjust to such changes.”
AI is also often used in conjunction with malware distributed via e-mail: “Thanks to AI, the malware can imitate user behavior even better. The texts in the e-mails are written with such semantic quality that recipients can find it extremely difficult to distinguish them from genuine e-mails. The artificial intelligence learns from its mistakes and further optimizes its tactics with every subsequent attack,” explains Schrenk.
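The principle behind such machine-written messages — a model that learns word patterns from sample text and then produces similar text — can be sketched with a tiny word-level Markov chain. This is only a toy stand-in for the far more capable language models the article alludes to; the training corpus below is invented for illustration.

```python
import random
from collections import defaultdict

# Invented sample text in the style of phishing messages; a real system
# would be trained on a vastly larger corpus.
corpus = (
    "please review the attached invoice and confirm payment today "
    "please confirm your account details to avoid suspension "
    "your account requires verification please review the attached document"
)

def build_model(text):
    """Record which words follow which (first-order Markov chain)."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain from a start word, picking learned successors."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

model = build_model(corpus)
print(generate(model, "please"))
```

Every generated sentence recombines only transitions seen in the training text, which is exactly why larger models trained on real correspondence produce messages that are hard to tell from genuine ones.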
Extortion – for example, using ransomware, which is typically distributed via e-mail – is currently considered one of the most common attack methods. “In order to blackmail executives, for example, sufficient information about the victims is needed. Cybercriminals now rely on artificial intelligence to obtain it. Using AI, they search social networks, forums and other websites specifically for information about the target persons – far more efficiently than would be possible without AI,” says Patrycja Schrenk. Password guessing has also become easier: “Already today, AI systems exist that successfully guess passwords through machine learning,” adds the IT security expert.
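The idea of guessing passwords “through machine learning” can be illustrated with a minimal sketch: a character-bigram model trained on example passwords, which then ranks candidate guesses by how plausible they look. The training list and candidates below are invented; real tools use much larger leaked-password corpora and richer models.

```python
import math
from collections import defaultdict, Counter

# Invented example passwords standing in for a leaked-password corpus.
training = ["password1", "letmein", "dragon123", "sunshine", "password123"]

# Count character-to-character transitions, with ^ and $ as start/end markers.
counts = defaultdict(Counter)
for pw in training:
    padded = "^" + pw + "$"
    for a, b in zip(padded, padded[1:]):
        counts[a][b] += 1

def log_likelihood(candidate):
    """Higher score = more similar to the training passwords."""
    padded = "^" + candidate + "$"
    score = 0.0
    for a, b in zip(padded, padded[1:]):
        total = sum(counts[a].values())
        # add-one smoothing so unseen transitions are unlikely, not impossible
        score += math.log((counts[a][b] + 1) / (total + 27))
    return score

guesses = ["password9", "zqxjkvwp"]
ranked = sorted(guesses, key=log_likelihood, reverse=True)
print(ranked[0])  # prints "password9" - it resembles the training data
```

A cracker built on this principle tries human-plausible candidates first instead of brute-forcing the whole keyspace, which is what makes learned guessing so much faster.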
But artificial intelligence also bypasses captchas, which are meant to act as spam protection: image mosaics or simple equations let systems verify that users are humans and not machines. “Unfortunately, artificial intelligence breaks through this barrier very easily: machine learning feeds it with so many different images that it eventually recognizes them automatically and solves the captchas. This effectively undermines the security mechanism and makes it impossible to distinguish between human and machine,” says Schrenk.
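The mechanism described — show a system enough labelled images and it learns to recognize new, slightly different ones — can be sketched with a nearest-neighbour classifier on toy 5×5 bitmaps. The bitmaps are invented for illustration; real captcha solvers use deep networks trained on huge labelled image sets, but the principle is the same.

```python
# Labelled "training images": toy 5x5 bitmaps of the characters 0 and 1.
TEMPLATES = {
    "0": [
        "01110",
        "10001",
        "10001",
        "10001",
        "01110",
    ],
    "1": [
        "00100",
        "01100",
        "00100",
        "00100",
        "01110",
    ],
}

def flatten(bitmap):
    """Turn a bitmap of '0'/'1' rows into a flat list of pixel ints."""
    return [int(px) for row in bitmap for px in row]

def classify(bitmap):
    """Label an unseen image by its closest template (Hamming distance)."""
    pixels = flatten(bitmap)
    best_label, best_dist = None, float("inf")
    for label, template in TEMPLATES.items():
        dist = sum(a != b for a, b in zip(pixels, flatten(template)))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# A "0" with one flipped pixel is still recognized correctly:
noisy_zero = [
    "01110",
    "10001",
    "10101",  # noise pixel in the middle
    "10001",
    "01110",
]
print(classify(noisy_zero))  # prints "0"
```

Because the classifier matches overall similarity rather than exact pixels, distortion and noise — the very things captchas rely on — no longer stop the machine, which is the barrier-breaking effect Schrenk describes.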