Home Bots & Business ‘AI Agents Will Reduce the Time It Takes to Exploit Account Exposures by 50%’

‘AI Agents Will Reduce the Time It Takes to Exploit Account Exposures by 50%’

AI Agents Will Increasingly Exploit Weak Authentication by Automating Credential Theft and Compromising Authentication Communication Channels

by Marco van der Hoeven

By 2027, AI agents will reduce the time it takes to exploit account exposures by 50%, according to Gartner. “Account takeover (ATO) remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered by a variety of means including data breaches, phishing, social engineering and malware,” said Jeremy D’Hoinne, VP Analyst at Gartner. “Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms.”
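To illustrate the pattern D’Hoinne describes, the sketch below flags a possible credential-stuffing run by counting failed logins against distinct usernames from a single source IP within a sliding time window. This is a minimal illustration, not a production detector: the window size, threshold, IP address and usernames are all hypothetical.

```typescript
// Minimal sketch: flag credential stuffing by counting failed logins
// against *distinct* usernames from one source IP in a sliding window.
// All thresholds and identifiers below are illustrative assumptions.
type Attempt = { ip: string; username: string; timestamp: number };

const WINDOW_MS = 10 * 60 * 1000;   // 10-minute sliding window (assumption)
const DISTINCT_USER_THRESHOLD = 20; // illustrative cutoff

const failedAttempts = new Map<string, Attempt[]>();

function recordFailedLogin(attempt: Attempt): boolean {
  const history = failedAttempts.get(attempt.ip) ?? [];
  // Keep only attempts that are still inside the window.
  const recent = history.filter(
    (a) => attempt.timestamp - a.timestamp < WINDOW_MS
  );
  recent.push(attempt);
  failedAttempts.set(attempt.ip, recent);

  // Many failures across many *different* accounts from one IP is the
  // signature of stuffing, unlike one user mistyping their password.
  const distinctUsers = new Set(recent.map((a) => a.username)).size;
  return distinctUsers >= DISTINCT_USER_THRESHOLD;
}

// Example: simulate a bot cycling through leaked credential pairs.
const now = Date.now();
for (let i = 0; i < 25; i++) {
  const flagged = recordFailedLogin({
    ip: "203.0.113.7",
    username: `user${i}@example.com`,
    timestamp: now + i * 1000,
  });
  if (flagged) {
    console.log(`Possible credential stuffing from 203.0.113.7 at attempt ${i + 1}`);
    break;
  }
}
```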

AI agents will enable automation for more steps in ATO, from social engineering based on deepfake voices to end-to-end automation of user credential abuse. Because of this, vendors will introduce products for web, app, API and voice channels to detect, monitor and classify interactions involving AI agents. “In the face of this evolving threat, security leaders should expedite the move toward passwordless phishing-resistant MFA,” said Akif Khan, VP Analyst at Gartner. “For customer use cases in which users may have a choice of authentication options, educate and incentivise users to migrate from passwords to multidevice passkeys where appropriate.”
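For readers who want to see what the passkey migration Khan recommends looks like in practice, below is a minimal browser-side sketch of passkey registration using the W3C WebAuthn API (navigator.credentials.create). The relying-party ID, user details and challenge handling are placeholder assumptions; in a real deployment the challenge is issued and verified server-side.

```typescript
// Minimal browser-side sketch of passkey registration via the W3C
// WebAuthn API. The relying party, user handle and challenge handling
// below are placeholders; a real server issues and verifies the challenge.
async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // Must come from the server and be verified there; random here only
    // to keep the sketch self-contained.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Corp", id: "example.com" }, // placeholder relying party
    user: {
      id: new TextEncoder().encode("user-123"),      // placeholder user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256
      { type: "public-key", alg: -257 }, // RS256
    ],
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // biometric or PIN on the authenticator
    },
  };
  // The private key never leaves the authenticator, and signatures are
  // bound to the relying-party origin: this origin binding is what makes
  // the scheme phishing-resistant, unlike a reusable password.
  return navigator.credentials.create({ publicKey });
}
```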

Defending Against the Rise and Expansion of Social Engineering Attacks

Along with ATO, technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts that by 2028, 40% of social engineering attacks will target executives as well as the broader workforce. Attackers are now combining social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to deceive employees during calls.

Although only a few high-profile cases have been reported, these incidents have underscored the credibility of the threat and resulted in substantial financial losses for victim organisations. Deepfake detection is still in its early stages, particularly when applied to the diverse attack surfaces of real-time person-to-person voice and video communications across various platforms.

“Organisations will have to stay abreast of the market, and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques,” said Manuel Acosta, Sr. Director Analyst at Gartner. “Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step.”
