
Artificial intelligence is both a curse and a blessing for cybersecurity efforts. On the one hand, cybercriminals can use the technology to launch ever-more sophisticated attacks. On the other, security teams can leverage AI to better detect potential threats.
CFOs looking to help bolster cybersecurity in their organizations need to have a thorough understanding of both sides of the coin. Here’s a look at how AI is having an impact on cybersecurity, for better or worse.
Attackers are using AI to generate more phishing and spear-phishing attacks than ever, and those attacks are harder than ever for enterprise staff to detect and stop, says Dan Lohrmann, field chief information security officer at technology services and consulting firm Presidio.
As agentic AI emerges, it will become a new cyber threat vector, Lohrmann says.
“Agentic AI, capable of independently planning and acting to achieve specific goals, will be exploited by threat actors,” Lohrmann says. “These AI agents can automate cyberattacks, reconnaissance and exploitation, increasing attack speed and precision.”
Malicious AI agents might adapt in real time, bypassing traditional defenses and increasing the complexity of attacks, Lohrmann says.
AI-driven scams and social engineering will surge, Lohrmann says. “AI will enhance scams like ‘pig butchering’ — long-term financial fraud — and voice phishing, making social engineering attacks harder to detect,” he says.
AI helps tailor phishing messages, making them highly believable and leading employees to falsely assume they’re from trusted colleagues, friends or family, says Mike Cullen, principal at advisory, tax and assurance firm Baker Tilly. “The technologies used in these sophisticated attacks pose a huge threat to organizations, especially those lacking proper employee cybersecurity awareness and training,” he says.
One of the most significant AI-based threats is deepfakes and impersonation. “Sophisticated AI-generated deepfakes and synthetic voices will enable identity theft, fraud and disruption of security protocols,” Lohrmann says.
“Bring-your-own-AI,” where staff bring their own free tools or use unauthorized paid tools and AI apps that are not secured, will accelerate, Lohrmann says. “There is an explosion of ‘shadow IT’ or ‘shadow AI’ that can lead to sensitive data being put into these consumer apps,” he says.
This unauthorized use of AI could cause privacy and security incidents and the loss of control of personally identifiable information, Lohrmann says.