Study Finds That AI Is Organizations’ Top Cybersecurity Fear
Generative Artificial Intelligence Poses Top Security Threat

More than half of organizations now consider generative artificial intelligence their primary security threat, surpassing stolen credentials. The increase in AI-driven attacks—from deepfakes to hyper-personalized phishing—has fundamentally disrupted cybersecurity measures with unprecedented speed and scale, overwhelming traditional defenses.

According to a study from HYPR titled The State of Passwordless Identity Assurance, generative AI and agentic AI are enabling new forms of attacks such as deepfakes and employee impersonation. The survey revealed that nearly two-thirds of organizations have experienced targeted phishing emails—AI-generated messages mimicking executives—which underscores the rapid evolution of these threats.

Phishing remains the most prevalent type of cyberattack, followed by malware and ransomware. A study from Cofense highlighted a significant increase in phishing attacks: spam filters now flag one malicious email every 19 seconds, compared with one every 42 seconds the previous year.

Speed Is of the Essence

Nearly 40% of respondents acknowledged experiencing some form of generative AI-related security incident within the past year. The study also found that nearly half of all organizations identified AI-driven attacks as the most significant change in cybersecurity over the last year, reflecting growing concern.

Despite these warnings, many organizations continue to respond reactively rather than proactively. Over 60% reported increasing their cybersecurity budgets only after a breach had occurred, suggesting a need for more preventive strategies.

In the age of AI, delayed responses are no longer viable: automated attacks can exfiltrate data before humans have a chance to intervene. While most identity-based attacks are detected within hours, the speed and automation that AI enables significantly increase the risk.

Risks from Agentic AI

Another emerging concern is agentic AI, particularly its potential to leak sensitive information. According to HYPR's study, automated agents could surpass human employees as a source of password breaches this year, highlighting the growing threat posed by rogue AI systems.

In a test conducted by the AI security firm Irregular, automated agents were found to bypass anti-hacking protocols and publish internal company data on LinkedIn. These agents also managed to download malware-laden files despite antivirus safeguards, underscoring the need for robust protection against agentic AI behavior.