
How Security Leaders Are Reacting to Generative AI: 98% Worried About the Risk

To understand how organizations are responding to the evolving threat of AI, we surveyed 300 cybersecurity leaders. Here are a few key takeaways.
October 24, 2023

The rise of generative AI has been nothing short of meteoric. Tools like ChatGPT and Google Bard are becoming increasingly popular among people who use them for legitimate purposes, drawn by the efficiency they offer. However, cybercriminals are also embracing this technology. Generative AI is likely responsible for the recent surge in both the volume and complexity of email attacks targeting organizations. And this is only the beginning.

Attackers are already using it to craft more convincing phishing emails free of misspellings or grammatical mistakes, launch highly persuasive business email compromise attacks, and generate polymorphic malware. Unfortunately, the threat is poised to become more severe in the future—and security leaders are taking note.

To gain insight into how leaders are approaching this threat, particularly within the realm of email security, we surveyed 300 cybersecurity stakeholders from organizations of various sizes and industries. Here are some of the results.

98% of Security Stakeholders Concerned About Cybersecurity Risks of Generative AI

As generative AI has dominated the headlines in recent months and with AI-generated attacks already targeting organizations, we anticipated that security leaders would report concerns regarding the technology’s potential to intensify email attacks. That’s exactly what our survey found.

[Chart: Concern about the security risks of generative AI]

When asked how concerned stakeholders within their organizations are about the security risks currently posed by generative AI, the majority of respondents voiced significant levels of concern. Virtually all of the respondents (98%) indicated at least some degree of concern, and nearly three out of four (73%) characterized their concerns as "significant" or "extreme." Only 2% of respondents reported having no concerns whatsoever.

These worries did not vary substantially by organization size. Among companies with fewer than 2,500 employees, 71% of respondents were “significantly” or “extremely” concerned, compared with 74% at midsize organizations with 2,500-9,999 employees and 72% at enterprises with more than 10,000 employees.

92% Agree That AI-Driven Security Is a Necessity

Security stakeholders are aware that the rise of generative AI will require them to implement new tools and deploy more advanced email security capabilities. Further, there’s widespread agreement that there is a need to specifically implement AI-driven security solutions to stay ahead of threats that will only grow in volume and sophistication over time.

[Chart: AI-driven security is a must-have]

A clear majority of survey participants (92%) see value in using AI-driven security solutions to defend against today’s AI-generated email threats. And less than one percent of respondents strongly disagree with that viewpoint. Additionally, most are confident that AI will have a major impact on their organizations’ security strategies within the next few years.

Senior executives are particularly likely to hold this viewpoint, with 96% of the CIOs, CISOs, and other senior technology leaders who participated in this survey saying they strongly or somewhat agree.

94% Believe That AI Will Have a Major Impact on Their Future Security Strategy

To better understand how the emergence of generative AI is influencing the way leaders and professionals plan their cybersecurity initiatives, we asked survey participants whether they believe AI will play a significant role in shaping their organization's cybersecurity strategy within the next two years.

[Chart: AI's impact on security plans over the next two years]

Given the results of the previous question, it is no surprise that more than 94% said that AI will indeed have a major impact on their cybersecurity strategy over the next two years. Both individuals with hands-on involvement in these technologies and those responsible for establishing long-term strategic objectives understand the transformative effect generative AI will have on cybersecurity.

Furthermore, over 97% of executive leaders acknowledge the potential of AI in cybersecurity, and a resounding 100% of email and messaging managers anticipate a shift toward more AI-centric solutions. This is also true for frontline information security practitioners like security analysts and incident responders—97% of whom agreed with the statement.

The Need for Good AI to Combat Bad AI

For better or worse, generative AI and the attacks it enables are here to stay.

Unfortunately, traditional email security tools like secure email gateways (SEGs), designed for on-premises servers and rule-based defense, struggle against modern threats. They rely primarily on identifying known indicators of compromise, leaving them ill-equipped to stop the full range of attacks, particularly those built on social engineering and generative AI.
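To make that limitation concrete, here is a minimal, hypothetical sketch of indicator-based filtering in Python. The domains, hashes, and verdict logic are invented for illustration and do not represent any vendor's actual rules; the point is simply that a message matching no known indicator sails through.

```python
# Hypothetical sketch of signature-based filtering, as a typical SEG might apply it.
# The blocklists and message fields below are illustrative, not any vendor's real rules.

KNOWN_BAD_DOMAINS = {"malicious-example.test"}
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # hash of a previously seen payload

def seg_verdict(sender_domain: str, attachment_hashes: list[str]) -> str:
    """Block only when a known indicator of compromise matches."""
    if sender_domain in KNOWN_BAD_DOMAINS:
        return "block"
    if any(h in KNOWN_BAD_HASHES for h in attachment_hashes):
        return "block"
    # A well-written, AI-generated BEC email with no payload and a never-before-seen
    # sender domain matches nothing above and is delivered.
    return "deliver"

print(seg_verdict("new-look-alike-domain.test", []))  # -> "deliver"
```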

In contrast, integrated cloud email security solutions (ICES) employ machine learning and behavioral AI to baseline normal behavior and detect anomalies through identity modeling, behavioral graphs, and content analysis. This AI-based technology considers factors like relationships, geolocation, device usage, and login patterns to spot malicious behavior even when no traditional indicators of compromise are present.
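As a rough illustration of that idea, the sketch below scores an email event against a per-sender behavioral baseline. The feature names, weights, and threshold are invented for this example; a real ICES platform learns these baselines from historical mail and sign-in activity rather than hard-coding them.

```python
# Hypothetical sketch of behavioral anomaly scoring in the spirit of an ICES approach.
# Baselines, weights, and the threshold are invented for illustration only.

from dataclasses import dataclass

@dataclass
class EmailEvent:
    sender: str
    recipient: str
    sender_country: str
    device_id: str
    asks_for_payment: bool

# Per-identity baselines built from previously observed behavior (illustrative values).
BASELINE = {
    "cfo@example.com": {
        "usual_countries": {"US"},
        "known_devices": {"laptop-123"},
        "usual_recipients": {"controller@example.com"},
    }
}

def anomaly_score(event: EmailEvent) -> float:
    """Sum weighted deviations from the sender's learned baseline."""
    profile = BASELINE.get(event.sender)
    if profile is None:
        return 0.5  # unknown sender: moderate suspicion by default
    score = 0.0
    if event.sender_country not in profile["usual_countries"]:
        score += 0.4  # unusual geolocation
    if event.device_id not in profile["known_devices"]:
        score += 0.3  # unfamiliar device
    if event.recipient not in profile["usual_recipients"]:
        score += 0.1  # atypical relationship
    if event.asks_for_payment:
        score += 0.3  # risky intent in the message content
    return score

event = EmailEvent("cfo@example.com", "ap-clerk@example.com", "RO", "phone-999", True)
print(anomaly_score(event) > 0.7)  # True: flagged even though no known IOC is present
```

Even though the message in this example contains no known-bad domain or attachment, the combination of unusual geolocation, an unfamiliar device, an atypical recipient, and payment-related intent pushes its score past the threshold.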

Even if a SEG could consistently detect when an email was created using AI, simply blocking all AI-generated emails is not a viable solution, as this could disrupt legitimate business communications that rely on generative AI for tasks like drafting marketing content, customer responses, or managerial updates.

Thus, the behavioral, anomaly-based approach employed by ICES solutions is likely more effective against AI-generated email attacks than the signature-based methods of secure email gateways. This approach also minimizes false positives by intelligently screening email content for suspicious and malicious activity, rather than relying solely on AI indicators.

Abnormal’s AI-native email security platform leverages machine learning to stop sophisticated inbound email attacks and email platform attacks that evade traditional solutions like SEGs. The anomaly detection engine uses identity and context to assess risk in every cloud email event, preventing inbound attacks, detecting compromised accounts, and remediating malicious emails. By using defensive AI to detect attacks, Abnormal ensures that your organization stays protected from today’s most pertinent threat: malicious AI.

For additional insights into how security leaders are responding to the threat of generative AI, download the report.

Or to see how Abnormal can protect your organization from AI-generated threats, schedule a demo.

