Cybersecurity Challenges in the Age of Generative AI


By David Bader, Distinguished Professor and Director, Institute for Data Science, New Jersey Institute of Technology

Cybersecurity professionals will not only have to detect malicious events as they occur, but also proactively implement preventative measures before an attack. Their greatest challenge will be protecting against new behaviors and methods they are not yet familiar with.

Evolving Attack Vectors

Bad actors can use AI to make dynamic and sophisticated cyberattack methods even more potent, including advanced persistent threats (APTs), deepfake attacks, DDoS attacks, phishing, and targeted malware. As AI continues to advance, existing attack vectors will be deployed at exponentially increasing speed and scale.

For instance, phishing is a prevalent and well-known way for attackers to cast a wide net and reach thousands of people at once. Over 500 million phishing attacks were reported in 2022, resulting in a total loss of over $52 million in the U.S. alone. Spam filters and broader awareness of these scams have traditionally helped many avoid the generic and poorly written requests, but with AI, these emails will no longer read like broken English when sent en masse.

Spear phishing, an advanced form of phishing, targets specific individuals or groups to obtain sensitive information. Typically, attackers have to construct these attacks carefully, customizing their scams with relevant details to induce targets to reveal confidential information such as user credentials. Because of the preparation time required, cybercriminals have traditionally been unable to operate spear phishing attacks at scale.

However, GenAI now enables attackers to organize and deploy these highly targeted attacks at exponentially faster speeds. Attackers can construct a massive number of phishing attacks at the click of a button, directing the AI to write a customized email for each victim that fluently impersonates a boss, co-worker, or customer. Bad actors who have collected personal information can use AI to parse and organize it in moments, then tailor individual attacks to unsuspecting victims. For example, a scammer who knows a new employee has just joined a company can send them an email posing as an HR representative who needs to re-verify information, tricking them into handing over their Social Security number.
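
On the defensive side, even simple message-level heuristics still catch a large share of impersonation attempts. The Python sketch below flags emails whose display name sounds internal but whose sending address belongs to an outside domain; the trusted-domain list and sample message are illustrative assumptions, not a production filter.

```python
# Minimal impersonation check: flag messages whose display name sounds
# internal but whose sending address comes from an outside domain.
# TRUSTED_DOMAINS and the sample message are illustrative assumptions.
from email import message_from_string
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example.com"}  # assumption: the organization's own domains

def looks_like_impersonation(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    # A familiar-sounding display name paired with an external sending
    # domain is a common spear-phishing pattern worth routing to review.
    return bool(display_name) and domain not in TRUSTED_DOMAINS

sample = (
    "From: HR Department <hr-verify@mail-example.net>\n"
    "Subject: Please re-verify your details\n\n..."
)
print(looks_like_impersonation(sample))  # True: internal name, external domain
```

A check this crude will not stop a well-crafted attack on its own, but routing such messages to human review raises the cost of exactly the mass-personalized campaigns GenAI makes cheap.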

Social engineering attacks will also become more devastating due to AI's increased capacity for camouflage. Attackers can use generative AI to hold real-time conversations, mimic voices, alter videos, and generate realistic images that are virtually impossible to distinguish from reality. Deepfake technology's chilling ability to impersonate real people can deceive even the savviest cybersecurity professionals.

In one example of such an attack, victims are lured into connecting with online communities of "friends," only to realize later that they have been sharing highly sensitive information (such as a company's confidential data, product plans, and customer records) with malicious actors.

In malware-focused attacks, AI can also be used to find vulnerabilities in an organization's IT infrastructure and to launch APT or DDoS attacks that quickly target its Achilles' heel while remaining undetected for longer periods of time. Rather than relying on a single instance of malware, generative AI can construct tailored attacks against specific targets at an unprecedented rate. This is akin to going from testing a single key on thousands of locked cars in a parking lot to being able to cut a working key for every car in the lot.
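
Defenders can run the same kind of automated reconnaissance against their own infrastructure before an attacker's AI does. Below is a minimal sketch using only Python's standard library that checks which common ports on a host accept TCP connections; the host and port list are illustrative, and it should only ever be pointed at systems you own and are authorized to scan.

```python
# Minimal self-assessment sketch: test which common ports on a host accept
# TCP connections, the same automated reconnaissance an AI-assisted attacker
# can now run at scale. Host and port list are illustrative assumptions.
import socket

COMMON_PORTS = [22, 80, 443, 3389]  # SSH, HTTP, HTTPS, RDP (a tiny sample)

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

print(open_ports("127.0.0.1", COMMON_PORTS))
```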

Adapting Cybersecurity Defense

From a risk management perspective, it's important to recognize that AI creates novel attack vectors for existing threats as well as the potential for entirely new ones. Adaptive responses and new controls will become the norm for evaluating an organization's cybersecurity risk posture. At the same time, do not overlook the importance of robust, foundational cybersecurity protections that remain effective against traditional risks.

The first step should be to understand the organization's people and infrastructure, and to monitor both for anomalies that could indicate an attack. Many of the same protections used against traditional threats will also matter against AI-generated ones, so maintain a strong security posture. Regular employee training to recognize these new types of threats will also go a long way toward making the organization safer.
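
As one illustration of what anomaly monitoring can mean in practice, the following minimal Python sketch flags accounts whose most recent daily login count sits far outside their own historical baseline. The z-score threshold and sample data are assumptions chosen for readability, not tuned values.

```python
# Minimal anomaly sketch: flag users whose latest daily login count deviates
# sharply from their own baseline. Threshold and data are illustrative.
from statistics import mean, stdev

def flag_anomalies(daily_logins: dict[str, list[int]],
                   z_threshold: float = 3.0) -> list[str]:
    flagged = []
    for user, counts in daily_logins.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        baseline, spread = mean(history), stdev(history)
        if spread and abs(latest - baseline) / spread > z_threshold:
            flagged.append(user)
    return flagged

logins = {
    "alice": [5, 6, 4, 5, 6, 40],  # sudden spike: worth a look
    "bob":   [3, 4, 3, 4, 3, 4],   # steady baseline: ignored
}
print(flag_anomalies(logins))  # ['alice']
```

Real deployments watch many more signals (logins, data egress, privilege changes) and use far richer models, but the principle is the same: know the baseline, then notice departures from it.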

Additionally, harnessing AI and automation to strengthen defense mechanisms will be part of adapting to the evolving landscape of AI-related attacks. According to IBM's 2023 Cost of a Data Breach Report, organizations with extensive use of both AI and automation experienced a data breach lifecycle that was 108 days shorter than that of studied organizations that had not deployed these technologies (214 days versus 322 days). When organizations discovered breaches themselves, they experienced nearly $1 million less in breach costs than when breaches were disclosed by attackers.
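
Automation of this kind can be as simple as wiring an anomaly flag directly to a containment action and an audit log rather than a manual ticket queue. In the hypothetical sketch below, suspend_sessions is a stand-in for an organization's own IAM or SOAR integration; only the control flow is meant literally.

```python
# Hypothetical automation sketch: route a flagged account directly to a
# containment action and an audit log instead of a manual ticket queue.
# suspend_sessions is a stand-in for a real IAM/SOAR integration.
from datetime import datetime, timezone

def suspend_sessions(user: str) -> None:
    # Placeholder: a real deployment would call the organization's IAM API
    # here to revoke active sessions and force re-authentication.
    print(f"sessions revoked for {user}")

def auto_contain(flagged_users: list[str]) -> list[str]:
    audit_log = []
    for user in flagged_users:
        suspend_sessions(user)
        audit_log.append(f"{datetime.now(timezone.utc).isoformat()} contained {user}")
    return audit_log

# 'alice' stands in for output from an anomaly monitor like the one above
for entry in auto_contain(["alice"]):
    print(entry)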

As the capabilities of AI continue to advance, both its potential for positive transformation by cybersecurity professionals and its potential for misuse by malicious actors become increasingly evident. Whether an attack is generated by AI or not, the best CISOs and risk managers will follow the timeless practice of applying the latest security patches and minimizing the opportunities for hackers to compromise systems and data.

Dr. David Bader is a Distinguished Professor, founder of the Department of Data Science in the Ying Wu College of Computing, and Director of the Institute for Data Science at New Jersey Institute of Technology.