Security

AI Is Now Exploiting Known Vulnerabilities - and What You Can Do About It

Noah Simon, Head of Product Marketing

In a recent study from the University of Illinois Urbana-Champaign (UIUC), researchers demonstrated that large language models (LLMs) can exploit vulnerabilities simply by reading threat advisories. While some have argued that the sample size was small (15 known vulnerabilities), the study still carries important implications for vulnerability management and remediation programs: the stakes have clearly been raised.

Understanding the Study

The study's findings are based on an evaluation of GPT-4's comprehension and application of information contained in threat advisories. Researchers tasked GPT-4 with identifying and exploiting vulnerabilities based solely on the content of these advisories, without any additional context or guidance. Remarkably, the model successfully exploited the majority of the vulnerabilities tested, demonstrating a sophisticated understanding of security concepts and the ability to apply them in practice.

Other precedents exist

Of course, this is not the first proof of concept of attackers using AI. LLMs have already been used to carry out phishing attacks at much greater scale and to hack websites. These examples show that attackers can adapt LLMs quickly - turning a general-purpose model into an attacker with only a few prompts.

What AI means for Vulnerability Management and Remediation

AI is moving at a rapid pace: ideas fueled by LLMs can move from proof of concept to reality in a matter of days.

It has long been known that unpatched vulnerabilities cause many breaches. A 2022 study found that 60% of the breaches analyzed resulted from the exploitation of a known, unpatched vulnerability. Moreover, other studies show that breaches stemming from known vulnerabilities can be more costly than other attack types, such as phishing.

Fueled by LLMs, vulnerability exploitation has just become faster and cheaper.

So what should security leaders take away from this study?

  1. Vulnerability disclosure could change (for better or for worse): given this news, how should the industry approach vulnerability disclosure? This research adds another dimension to the ongoing debate over how vulnerabilities should be disclosed. Some in the community may decide to slow disclosure, which could have unintended consequences.
  2. Reducing remediation time is more important than ever. The mean time to remediate (MTTR) vulnerabilities at many companies is still too long - and often exceeds the service level agreements (SLAs) defined by company stakeholders. With LLMs able to exploit known vulnerabilities from the contents of their disclosures alone, teams need to quickly understand the impact of any new vulnerability and, for vulnerabilities that pose critical risk, be able to formulate a remediation campaign in hours, not days (a minimal MTTR sketch follows this list).
  3. Security teams need to fight AI with AI: our CEO Merav Bahat recently told an audience at Fortune London that AI needs to be turned from a threat into an opportunity for security teams. While LLMs can now be used for vulnerability exploitation, they can also be used to make vulnerability prioritization and remediation faster and more effective than ever before.
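To make point 2 concrete, here is a minimal sketch of how a team might measure MTTR and flag SLA breaches from its own vulnerability records. The field names, SLA windows, and sample data are illustrative assumptions, not a prescribed schema or an industry standard.

```python
from datetime import datetime, timedelta

# Illustrative SLA windows by severity (an assumption for this sketch).
SLA_BY_SEVERITY = {
    "critical": timedelta(hours=24),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
}

# Hypothetical remediation records: (CVE ID, severity, detected, remediated).
records = [
    ("CVE-2024-0001", "critical", datetime(2024, 4, 1, 9, 0), datetime(2024, 4, 3, 9, 0)),
    ("CVE-2024-0002", "high", datetime(2024, 4, 2, 12, 0), datetime(2024, 4, 6, 12, 0)),
    ("CVE-2024-0003", "medium", datetime(2024, 4, 5, 8, 0), datetime(2024, 4, 20, 8, 0)),
]

# Mean time to remediate across all records.
durations = [fixed - found for _, _, found, fixed in records]
mttr = sum(durations, timedelta()) / len(durations)
print(f"MTTR: {mttr}")

# Flag any vulnerability whose remediation time exceeded its SLA window.
for cve, severity, found, fixed in records:
    if fixed - found > SLA_BY_SEVERITY[severity]:
        print(f"SLA breach: {cve} ({severity}) took {fixed - found}")
```

Tracking this continuously, rather than in quarterly reports, is what makes an hours-not-days remediation target enforceable.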

Dazz has been innovating with LLMs for a while now. The Dazz Unified Remediation Platform leverages LLMs to generate automatic fixes and actionable remediation guidance when direct fixes haven't been identified. Beyond identifying the fix, Dazz can automatically identify and notify owners, speeding the remediation process significantly.
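The sketch below is not Dazz's implementation; it only illustrates the general pattern of asking an LLM for remediation guidance, assuming an OpenAI-style chat completion API. The model name, prompts, and finding fields are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical finding pulled from a scanner; fields are illustrative.
finding = {
    "cve": "CVE-2021-44228",
    "package": "org.apache.logging.log4j:log4j-core",
    "installed_version": "2.14.1",
    "asset": "payments-service",
}

# Ask the model for concise, actionable remediation guidance.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a remediation assistant. Reply with concrete upgrade or mitigation steps only."},
        {"role": "user",
         "content": (
             f"Vulnerability {finding['cve']} affects {finding['package']} "
             f"{finding['installed_version']} on {finding['asset']}. "
             "Suggest the safest fix and an interim mitigation."
         )},
    ],
)
print(response.choices[0].message.content)
```

In practice, the model's suggestion would be routed to the asset's owner for review rather than applied automatically.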

If you're interested in seeing how Dazz uses LLMs to help customers drastically reduce vulnerability remediation times, contact us today!

See Dazz for yourself.

Get a demo