
A Primer on AI for Security Remediation

Generative AI is not ready to fully automate threat detection and remediation processes. Still, it is worth considering for its potential to dramatically improve security and development teams' productivity.

Generative artificial intelligence (AI) is everywhere. It makes creators of content far more efficient, but in doing so, it opens the door to an array of abuses. Schools and content publishers around the world are now grappling with how they can ensure that work belongs to the supposed creator. And artists across all genres are battling AI-powered plagiarism.

That same dichotomy of pros and cons carries over to the realm of cybersecurity. As society struggles to determine how to ethically leverage generative AI, security teams must understand that these technologies create new risks to corporate networks. They can make what would be clunky attempts at phishing appear much more sophisticated, and they can dramatically increase the volume of threats a bad actor is capable of perpetrating in a given period of time.

At the same time, though, generative AI technologies provide new opportunities for recognizing and responding to attacks. Security professionals who leverage them can be better prepared and build more sophisticated defenses. In cloud environments, where attacks happen rapidly, AI has the potential to help security teams keep pace with attackers, or even head them off before they do any damage.


Can autonomous cloud security remediation work?

One way organizations can leverage generative AI in protecting corporate assets is by deploying autonomous cloud security remediation workloads. The idea of autonomous remediation, however, generates a negative reaction among many security professionals.

They look back on their organization’s recent history and see too many instances in which patches and other vulnerability remediation efforts broke the very applications they were intended to protect. They do not want to head down that road again and worry that ceding control of remediation efforts to a machine increases the chance of doing just that.

In fact, security professionals who have to choose between patching a vulnerability in a way that risks creating a problem with the underlying application, and leaving the vulnerability as is, often choose the latter. For proof of this attitude, look no further than bank ATMs. Many still run on the Windows XP operating system because the bank’s security team chooses not to risk upgrades that would fix vulnerabilities but might prevent the ATM from functioning properly.

Artificial intelligence has the potential to automate cloud security remediation, but turning remediation efforts over to AI requires more trust than most security teams have to offer. Even human remediations would make most security professionals nervous if they didn't go through the company's typical, human-run testing and quality assurance (QA) processes, so it is no wonder that an automated approach to remediation is met with skepticism. But human testing and QA for every vulnerability fix is costly, and the prospective cost of failing to remediate those problems is even higher.


A compromise position is available. The latest generative AI capabilities accelerate the resolution of vulnerabilities without removing human judgment from the process. They analyze huge amounts of data and provide guidance on which remediation techniques would be most likely to close a particular security gap, without actually executing the fix themselves. Thus, although they save developers the time they would otherwise spend on analyses, these tools enable them to review suggested code fixes and determine whether they make sense in the context of the company's codebase.

This scenario carries the same risk of an erroneous change to the code as any other process that relies on human decision-making. The machine can make a mistake in its recommendations, and a developer has the opportunity to catch and correct that error. In fact, with the AI tool performing the behind-the-scenes analysis work, the humans involved have considerably more time to consider and catch any potential problems with the suggested code fix. And because all the back-end analysis processes are machine-managed, there is no longer a risk of human error in the mechanics of the analysis itself.
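The human-in-the-loop pattern described above can be sketched in a few lines of code. This is purely illustrative and not Dazz's implementation; the `FixSuggestion` structure and `review_suggestion` function are hypothetical names chosen for the example. The key property is that the machine only proposes, and nothing is applied without an explicit human decision.

```python
from dataclasses import dataclass


@dataclass
class FixSuggestion:
    """A machine-generated remediation proposal awaiting human review."""
    vulnerability_id: str
    file_path: str
    proposed_patch: str
    rationale: str


def review_suggestion(suggestion: FixSuggestion, approve: bool) -> str:
    """The human decision gate: no fix is applied without approval."""
    if approve:
        return f"{suggestion.vulnerability_id}: patch queued for CI/QA"
    return f"{suggestion.vulnerability_id}: rejected, returned to analysis"


# Hypothetical example: the AI drafts a fix, the developer decides.
suggestion = FixSuggestion(
    vulnerability_id="CVE-2023-0001",
    file_path="app/Dockerfile",
    proposed_patch="FROM python:3.11-slim  # was python:3.8",
    rationale="Newer base image carries the patched library version",
)
print(review_suggestion(suggestion, approve=True))
```

Approved suggestions still flow through the organization's normal CI and QA pipeline, which is what keeps the risk profile the same as a human-authored change.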

How Dazz Unified Remediation Platform improves human productivity

This concept of partially automated, and much more efficient, cloud remediation is exactly what Dazz Unified Remediation Platform is designed to support. Dazz does not fully remove the human touch from code fixes, but it uses AI to automate the biggest pain points in the process, from vulnerability detection to resolution.

The Dazz Unified Remediation Platform engine connects to an organization’s existing security tools and pipelines using application programming interface (API) integrations. Through these connections, it pulls into the Dazz platform extensive data on internally developed code, artifacts, deployments, and cloud environments. Then the solution orchestrates, triages, prioritizes, and consolidates data coming from the entire code-to-cloud pipeline, including applications, security controls, and cloud production and development environments. It tracks the entire resolution process and delivers business insights in natural language.

Machine learning and generative AI come into play when Dazz Remediation Engine discovers shared root causes across security alerts coming from any and all of the different applications and platforms to which it connects. The Dazz platform includes the OpenAI GPT large language model (LLM). Its powerful LLM capabilities, combined with an AI-driven ability to understand context, enable Dazz to automatically extract deeper and more nuanced insights from unstructured data sources.
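To make the idea of shared root causes concrete, here is a minimal, hypothetical sketch of grouping alerts from multiple tools by a common key. Dazz's actual correlation logic is proprietary and far richer; in this toy version the root-cause key is simply the (artifact, package) pair, and the alert fields are invented for the example.

```python
from collections import defaultdict


def group_by_root_cause(alerts):
    """Group alerts from many scanners by a shared root-cause key.

    Here the key is simply (artifact, package); a real engine would
    derive it from much deeper code-to-cloud context.
    """
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["artifact"], alert["package"])].append(alert["id"])
    return dict(groups)


# Three alerts from two different tools collapse into two root causes.
alerts = [
    {"id": "A1", "tool": "cspm", "artifact": "api-img", "package": "openssl"},
    {"id": "A2", "tool": "sca", "artifact": "api-img", "package": "openssl"},
    {"id": "A3", "tool": "sca", "artifact": "web-img", "package": "log4j"},
]
print(group_by_root_cause(alerts))
```

The payoff is that fixing one root cause (for example, upgrading `openssl` in the `api-img` base image) clears every alert in that group at once, instead of each alert being triaged separately.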

As a result, Dazz Unified Remediation Platform is able to recommend steps developers should take to resolve the root causes that it uncovers for alerts. For example, Dazz can efficiently suggest successful remediations for infrastructure as code (IaC) alerts, for which the development team may have large numbers of possible fixes to choose from. The Dazz Remediation Cloud data platform includes a knowledge graph that contextualizes the IaC alerts and correlates specific incidents with remediation strategies accumulated over time.

Currently, Dazz Unified Remediation Platform is leveraging GPT-4 in private preview. Dazz will continue to explore how best to harness the power of generative AI and LLMs to streamline cloud vulnerability remediation.

Automated remediation is a risk management issue

Concerns about automated remediation are, ultimately, not a cloud security problem but a risk management problem. And because Dazz Unified Remediation Platform keeps its automation strictly behind the scenes, it provides code fix suggestions that the security and development teams can trust.

Consider a container running in a production environment that is susceptible to a particular common vulnerabilities and exposures (CVE) entry. In order to resolve the vulnerability, the security team needs to determine which applications run on top of that container, and what kinds of dependencies they have, to link what is happening in the cloud environment to how it is represented in the code. They need to find the origins of the artifact via a root cause analysis. Then, when they have those answers, they can turn their insights over to the development team, who can fix the problem, upgrade the package, change the configuration in code, redeploy the application, and verify that the change fixed the vulnerability.
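The workflow above can be laid out as an ordered sequence of steps. The sketch below is illustrative only; each step is a stub standing in for real scanners, build systems, and deployment tooling, and the function and step names are invented for the example. The point is the order: trace to code, fix, redeploy, verify.

```python
def remediate_container_cve(cve_id: str, container: str) -> list[str]:
    """Return the remediation steps for a container CVE, in order.

    Each step is a placeholder for real tooling (SCA, CI/CD, scanners);
    only the sequencing is meant to be taken literally.
    """
    return [
        f"trace {container} back to its source artifact and repository",
        f"identify the dependency or configuration introducing {cve_id}",
        "propose a code or config fix for developer review",
        "redeploy the application after the fix is merged",
        f"rescan {container} to confirm {cve_id} is resolved",
    ]


for step in remediate_container_cve("CVE-2023-0001", "payments-api"):
    print("-", step)
```

Note that the loop closes with a rescan: a remediation only counts as done when the original finding no longer appears in the environment.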

These are clearly not trivial tasks - but that's the beauty of incorporating generative AI into the process. Dazz Unified Remediation Platform automates the collection of all the needed data. This is not as simple as it may sound; it involves a lot of nuance. Rather than having a single correct answer to each root cause and fix, development teams are often working to determine the "most right" answer for the particular circumstances of the organization, and cloud app, in question.

As Dazz is trained with more and more context, its AI capabilities begin to provide better results. Those improved results enable the security and development teams to deploy applications more safely, making sure to eliminate the known vulnerability. And then, after the fix is implemented, the security team can rescan the environment to see whether the vulnerability is truly gone.

The limitations of generative AI

Generative AI is amazing, but it cannot be expected to automatically fix all of a company's vulnerabilities - at least not in the very near future. An analysis of CVE data performed by The Stack indicates that 26,448 software security flaws were reported in 2022, with the number of critical vulnerabilities up 59% from 2021, to 4,135. In order to independently remediate so many issues, an AI model would need to be retrained on new data every single day. This makes it problematic to build a robust model scalable enough for the full-automation use case.


Instead, generative AI in cloud security remediation will likely develop along a similar evolutionary path to that of autonomous vehicles. It will first be introduced with strong guardrails - i.e., providing recommendations while leaving humans to actually implement the fixes - until it builds widespread trust. The industry will take a few years to reach that point.

In the meantime, tools like Dazz Unified Remediation Platform accelerate the processes around vulnerability remediation. That frees up security and development staff for other value-added work and is a core benefit of exploring partially automated remediation.


October 31, 2023
