In today’s digital landscape, it has become increasingly difficult to address the multitude of cybersecurity risks threatening organizations. This is what we solve at Dazz. Our end-to-end integrated approach gives us an understanding of risk at every stage of the software lifecycle. By integrating with the security tools an organization already uses, we can monitor and analyze the diverse findings that arise during development, deployment, and production.
However, our goal goes beyond observation: we want to systematically automate the remediation of these risks.
Undertaking this challenge comes with its fair share of complexity: many moving parts and intricate interactions between them. To succeed, we need to harness powerful tools, and OpenAI’s GPT is one of them.
At Dazz, machine learning and data science play an important role in our technology stack. The emergence of large language models (LLMs) has given us an exciting opportunity to extend our existing ML capabilities. GPT’s language-modeling strength, combined with its ability to understand context and carry out complex language tasks, broadens what our data science work can achieve. By integrating GPT into our workflows, we can extract deeper and more nuanced insights from unstructured data sources.
Dazz has always been, at its core, a powerful data platform that delivers remediation insights. We see GPT as an exceptional automation tool for that platform, particularly in scenarios with a vast number of possibilities. One example is remediating Infrastructure as Code (IaC) alerts, where the number of potential remediation actions, the “action space,” is large: a single finding, such as a publicly accessible storage bucket, can often be fixed in several different ways and in several different places in the code.
To navigate this expansive IaC remediation action space, we have developed an approach that combines our data platform’s knowledge graph with the capabilities of GPT.
Through meticulous curation and continuous learning, we distill the essence of each alert and contextualize it within our knowledge graph, which serves as a repository of insights and remediation strategies accumulated over time.
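As a rough illustration, the kind of curated, non-sensitive context a knowledge graph can attach to a single IaC finding might look like the sketch below. The `IaCFindingContext` type, its field names, and the example values are hypothetical stand-ins for this post, not our actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class IaCFindingContext:
    """Curated, non-sensitive context for a single IaC alert.

    Field names are illustrative; they stand in for the kinds of entities
    a knowledge graph might attach to a finding.
    """
    rule_id: str                # scanner rule identifier
    resource_type: str          # e.g. "aws_s3_bucket"
    framework: str              # e.g. "terraform"
    file_path: str              # location of the offending definition (kept internal)
    description: str            # human-readable summary of the issue
    known_remediations: list[str] = field(default_factory=list)  # strategies accumulated over time


# Example entry with generic placeholder values rather than real customer data.
example_finding = IaCFindingContext(
    rule_id="EXAMPLE_RULE_S3_LOGGING",
    resource_type="aws_s3_bucket",
    framework="terraform",
    file_path="modules/storage/main.tf",
    description="S3 bucket access logging is not enabled",
    known_remediations=[
        "Add a logging block pointing at a dedicated log bucket",
        "Enable logging via the organization's shared storage module",
    ],
)
```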
By querying GPT with the entities from our knowledge graph, we can extract valuable insights and obtain actionable remediation details, in context, without exposing customer data to third parties.
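Continuing the sketch above, the query step might look something like the following. The `suggest_remediation` helper and the prompt structure are illustrative assumptions built on the hypothetical `IaCFindingContext`, shown here with the OpenAI Python client; only curated knowledge-graph metadata goes into the prompt, never raw customer code or data:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_remediation(finding: IaCFindingContext) -> str:
    # Build the prompt only from curated knowledge-graph entities;
    # internal details such as file paths stay inside the platform.
    prompt = (
        f"An IaC scanner raised rule {finding.rule_id} on a "
        f"{finding.resource_type} resource defined in {finding.framework}.\n"
        f"Issue: {finding.description}\n"
        f"Known remediation strategies: {'; '.join(finding.known_remediations)}\n"
        "Recommend the most appropriate fix and show the configuration change."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a cloud security remediation assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content


print(suggest_remediation(example_finding))
```

Restricting the prompt to knowledge-graph entities is what lets the model reason about the finding in context while the customer’s repositories and data remain inside the platform.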
We are rolling out our first version, based on GPT-4, in private preview. This is only the tip of the iceberg in exploring what LLMs can do and the unique insights they can generate on top of the Dazz data platform. We are collecting data to improve the model and to optimize it for additional use cases.
At Dazz, we’re committed to remediating cloud and application security issues, and we keep improving our product to make that process easier, faster, and more cost-effective for our customers. GPT is powerful, but it is not a silver bullet; it is a force multiplier that amplifies our capabilities. The real value lies in analyzing code-to-cloud pipelines, and LLMs augment that process.
Our unique position as a fully integrated platform allows us to capitalize on the potential of LLMs to deliver actionable remediation.