The Future of Threat Modeling with AI
A new era is unfolding for cybersecurity.
But what does the future of AI hold for threat modeling as we know it?
It's a pretty big question isn’t it. But that doesn’t mean we shouldn't ask.
The four steps to threat modeling already encourages us to ask ‘what can go wrong?’ and ‘what can we do about it?’. We can apply this to using AI for threat modeling.
What could go wrong? But also, what could go right?! There will be many repeatable tasks that us humans won’t have to do anymore, thanks to AI. Giving us more time to spend on innovative ideas, creative thinking, and focusing on achieving strategic goals, without getting bogged down in time-consuming tactical outputs.
How can AI help threat modeling processes?
Expedite your security engine: AI is great at increasing efficiency, a bit like automated threat modeling over manual processes. This technology can help organizations to expedite results by understanding the security contexts and outputs faster. Perhaps you need code reviewed, again, AI can be your helpful assistant to provide further verification.
Got data? Got a lot of data? Machine learning is able to consume and analyze vast amounts of data for you, in a repeatable and rapid manner. These algorithms can analyze large datasets quickly and accurately, identifying potential threats more effectively than manual methods. Be prudent though, as these results should still be verified within your threat modeling or other security processes, to ensure that any potential misinformations can be mitigated.
Powerful predictions at your fingertips: AI can learn from past threats to predict and prevent future attacks, enhancing overall cybersecurity posture. It is by its nature, outcome-specific. This is a great addition to your threat modeling, which also looks at potential threats and outcomes based on the context of your architecture.
The darker side of AI for Threat Modeling
It certainly isn’t all sunshine and rainbows. And you cannot see a rainbow without some rain. So what AI downpours should we be aware of?
Added vulnerabilities from an AI ‘back door’: AI systems themselves can be targeted by adversarial attacks, where malicious actors manipulate inputs to deceive the system or compromise security that is utilizing AI, especially when integrated into broader security systems and applications.
Built-in bias: this has been a common topic of discussion across industries. AI algorithms may reflect biases present in training data and information, this can potentially lead to outcomes that overlook certain types of threats.
Compliance considerations. Has your AI had access to any private data or personal information? Could it result in a compliance breach? This is why it is imperative to stop and consider what information you are feeding into artificial intelligence, not just for regulatory purposes, but to protect the privacy of your customers, users, partners, and the overall company. The last thing your oganization needs is leaked information.
AI and Threat Modeling Fireside Chat
Recently we had three speakers discussing the transformative impact of generative models and AI on threat modeling. With Daniel Cuthbert, Global Head of Cyber Security Research, Stephen de Vries, IriusRisk CEO, and Fraser Scott, IriusRisk VP of Product. They explored AI’s pivotal role in reshaping threat modeling practices, and even shared a sneak preview of our in-product AI ‘Jeff’, (fresh out of the development oven).
Useful AI Resources
Learn about AI depending on roles, requirements and what considerations to take in your broader security, thanks to our suite of helpful blogs and resources: AI/ML and Threat Modeling Blogs.