One of the points is that the cost of getting a cross-functional team together for a threat modeling activity can be high. If you’re a startup working on a single product, then a fully manual threat modeling exercise every few weeks is probably practical and cost-effective – but that approach runs into problems of scale when there are 500+ products all undergoing continuous modification. If threat modeling is to become an integral part of product development, then the cost has to be addressed.
Accuracy vs. resources required
If you’ve been involved in some threat modeling sessions, you’ve probably had the experience that identifying the first batch of threats for any system is usually quick and easy, and whoever is at the whiteboard has their hands full keeping track of the threats identified by the team. Let’s call this the easy part; at this stage a big-picture view of the architecture is usually sufficient.
Then the threat torrent slows down – and the discussion speeds up, with more disagreement about what the architecture actually is and which threats are real enough to worry about. This is the hard part, and to do it well we need:
- A clear and accurate view of the architecture
- An understanding of the business goals and constraints
- The technical security know-how to identify threats in the architecture
Optimising the easy part for speed
Step 1 of any threat modeling method is to understand what you’re modeling. Having an accurate view of the architecture is essential to building an accurate threat model. But since I’d like to focus on optimising the easy part of the modeling activity, I’d ask a different question: how simple an architectural model can we get away with and still build a valuable threat model? My contention is that for this part we don’t need a detailed architecture diagram; a simpler textual description would be good enough, e.g.:
“We’re building a mobile payment system using REST services on a service tier built on Spring Boot, talking to a relational DB, both deployed on AWS, and we’re using a thick client on the mobile device where we’re also storing the payment card details. Cardholder data is processed on the service tier and stored in the DB. Users are authenticated using…etc.”
Once we have this description of the architecture we can then work on optimising the identification of threats, and here checklists and templates can be very useful. They work well for this phase because what makes the threats easy to identify is that they’re generally obvious and well known for the given architecture. For example, the OWASP ASVS project is an excellent list of controls to implement for Web and Mobile applications. Although it’s not a threat model as such, it is a good list of controls and it’s easy to derive the corresponding threats for each control by simply appending: “…because if we don’t…” to every control statement.
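The “…because if we don’t…” trick is mechanical enough to automate. A minimal sketch (not an official ASVS tool; the control texts below are paraphrased examples, not verbatim ASVS items):

```python
# Derive threat statements from checklist-style control statements by
# asking what happens if each control is absent.

controls = [
    "Verify that all authentication controls are enforced on the server side",
    "Verify that session tokens are invalidated on logout",
    "Verify that sensitive data is protected in transit with TLS",
]

def control_to_threat(control: str) -> str:
    """Turn a control statement into its implied threat by appending
    '...because if we don't...' to the statement."""
    return control + " ...because if we don't..."

for threat in (control_to_threat(c) for c in controls):
    print(threat)
```

The output is a starting list of threats whose relevance still has to be judged against the actual architecture – which is exactly the cost discussed next.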
The challenge with these blanket checklists, which cast wide nets over broad architectures, is that the time to generate the checklist is short (or near zero), but the time to find out whether the threats apply to your architecture is longer, because many may not be relevant. And if you (as a member of the security team) offload this extra work onto your colleagues in the dev teams, you may start spending capital that’s in short supply.
A step up from this is to build smaller templates that apply to more specific parts of different architectures and can be re-used across threat models – Excel, or copy and paste on a wiki, could provide the tooling here.
Our approach with IriusRisk takes this concept further, using even smaller risk patterns that apply to components and to specific uses of those components, and assembling these patterns with a rules engine driven by a questionnaire on the front end. Risk patterns are essentially building blocks that the rules assemble to generate the model. They allow us to group threats together based on where we usually find them.
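To make the idea concrete, here is a toy sketch of risk patterns assembled by rules keyed on questionnaire answers. The pattern names, threats, and answer keys are my own illustration, not IriusRisk’s actual pattern library or rule syntax:

```python
# Small reusable blocks of threats ("risk patterns"), selected by rules
# that inspect answers to an architecture questionnaire.

RISK_PATTERNS = {
    "rest-service": [
        "Broken authentication on API endpoints",
        "Injection via unvalidated request parameters",
    ],
    "relational-db": [
        "SQL injection",
        "Unencrypted sensitive data at rest",
    ],
    "mobile-client": [
        "Sensitive data cached on the device",
        "Reverse engineering of the client binary",
    ],
}

# Each rule pairs a predicate over the answers with the pattern it selects.
RULES = [
    (lambda a: a.get("exposes_rest_api", False), "rest-service"),
    (lambda a: a.get("stores_data_in_rdbms", False), "relational-db"),
    (lambda a: a.get("has_mobile_client", False), "mobile-client"),
]

def assemble_model(answers: dict) -> list:
    """Apply every rule to the questionnaire answers and collect the
    threats from each matching risk pattern into an initial model."""
    threats = []
    for predicate, pattern in RULES:
        if predicate(answers):
            threats.extend(RISK_PATTERNS[pattern])
    return threats
```

For the mobile payment description above, answering yes to the REST and relational-DB questions would yield four candidate threats before anyone draws a diagram – cheap, but only as accurate as the patterns and answers.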
This means that with a short description of the architecture and a set of assembled templates, we can get an initial threat model at very low cost. This is a conscious trade-off between accuracy and cost. The result can then determine whether we need to spend more time on manual modeling – or whether the risk profile is such that we don’t need any further architectural analysis.
The hard part
We still need human intelligence and collaboration for the hard part: firstly to actually agree on and understand the architecture, and then to identify threats in that architecture. Diagrams and a workshop with a cross-functional team are indispensable here – and Adam’s blog entry has a thorough list of the benefits of this approach. (It’s incredible that a threat modeling workshop is sometimes the first time that the stakeholders and development team have come together to discuss the architecture face to face.) These workshops are the only way to find the interesting threats that don’t fit into any template or checklist.
Risk patterns can also save time during this phase when we find a familiar architectural pattern. For example, if after extensive discussion we determine that there’s an undocumented administrative login to our REST-based services, we can simply look up the patterns that match that scenario: the pattern for single-factor authentication on a web service, and the pattern for a web service that provides administrative access.
And once we have a threat model, what then?
Once we have a prioritised list of threats, risk responses and our planned controls – we need to do something with it. Managing this on a wiki, in Jira, or even in Excel is feasible for a small number of products, but we quickly run into problems scaling such a management system beyond a few tens of models. This is another area where I believe that the right tooling can help us get a handle on managing and tracking the status of identified risks as they progress through the development process.
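Tooling aside, the tracking problem itself is simple to state: each threat carries a risk response and a status that moves through the development lifecycle. A hypothetical sketch of such a record (the state names and fields are my own illustration, not any particular tool’s schema):

```python
# One row of a threat-tracking system: a threat, its agreed risk
# response, and its current state in the development lifecycle.

from dataclasses import dataclass

STATES = ["identified", "response-agreed", "control-implemented", "verified"]

@dataclass
class ThreatRecord:
    description: str
    response: str = "mitigate"   # e.g. accept / mitigate / transfer / avoid
    state: str = "identified"

    def advance(self) -> None:
        """Move to the next lifecycle state, stopping at 'verified'."""
        i = STATES.index(self.state)
        if i < len(STATES) - 1:
            self.state = STATES[i + 1]
```

A wiki or spreadsheet can hold a handful of these; the value of dedicated tooling is keeping thousands of them consistent and queryable across hundreds of products.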
I view tooling as a complement to manual approaches – it can play an important role in reducing the costs of some parts of the threat modeling process, notably the easy-to-identify risks associated with architecture choices – but it cannot entirely replace thinking. Tools can also play an important role in managing the sheer volume of data generated by a threat model and in tracking the state of that data over time. For complex systems, and those that require a higher level of security assurance, diagrams and collaboration are indispensable. Tooling here can help cover the bases and free up more of our time for that valuable thinking work.