Threats in the requirements phase¶
As mentioned in the introduction, finding threats is central to specifying security requirements. During the requirements phase, we are deciding what to build, not yet how to build it. Threats and security requirements at this point will not be very technical, except for the technical requirements dictated by the environment and by compliance. Finding threats is a brainstorming process that requires knowledge and creativity. Not everybody has this knowledge, and brainstorming sessions can quickly become chaotic. A structured approach helps us to find threats without losing the overview, and lets us set boundaries that tell us when we have done enough (exit criteria). I will first discuss the STRIDE model, which is very useful for such an approach. Then I will sketch a structured approach for finding security requirements at all levels: the business level, the user interaction level, and the system level.
STRIDE¶
STRIDE is an acronym developed by Microsoft to help you think about threats at an abstract level. The acronym describes six attack types: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. While the original paper by Kohnfelder and Garg [KohnfelderGarg] talks about attack categories, I believe it is difficult to categorize attacks 1. This does not stop us from using STRIDE as a brainstorming tool. In fact, the overlap between the attack types increases our chances of finding a threat: if one attack type does not make us think of a threat, then perhaps another one will.
Every attack type corresponds to a security property that the attack threatens [Shostack].
The table below shows the attack types and their corresponding security properties.
Attack type | Description | Security Property
---|---|---
Spoofing | pretending to be someone else | Authentication
Tampering | modifying data (in transit or in place) | Integrity
Repudiation | denying a certain claim | Non-repudiation
Information disclosure | seeing what you are not allowed to see | Confidentiality
Denial of service | making something unavailable | Availability
Elevation of privilege | doing what you are not allowed to do | Authorization
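To make the brainstorm concrete, the attack types can be turned into a small checklist that you walk through per operation. Below is a minimal sketch in Python; the prompts are my own paraphrases of the table above, not an official STRIDE artefact, and the example operation is hypothetical.

```python
# A minimal STRIDE checklist: each attack type maps to the security
# property it threatens and a prompt to trigger the brainstorm.
STRIDE = {
    "Spoofing":               ("Authentication",  "Can someone pretend to be someone else here?"),
    "Tampering":              ("Integrity",       "Can data be modified in transit or in place?"),
    "Repudiation":            ("Non-repudiation", "Can a party deny having performed this operation?"),
    "Information disclosure": ("Confidentiality", "Can someone see data they are not allowed to see?"),
    "Denial of service":      ("Availability",    "Can someone make this operation unavailable?"),
    "Elevation of privilege": ("Authorization",   "Can someone do something they are not allowed to do?"),
}

def brainstorm(operation: str) -> None:
    """Print the STRIDE prompts for a single operation."""
    print(f"Operation: {operation}")
    for attack, (prop, prompt) in STRIDE.items():
        print(f"  {attack} (threatens {prop}): {prompt}")

brainstorm("publish product catalogue")
```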
Finding business level security requirements¶
At the business process level, the central question is: “what are we afraid of?”. Some methods for finding threats suggest looking at your attackers and assets — the things you want to protect — but while this may help to create general security awareness, it is less useful for finding threats [Shostack]. Asking what we are afraid of triggers people to come up with potential attack scenarios, in other words, threats. The more knowledge and creativity you can put into this process, the more threats you will find. But some of the project stakeholders may be unaware of what an attacker can do. STRIDE can help here, at least to explore ‘what-if’ scenarios. Let me give an example. I once attended a security awareness session at a company that publishes a product catalogue yearly. Most employees were quite relaxed about the security of the catalogue: if a hacker steals the catalogue a few days before publication, people will know the prices earlier, which is not a problem. When pointed to the possibility that the same hacker could modify those prices without anybody noticing, their attitude changed, because this would mean losing money! STRIDE would have clarified this very quickly.
Finding user interaction level security requirements¶
At the user interaction level, we want to know which security properties should hold when the system executes some part of its functionality. For example, do we need authentication or authorization for a particular operation? The STRIDE properties can help us here.
[Shostack] describes the ‘STRIDE requirements’ approach. In the Framework Secure Software [FFS] I described how to apply this systematically to a system’s functional requirements. Here I will further refine the approach.
The STRIDE requirements analysis needs the functional requirements of the system. The form of these functional requirements is irrelevant, as long as it is clear which actors exist in the system and what they must be able to do. Working with incomplete requirements will yield incomplete security requirements, so we must first check whether our requirements are complete. We can use several checks.
Checking requirements completeness¶
We can analyse requirements in many ways: for correctness, for conflicts, or for whether they are SMART 2. For completeness, you can use the checks that follow. There may be other or better methods, so do not regard this as a complete (forgive the pun) list.
All actors¶
First of all, did we mention all actors? We may have forgotten non-essential users of the system, such as administrators, or left them out to keep the requirements simple. But if the system must offer functionality for those users, we must include them as actors and capture what they do as functional requirements.
CRUD check¶
Next, do we know everything about the information in the system? We easily forget the more mundane operations on information. The CRUD check is a quick method to find missing operations: for every object in the system, we check whether and how the system must create, read, update, or delete it. Sometimes such a check does not make sense, but it may just point us at a missing requirement. For example, if we require a user to authenticate with a client certificate, we may not have specified how to create that certificate. Maybe some other system creates it, or maybe we have overlooked this operation.
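To illustrate, a CRUD check can be made systematic with a small script that walks an object/operation matrix and reports the gaps. The objects and the operations they cover below are hypothetical examples, not taken from a real specification.

```python
# Sketch of a CRUD completeness check: for every object, report which of
# create/read/update/delete the requirements do not mention.
OPERATIONS = ("create", "read", "update", "delete")

# Hypothetical requirements: object -> operations that are specified.
requirements = {
    "user account":       {"create", "read", "update", "delete"},
    "client certificate": {"read"},            # who creates or revokes it?
    "product price":      {"read", "update"},
}

for obj, covered in requirements.items():
    missing = [op for op in OPERATIONS if op not in covered]
    if missing:
        print(f"{obj}: no requirement for {', '.join(missing)}")
```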
Alternative flows¶
Third, did we consider alternative flows? If we assume that the system always follows a certain flow, the implementation may be incorrect under certain circumstances. In the worst case, an attacker is able to create such circumstances and abuse the implementation. Specify preconditions and consider what can go wrong.
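To illustrate the point, a precondition that is written down in the requirements can also be enforced explicitly, so that an unexpected flow fails safely instead of becoming exploitable. The order-shipping example below is hypothetical.

```python
# Hypothetical example: the requirement "an order is only shipped after it
# has been paid" becomes an explicit precondition check instead of an
# assumption about the normal flow.
class OrderError(Exception):
    pass

def ship_order(order: dict) -> None:
    # Precondition from the requirements; reject the alternative flow
    # (shipping an unpaid order) instead of silently allowing it.
    if order.get("status") != "paid":
        raise OrderError("precondition violated: order must be paid before shipping")
    print(f"shipping order {order['id']}")

ship_order({"id": 42, "status": "paid"})       # normal flow
# ship_order({"id": 43, "status": "created"})  # alternative flow: rejected
```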
Analysing operations with STRIDE requirements¶
For every action or operation in the system, ask what security properties you want, or equivalently, what threats you do not want. Below, I have listed questions for every STRIDE property, and suggested some mitigations. I have based the questions on [Shostack] and [FFS].
Spoofing¶
Authentication proves that you are who you say you are, or who you are expected to be. It requires identification as a basis for the authentication method. Ask the following:
Do we need authentication for this operation? Consider both directions (must Alice authenticate to Bob, and Bob to Alice?).
What kind of authentication? Single-factor or multi-factor, and how strong?
What is the basis for the authentication, i.e. how do we identify users before we give them access to the system? Note: this may be out of your software’s scope.
The most common authentication form is user credentials, such as a username and password. The popularity of web applications and the need to connect them has led to single sign-on systems, which delegate authentication to another party via technologies such as SAML, OpenID Connect, and JWT.
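As an illustration of token-based authentication, the sketch below issues and verifies a signed token with the PyJWT library. The secret and the claims are placeholders; in a real single sign-on setup the token would be issued by the identity provider and verified with its public key.

```python
# Minimal sketch of token-based authentication using the PyJWT library
# (pip install pyjwt); the secret and claims are illustrative only.
import jwt  # PyJWT

SECRET = "change-me"  # in practice: a strong key from a secrets store

# The login endpoint (or identity provider) issues a signed token...
token = jwt.encode({"sub": "alice", "role": "editor"}, SECRET, algorithm="HS256")

# ...and every protected operation verifies the signature before trusting the claims.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], "is authenticated with role", claims["role"])
```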
Tampering¶
An attacker can compromise a system’s integrity by changing the content of data (e.g. changing an amount in a bank transfer) or by making the data invalid (e.g. changing the structure of the data). Relevant questions are:
What conditions exist regarding data integrity before, during or after this operation?
Does the data need to adhere to a certain format?
The specification of data formats translates directly to input handling security requirements. Cryptographic checksums or other integrity protections can prevent data from being changed.
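As an illustration of such an integrity protection, the sketch below attaches an HMAC to a message and verifies it on receipt, using only the Python standard library. The key and the message are placeholders; in practice the key comes from a key management system.

```python
# Sketch of an integrity check with an HMAC (Python standard library).
import hmac, hashlib

KEY = b"shared-secret-key"  # placeholder
message = b'{"transfer": {"to": "NL91ABNA0417164300", "amount": 100}}'

# The sender attaches a checksum to the message...
tag = hmac.new(KEY, message, hashlib.sha256).hexdigest()

# ...and the receiver recomputes it over the received message and
# compares the two in constant time.
expected = hmac.new(KEY, message, hashlib.sha256).hexdigest()
if not hmac.compare_digest(tag, expected):
    raise ValueError("message was tampered with")
print("integrity check passed")
```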
Repudiation¶
Non-repudiation prevents someone from denying a certain claim. While non-repudiation is a legal problem, technology can support its solution. Ask the following:
Is it important to record/log this operation?
Does a party need to confirm this operation?
Logging provides non-repudiation, but the logging party has to be trusted. Digital signatures require the other party to confirm the operation and are therefore stronger.
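As a sketch of the stronger option, the example below lets a party sign a confirmation with the 'cryptography' package, so that the signature can be verified later. Key management is omitted; the keys are generated on the fly purely for illustration.

```python
# Sketch of non-repudiation with a digital signature, using the 'cryptography'
# package (pip install cryptography). In practice the signer's private key is
# provisioned and protected separately.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

confirmation = b"I, Alice, approve transfer #4711"
signature = private_key.sign(confirmation)   # Alice confirms the operation

try:
    public_key.verify(signature, confirmation)   # anyone can check this later
    print("confirmation is authentic: Alice cannot deny it")
except InvalidSignature:
    print("signature invalid")
```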
Information disclosure¶
Information can leak via the software’s functionality (e.g. a screen showing social security numbers), via communication (e.g. transmission of a password) or via storage (e.g. a secret key stored in a file). Ask the following questions:
Is the information that is processed confidential? What about metadata, or the fact that the operation is executed at all?
Who is allowed to see the information or metadata?
Possible mitigations are encrypting data, hiding data (e.g. steganography), or not exposing the data in the first place (e.g. via access control or zero-knowledge proofs).
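As an illustration of the first mitigation, the sketch below encrypts a confidential value with Fernet from the 'cryptography' package. Key handling is simplified and the data is a made-up example; in practice the key lives in a key management system, not next to the data.

```python
# Sketch of protecting confidential data with symmetric encryption,
# using Fernet from the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"social security number: 123-45-6789")
print(ciphertext)                 # safe to store or transmit
print(f.decrypt(ciphertext))      # readable only with the key
```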
Denial of service¶
Availability is always a bit tricky to specify. Since operations are often combined in one system, they share the availability risk for most denial of service attacks. Therefore you can often refer to a general service level agreement (SLA), which states the allowed downtime of the system. However, some denial of service attacks target a single operation in the system; in that case, the availability risk may differ. Ask the following:
What are the losses if the operation cannot execute? How long before this becomes unacceptable?
What are alternative ways of performing the operation (a fallback strategy)?
Note that a denial of service attack is not only sending a lot of data to an application. Triggering many operations in the software, or performing a specific operation with cleverly manipulated input, can make it consume resources as well. These latter attack forms are more relevant to software security than the former, which can be mitigated at the network infrastructure level. Denial of service attacks can be mitigated by limiting the consumption of resources, by redundant implementations, and by fallback systems.
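As an illustration of limiting resource consumption, the sketch below implements a simple token bucket that caps the number of operations a caller may trigger per time window. The rate and capacity are arbitrary example values; real deployments usually combine this with limits on request size and processing time.

```python
# Sketch of a token bucket rate limiter: each caller gets a budget of
# operations that refills over time.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to the elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # roughly 5 operations per second per caller
for i in range(12):
    print(i, "allowed" if bucket.allow() else "rejected")
```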
Elevation of privilege¶
Authorization checks whether something is allowed. This requires authentication of some sort, to establish who wants to perform an operation.
Role-based access control (RBAC) grants access based on a user’s role. However, authorization is more complicated than that: I may be allowed to perform a certain operation, but not on all objects in the system. Therefore, we need object authorization as well. Object authorizations are easy to forget. I have even seen them missing in the money transfer operations of e-banking applications, allowing you to steal money from someone else’s bank account (by the way: if you want to try this, be aware that it is probably a crime, and that banks often have other ways to mitigate these problems, for example fraud detection).
In attribute-based access control (ABAC), we generalize the authorization to attributes other than role and object, such as GPS coordinates.
Relevant questions are:
What role(s) do I need to perform this operation?
On what objects can I perform this operation?
What conditions need to hold for other attributes (from user, object or environment)?
Mitigations are as simple as implementing authorization checks (on roles, objects, and other attributes), or configuring privileges.
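As an illustration, the sketch below combines the role check with the object check that is so easily forgotten. The roles, permissions, and account data are hypothetical.

```python
# Sketch of an authorization check that combines the role check (RBAC) with
# an object-level check: may this user act on this particular account?
ROLE_PERMISSIONS = {
    "customer": {"view_account", "transfer_money"},
    "support":  {"view_account"},
}

def authorize(user: dict, operation: str, account: dict) -> bool:
    # Role check: is the operation allowed for this role at all?
    if operation not in ROLE_PERMISSIONS.get(user["role"], set()):
        return False
    # Object check: is this user allowed to act on this specific account?
    return account["owner"] == user["id"]

alice = {"id": "alice", "role": "customer"}
print(authorize(alice, "transfer_money", {"owner": "alice"}))  # True
print(authorize(alice, "transfer_money", {"owner": "bob"}))    # False: omitting this check enables theft
```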
Who grants the permission may also be important. In discretionary access control (DAC), the owner of the data or the action can grant permission, whereas in mandatory access control (MAC), it is the system itself.
Finding system level security requirements¶
System level requirements come from the environment and compliance to regulations. We do not need a lot of creativity here, but it is good to check if what we have is complete. We will need to think about:
what regulations and laws apply to the software
what security requirements are dictated by the environment:
- what external systems do we need to interact with?
- what are their security expectations?
- what security does the environment cover?
As we make more progress in our development, our requirements become more detailed, and this will yield more system level security requirements.
Security assumptions¶
[Shostack] mentions that specifying non-requirements — things that the system won’t do — is just as important as specifying the requirements. In [FFS] these are called security assumptions. Security requirements and assumptions tell the users of a system whether it matches their security needs and what additional security measures they should take. Good places to list the security assumptions are the user manual and the installation manual.
Summary¶
To find threats in the requirements phase, use a structured brainstorm. For threats at the business level, ask what we are afraid of and consider the different STRIDE attack types. For threats at the user interaction level, identify the desired security properties via the STRIDE requirements approach. Identify system level requirements by looking at the environment and the regulations that apply to your system. Document the potential security requirements that you will not fulfil as security assumptions.
Footnotes