
Building in security instead of testing it in

Eliciting security requirements needs a different process

Written by Edward van Deursen, Jan Jaap Cannegieter

There are numerous techniques for eliciting and documenting functional requirements. We all know interviews, workshops, questionnaires and studying documentation. And haven’t we all stood in front of a whiteboard drawing the first version of a use case, activity diagram or entity-relationship diagram? Let’s be honest: with these techniques we are well able to elicit and document requirements and build in quality from the start in most projects. But somehow we often fail to elicit and document security requirements.

The consequence is that security is not built into the system. At best, the security issues are found during testing, which is always more time-consuming and more expensive than building security in from the start. Or, even worse, the system goes into production with security issues. Security that is bolted on later is not naturally integrated with the rest of the functionality. This leads to a sub-optimal configuration and software whose maintenance costs are negatively affected.

An example of a requirement for a machine-to-machine interface within a telecom company: “The system shall provide the billing system with the ability to provide telephone call information”. Compare this with the following requirement: “The system shall provide the trusted billing system with the ability to request telephone call information”. Adding the single word “trusted” means a totally different approach to designing and developing this interface. It also leads to a range of requirements at system and process level. A system administrator is then not allowed to connect just any system to this interface. Adding this single word to the requirement prevents an untrusted marketing system from using the interface later in production to gather information about customer behaviour in a way that does not comply with the privacy policy.
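To make the “trusted” qualifier concrete, here is a minimal sketch of how such an interface check could look. All names (the allow-list, the functions) are illustrative assumptions, not part of the original requirement text.

```python
# Sketch of a "trusted system" check for the call-information interface.
# Only systems on an explicit allow-list may request call records.

TRUSTED_SYSTEMS = {"billing"}  # the trusted consumers of this interface

def lookup_calls(phone_number: str) -> list[str]:
    # Stand-in for the real call-record data store.
    return [f"call record for {phone_number}"]

def request_call_info(system_id: str, phone_number: str) -> list[str]:
    """Return call records only to systems on the allow-list."""
    if system_id not in TRUSTED_SYSTEMS:
        # Fail securely: refuse untrusted callers instead of returning data.
        raise PermissionError(f"system '{system_id}' is not trusted")
    return lookup_calls(phone_number)
```

With this check in place, an untrusted marketing system calling the interface is refused rather than silently served.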

Most security requirements we see are too high-level. For example: “The system shall comply with OWASP top 10 and ISO/IEC 27002.” First of all, you definitely don’t want the OWASP top 10 vulnerabilities [Reference 10] in your system! And ISO/IEC 27002 is a guideline for information security that covers, among other things, a security policy, screening of personnel, and physical and human security. That is probably far more than intended.

On the other hand, we sometimes see overly detailed technical information that still misses the most important points, such as: “The system shall include a firewall”. A firewall is a solution that still needs further requirements to be properly configured, and a phishing e-mail can still pass through a firewall. In this case the product owner wants something more than a firewall: behind the word “firewall” lies a range of security measures and requirements to prevent the leaking of confidential information. Information security follows the principle of “defence-in-depth”, which means that security is provided by several layers. Some layers are complementary while others overlap. Security measures can prevent, detect or correct an incident, or support recovery after an incident. Proper risk management, including threat modelling and an inventory of valuable assets, is most of the time not done.
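Defence-in-depth can be pictured as independent checks applied in sequence, so that a gap in one layer is covered by the others. A toy sketch, with layer names and values chosen purely for illustration:

```python
# Toy sketch of defence-in-depth: a request must pass every layer,
# and any single "no" blocks it. Layer names are illustrative.

def network_filter(request: dict) -> bool:
    # Prevention layer: block known-bad source addresses.
    return request.get("source_ip") not in {"203.0.113.9"}

def authenticated(request: dict) -> bool:
    # Second layer: the caller must present a valid token.
    return request.get("token") == "valid-token"

def authorised(request: dict) -> bool:
    # Third layer: the caller's role must be permitted.
    return request.get("role") in {"customer", "administrator"}

LAYERS = [network_filter, authenticated, authorised]

def allow(request: dict) -> bool:
    # All layers must agree before the request is served.
    return all(layer(request) for layer in LAYERS)
```

A firewall corresponds to only the first of these layers; the requirement behind it spans all of them.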

Why is it so hard to elicit and document security requirements? The essence of the problem is that functional requirements are in most cases positive requirements: they state what people DO want, not what they DON’T want. People have a natural tendency to say what they want, but often don’t mention the things they don’t want. The second reason is that eliciting security requirements means looking at your product in a different way: how can people use the product in a non-intended way to reach other goals? In both cases requirements engineers and stakeholders have to look differently at the system to be developed: in a more negative, critical and sometimes destructive way. The well-known elicitation techniques don’t work for security requirements in most cases. We keep asking what the stakeholders want, how the system should be used and what the intended use of the system is. With these practices, the security requirements will be incomplete at best.

Asking the obvious but unasked questions

There is another reason to focus more on security requirements. New European legislation is in progress [References 1, 2] regarding the obligation to report breaches. As a consequence a system must detect when it is compromised and the organisation must respond to these alerts. So, while eliciting requirements we should ask questions like:

  • Do you want a hacker to be able to spoof your web application?
  • Do you want a user to change the price accidentally?
  • Do you want an administrator to cover his mistakes by changing the log files?
  • Do you want data to be changed by a disgruntled employee?
  • Do you want the one and only administrator login account to be blocked accidentally or on purpose by a hacker?
  • Do you want a customer to change his own privileges in the system?

A stakeholder’s answer to these questions is most probably: “Of course not!” According to the Kano model [References 3, 4], such unwanted ‘features’ lead to stakeholder dissatisfaction. These unwanted behaviours are so obvious that we don’t ask about them and don’t write them down. This leaves it up to the developer to decide on security features, and most of the time those features don’t meet the stakeholders’ security requirements. Security is a basic need and must be made explicit.

Other kinds of use cases

A good technique for eliciting and documenting security requirements is the use of abuse cases, misuse cases and confuse cases [References 5-9]. These use cases make unexpected, unintended (mistakes by confused users) and intended (abuse/misuse) behaviour explicit. Although these use cases aren’t new and have many benefits, they are still not commonly used.

An abuse case describes actions an outsider can perform to breach a system: steal valuable assets and information, damage a system or bring it down. Even sabotage, espionage and other criminal actions are included. The actor can be any threat agent: a criminal, a terrorist or even a nation state. An example: someone executes a Denial-of-Service (DoS) attack on your web shop.

The question is not if your system will be breached, but when. When most people think of security, they think threats come from outside the organisation. As we know, insiders can also harm the organisation or damage the system.

The misuse case is slightly different: here a legitimate user deliberately uses the system in an inappropriate way, misusing his privileges or the functionality of the system. Most of the time this is hard to detect, because the person knows how the system works and can cover his tracks.

The confuse case describes a legitimate user (insider) performing an inappropriate task that causes damage, without knowing the (negative) consequences. In a way, the system isn’t foolproof.

Type         | Actors and actions                                 | Example
Use case     | Insiders doing appropriate tasks                   | Customer is paying his bill
Abuse case   | Outsiders trying to breach the system              | Criminal executes a phishing attack to gather credentials
Misuse case  | Insiders doing inappropriate tasks intentionally   | Administrator steals log files with credit card info and social security numbers
Confuse case | Insiders doing inappropriate tasks unintentionally | User reads a malicious e-mail on a PC without antivirus software or malware protection

Figure 1: The use case family


In more detail

Let’s discuss the differences between use cases, abuse cases, misuse cases and confuse cases by means of the following example. Suppose we want to sell products on the internet (business need) and therefore we create a web shop (solution).

Figure 2: Example of Online Shop System use case diagram

First we will discuss the use case, and then we will add abuse, misuse and confuse cases to show the differences.

A normal description for the use case ‘Review Product’ could be:

  1. Customer enters login information
  2. System displays product menu
  3. Customer selects product
  4. System displays product details
  5. Customer enters review
  6. Customer logs out

Example of an abuse case scenario

An example of an abuse case is a hacker adding malicious content to our web shop. This can be done in several ways, for example by adding a link to a malicious website or adding an infected picture.

Basic flow of Adding Malicious Content

  1. Hacker creates a malicious website
  2. Hacker creates a fake or temporary email address
  3. Hacker creates an account
  4. System sends activation link to email address
  5. Hacker activates account
  6. Hacker enters login information
  7. System displays product menu
  8. Hacker selects most sold product
  9. System displays product information
  10. Hacker adds review with link to malicious web site
  11. Hacker logs out

This abuse case is followed by the normal use case Browse Products, where a visitor or customer browses to the most sold product, clicks on the malicious link and gets infected. The same can happen when the administrator clicks the link in the Maintain Product Catalogue use case.
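One security requirement that follows from this abuse case is that review content must be treated as plain text, so an injected link or script tag is displayed rather than executed. A minimal sketch (the function name is an illustrative assumption) using Python’s standard `html.escape`:

```python
import html

def render_review(review_text: str) -> str:
    """Escape user-supplied review content before it reaches the page.

    Stored markup such as <a href="..."> is neutralised into harmless
    text, mitigating the "malicious link in a review" abuse case.
    """
    return html.escape(review_text)
```

For example, `render_review('<b>hi</b>')` yields `&lt;b&gt;hi&lt;/b&gt;`, which a browser shows as literal text instead of rendering. This is one mitigation layer; input validation on submission would be a complementary one.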

Example of a misuse case scenario

Similar to the abuse case Add Malicious Content, an administrator can add content in a different way and change the log files to cover his actions. For example, the following misuse case of Add Malicious Content:

  1. Administrator creates a malicious picture of a product
  2. Administrator enters login information
  3. System displays product maintenance menu
  4. Administrator selects product
  5. System displays product details
  6. Administrator uploads malicious picture to product
  7. System logs upload in log file
  8. Administrator logs out
  9. Administrator changes the log file by deleting the upload entry
  10. Administrator changes the date/time of the log file

When an administrator accidentally uploads a malicious picture from the internet, she will probably follow steps 2 to 7.

Example of a confuse case scenario

Confuse cases will most of the time involve alternative steps within the normal use cases. A good practice for finding security flaws in a system is to look at all conditions. In our opinion, most functional security issues are caused by not defining explicitly what must happen when a condition is not true, or by error handling that does not fail securely, leaving the system in a vulnerable state. For example, in an If…Then…Else construction the Else situation is not defined, and we rely on the programmer to solve this issue.
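The If…Then…Else point can be made concrete with a small sketch. The role names are illustrative assumptions; the point is that every branch, including the Else, is defined, and anything unexpected is denied by default.

```python
# Sketch of "failing securely": the Else branch is explicit,
# so an unrecognised role is denied rather than allowed by omission.

def may_delete_order(role: str) -> bool:
    if role == "administrator":
        return True
    elif role == "sales":
        # Sales staff may change orders, but not delete them.
        return False
    else:
        # Explicit Else: any unexpected role is denied. Without this
        # branch, the behaviour would be left to the programmer.
        return False
```

A requirement stating only the If branch leaves the Else undefined; writing the Else down as a requirement removes that gap.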

An example of a confuse case, part of the basic flow of Change Order:

  1. Sales staff enters login information
  2. System displays order menu
  3. Sales staff selects order
  4. System displays order details
  5. Sales staff selects option ‘delete all orders’
  6. System logs deleted records in log file
  7. System deletes all orders from database
  8. Sales staff logs out

In this case the sales staff member wanted to delete all orders of one customer, but deleted all orders of all customers, ending up with an empty order database. Confuse cases are about input validation, error handling and misinterpretation of legitimate actions. Even adding a step ‘System asks for confirmation’ to the confuse case above wouldn’t help if the user doesn’t properly understand the consequences of the delete function.
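A requirement derived from this confuse case could constrain the delete function to a single customer’s orders, so one confused action cannot empty the whole database. A minimal sketch, with a hypothetical data model:

```python
# Sketch: deletion is scoped to one customer. The data model
# (a list of order dicts) is an illustrative assumption.

orders = [
    {"customer": "alice", "id": 1},
    {"customer": "alice", "id": 2},
    {"customer": "bob", "id": 3},
]

def delete_orders_for(customer: str, order_db: list[dict]) -> list[dict]:
    """Delete only the given customer's orders; the rest stay untouched."""
    if not customer:
        # Fail securely: an empty selector deletes nothing, not everything.
        raise ValueError("a specific customer must be selected")
    return [o for o in order_db if o["customer"] != customer]
```

Scoping the operation is more robust than a confirmation dialog, because it limits the damage even when the user misunderstands what will happen.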

For security requirements we look from different perspectives to find out what should happen and what shouldn’t happen to valuable assets and processes. Possible perspectives are:

  • User point of view
  • Process (owner) point of view
  • Data (owner) point of view (data in transit and data at rest)

Abuser stories, misuser stories and confuse stories

For user stories we can use this method as well: write down what an actor wants or doesn’t want to happen, making explicit which valuable assets and processes need to be protected from abusers, misusers and confusers.

You can write in either a positive or negative way, for example:

  • As a customer I want to have my credentials protected
  • As a customer I don’t want my credentials to be public
  • As an administrator I want to detect abuse of the system
  • As an administrator I don’t want the system to be misused
  • As a controller I want data to be protected from unintended manipulation
  • As a controller I don’t want data to be manipulated unintentionally by users

The Holy Grail

Is the technique of making abuse, misuse and confuse cases the Holy Grail for getting a secure system? Can it replace, or is it better than, technical analysis, a code review or a penetration test? No, it is not. Technical analysis is a good start when building a system. During development, secure coding principles and coding standards must be used. A code review, security tests and finally a penetration test will complete the development process. And once the system is live, security operations must be in place to keep it secure. Even when the system is taken out of production, security measures must be taken. Every action is part of an integrated approach to achieving a secure system: they are complementary. But a secure system starts with a good and complete set of requirements. Abuse, misuse and confuse cases help you elicit security requirements, at the start of a project and during operation. The elicitation of security requirements by developing abuse cases, misuse cases and confuse cases must be integrated into our way of working as requirements engineers. It is another tool to add to our toolbox. And most importantly: the tool needs to be used!

Eliciting abuse, misuse and confuse cases

One last issue concerning the use of abuse cases, misuse cases and confuse cases is the timing and setting of the elicitation. Functional requirements and security requirements should not be elicited in the same session or interview. Before you can elicit abuse, misuse and confuse cases, you need a global view of the solution and the functional requirements; without this input the security requirements can’t be made explicit. Secondly, the elicitation of security requirements requires a completely different mindset: instead of answering questions about what the system should do, you have to answer questions about what the system shouldn’t do and shouldn’t be capable of. That requires a completely different way of thinking. Thirdly, you need some extra stakeholders to be involved in eliciting security requirements, such as security experts and people with no knowledge of the processes and system to be developed. In practice it works well to elicit abuse cases, misuse cases and confuse cases in separate sessions or sets of interviews, after you have elicited the functional requirements. In some cases the elicitation of security requirements can be combined with the elicitation of other non-functional requirements.

With the use of abuse cases, misuse cases and confuse cases, security and security-related requirements can be elicited and documented very effectively. It is difficult to guarantee completeness, but combined with an analysis of known vulnerabilities such as the OWASP top 10 [References 10, 11, 12] and, for example, Microsoft’s STRIDE threat model [Reference 13], security can be built in instead of tested in. And this always leads to better and cheaper results.

Background information

If you want to know more about abuse, misuse and confuse cases, you can find more information in the cited references.

In addition to the described technique of abuse, misuse and confuse cases, there are a number of other techniques; it is impossible to describe them all in this article. Another interesting method is described in the Framework Secure Software [Reference 14], where a distinction is made between requirements on the system itself (security requirements) and requirements and preconditions on the environment and users (security assumptions). In this way the requirements to build into the system are made explicit, and the owner of the system knows what else is needed to make the system secure.

Here are some more references to other techniques for eliciting security requirements:

  • Mellado, D., Blanco, C., Sánchez, L. E., & Fernández-Medina, E. (2010). A systematic review of security requirements engineering. Computer Standards & Interfaces, 32(4), 153-165.
  • Iankoulova, I., & Daneva, M. (2012, May). Cloud computing security requirements: A systematic review. In Research Challenges in Information Science (RCIS), 2012 Sixth International Conference on (pp. 1-7). IEEE.
  • Souag, A., Salinesi, C., & Comyn-Wattiau, I. (2012, January). Ontologies for security requirements: A literature survey and classification. In Advanced Information Systems Engineering Workshops (pp. 61-69). Springer Berlin Heidelberg.

Used references


