
Critical incident technique

Aim of the tool
Gathering facts (incidents) from domain experts or less experienced users of the existing system to gain knowledge of how to improve the performance of the individuals involved.

When to use it?
This tool is useful at the beginning or halfway stage of an intervention process, or at the end for evaluation purposes.

How difficult is it to use it?
Easy to moderate for experienced users/facilitators

Tool for thought or tool for action?
Tool for thought

Flexible method that can be used to improve multi-user systems. Data are collected from the respondents' perspective and in their own words; the method does not force respondents into any given framework.

Identifies rare events that might be missed by other methods, which focus only on common, everyday events. Useful when problems occur but their cause and severity are not known. Inexpensive and provides rich information. Highlights the features that make a system particularly vulnerable, and addressing them can bring major benefits (e.g. to safety).

Issues to be aware of

  • A first problem comes from the type of reported incidents. The critical incident technique relies on users remembering events and reporting them accurately and truthfully. Because of this reliance on memory, critical incidents may be imprecise or may even go unreported.
  • The method has a built-in bias towards incidents that happened recently, since these are easier to recall.
  • Respondents may not be accustomed to, or willing to take, the time to tell (or write) a complete story when describing a critical incident. Because the method is based on incidents, it says nothing about everyday situations and so is not very representative.
  • Interviewees are assured of anonymity so that they can describe how a project operates or what another person did without being disclosed as the source of information.

Description of the tool
Critical incident technique is a method of gathering facts (incidents) from domain experts or less experienced users of the existing system to gain knowledge of how to improve the performance of the individuals involved. Critical incidents are short descriptions of experiences that have particular meaning to the practitioner. In this context, the word ‘critical’ means ‘of crucial importance’. These experiences can be used as the basis for the critical incident technique – a tool that can be used systematically to examine, reflect and learn from positive and negative incidents.

End users are asked to identify specific incidents which they experienced personally and which had an important effect on the final outcome. The emphasis is on incidents rather than vague opinions. The context of the incident may also be elicited. Data from many users is collected and analysed.

The critical incident technique is used to look for the cause of human-system (or product) problems to minimize loss. The investigator looks for information on the performance of activities (e.g. project tasks) and the user-system interface. The investigator may focus on a particular incident or set of incidents which caused serious loss. Critical events are recorded and stored in a database or on a spreadsheet. Analysis may show how clusters of difficulties are related to a certain aspect of the system or human practice. Investigators then develop possible explanations for the source of the difficulty.
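The paragraph above describes recording critical events in a database or spreadsheet and looking for clusters of difficulties tied to one aspect of the system. A minimal sketch of that tallying step is shown below; the field names (`system_aspect`, `severity`) and the sample records are hypothetical, not prescribed by the technique itself.

```python
# Sketch: tally recorded incidents by the aspect of the system they involve,
# so that clusters of difficulties become visible. Records are illustrative.
from collections import Counter

incidents = [
    {"system_aspect": "search", "severity": "high", "description": "results list froze"},
    {"system_aspect": "search", "severity": "low", "description": "filter label unclear"},
    {"system_aspect": "login", "severity": "high", "description": "password reset failed"},
]

# Count incidents per system aspect; the largest cluster suggests where
# investigators should look first for an explanation.
clusters = Counter(i["system_aspect"] for i in incidents)
for aspect, count in clusters.most_common():
    print(f"{aspect}: {count} incident(s)")
```

In practice the same grouping can be done with a pivot table in a spreadsheet; the point is simply that each incident carries a coded attribute that analysis can cluster on.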

The method is useful when problems occur within a system but their cause (and sometimes their severity) is not known. However the method also takes account of helpful events that may have prevented loss or countered errors. The method generates a list of good and bad behaviours which can then be used for performance appraisal.

The critical incident technique was first devised and used in a scientific study almost half a century ago (Flanagan, 1954). It was further developed by Chell (1998), through an unstructured interview to capture the thought processes, the frames of reference, and the feelings about an incident that have meaning for the respondent. In the interview, respondents are required to give an account of what the incident means for them in relation to their life situation.

Example

Objective: Evaluate how measures for knowledge sharing among different development projects are implemented in an intergovernmental development agency.

Method: Telephone/Skype interviews using the critical incident technique.

Antecedent: Collaborators of the international agency have lived through many situations of relative “isolation”, i.e. not having immediate access to knowledge, experience and support of peers in situations of increased stress and need.

Action in the incident: A high-pressure situation happens to a colleague in which vital personal and institutional interests require that he or she has quick access to all kinds of knowledge available in the organisation; yet time constraints do not allow for extensive search or selection procedures, so the knowledge provided has to be selective, relevant and focused.

Reaction: The person has to mobilise all possible channels and sources of knowledge, especially “instant” ways of procurement (no long research efforts) and those relying on the “selfless” assistance of peers.

For the full example:

Steps needed when using the tool
You can apply the critical incident technique by interviewing users or by getting them to complete a paper form. The user is requested to follow the three stages described below:

  • focus on an incident which had a strong positive influence on the result of the interaction
  • describe the incident and what led up to it
  • describe how the incident helped in the successful completion of the interaction.

It is usual to request two or three such incidents, but at least one should be elicited. When this has been done, the procedure is repeated and this time the user is asked to focus on incidents which had a strong negative influence on the result of the interaction and to follow the above formula to place the incidents in context. User responses will show some variation in the number of positive and negative incidents.

It is usual to start with a positive incident in order to set a constructive tone with the user.

If context is well understood, or time is short, the method may be stripped down to the basics and the user will only be required to do the first part i.e. focus on describing the positive and negative critical incidents.

In an interview situation the user can be corrected if they reply with generalities rather than tying themselves to a specific incident. This is more difficult to control if you are using a written form, so ensure that the introductory instructions are clear.

When you have gathered a sufficient quantity of data, you should be able to categorise the incidents and produce a relative importance weighting for each; some incidents will happen frequently and some less frequently.
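The step above, categorising incidents and deriving a relative importance weighting from their frequencies, can be sketched as follows. The category labels are hypothetical examples; in a real study they would come from your own coding of the collected incidents.

```python
# Sketch: derive a relative importance weighting for each incident category
# from how frequently it occurs. Category labels are illustrative only.
from collections import Counter

# One coded category per collected incident.
categories = ["navigation", "navigation", "feedback", "navigation", "terminology"]

counts = Counter(categories)
total = sum(counts.values())

# Weight each category by its share of all incidents.
weights = {category: n / total for category, n in counts.items()}
for category, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{category}: weight {weight:.2f}")
```

More frequent categories receive higher weights, which gives a first, purely frequency-based ranking; severity could be factored in separately if the incident records carry it.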

For a summative evaluation, you should collect enough critical incidents to enable you to make statements such as "x percent of the users found feature y in context z helpful/unhelpful."
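A summative statement of the kind quoted above can be computed directly from the incident records. The sketch below assumes hypothetical records with `feature`, `context` and `helpful` fields; the feature name "autosave" and the context "editing" are invented for illustration.

```python
# Sketch: compute a summative statement of the form
# "x percent of users found feature y in context z helpful".
# All records below are hypothetical.
reports = [
    {"user": "u1", "feature": "autosave", "context": "editing", "helpful": True},
    {"user": "u2", "feature": "autosave", "context": "editing", "helpful": True},
    {"user": "u3", "feature": "autosave", "context": "editing", "helpful": False},
    {"user": "u4", "feature": "autosave", "context": "editing", "helpful": True},
]

# Select the incidents that concern this feature in this context.
relevant = [r for r in reports if r["feature"] == "autosave" and r["context"] == "editing"]

pct_helpful = 100 * sum(r["helpful"] for r in relevant) / len(relevant)
print(f"{pct_helpful:.0f}% of users found autosave helpful while editing")
```

The same filter-then-count pattern applies for unhelpful incidents, or for any other feature/context pair in the data set.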

For a formative evaluation, you should collect enough contextual data around each incident so that the designers can place the critical incidents in scenarios or use cases.

Sources and further reading