This section provides an overview of selected methods for the use of expert judgement in uncertainty analysis; these methods are reviewed in more detail in Section 11.3 and Annexes B.8 and B.9.
All scientific assessment involves the use of expert judgement (Section 5.9). The Scientific Committee stresses that where suitable data are available, they should be used in preference to relying solely on expert judgement. When data are strong, uncertainty may be quantified by statistical analysis, and any additional extrapolation or uncertainty addressed by ‘minimal assessment’ (EFSA, 2014a), or collectively as part of the assessment of overall uncertainty (Section 14). When data are weak or diverse, it may be better to quantify uncertainty by expert judgement, supported by consideration of the data.
Expert judgement is subject to a variety of psychological biases (Section 5.9).
Formal approaches for ‘expert knowledge elicitation’ (EKE) have been developed to counter these biases and to manage the sharing and aggregation of judgements between experts. EFSA has published guidance on the application of these approaches to eliciting judgements for quantitative parameters (EFSA, 2014a). Some parts of EFSA's guidance, such as the approaches to identification and selection of experts, are also applicable to qualitative elicitation, but other parts, including the detailed elicitation protocols, are not. Methods have been described for the use of structured workshops to elicit qualitative judgements in the NUSAP approach (e.g. van der Sluijs et al., 2005a,b; Bouwknegt and Havelaar, 2015), and these could also be adapted for use with other qualitative methods.
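As a rough illustration of how such qualitative workshop judgements might be summarised, the sketch below aggregates hypothetical NUSAP-style pedigree scores; the criteria names, scale and scores are invented for this example and are not taken from the cited studies.

```python
from statistics import median

# Hypothetical pedigree scores from four experts for one evidence source,
# on a 0 (weak) to 4 (strong) scale; criteria names are illustrative only.
scores = {
    "proxy":                 [3, 2, 3, 4],
    "empirical basis":       [2, 2, 1, 2],
    "methodological rigour": [3, 3, 2, 3],
    "validation":            [1, 0, 1, 2],
}

# Summarise each criterion by the median of the experts' scores,
# a simple aggregation that is robust to single outlying judgements.
for criterion, expert_scores in scores.items():
    print(f"{criterion}: median score = {median(expert_scores)}")
```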
The detailed protocols in EFSA (2014a) can be applied to judgements about uncertain variables, as well as parameters, if the questions are framed appropriately (e.g. eliciting judgements on the median and the ratio of a higher quantile to the median). EFSA (2014a) does not address other types of judgements needed in EFSA assessments, including prioritising uncertainties and judgements about dependencies, model uncertainty, categorical questions, approximate probabilities or probability bounds. More guidance on these topics, and on the elicitation of uncertain variables, would be desirable in future.
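To illustrate the framing mentioned above, the following minimal sketch (hypothetical elicited values, and assuming a lognormal distribution for a strictly positive variable) shows how an elicited median and an elicited ratio of the 95th percentile to the median can be converted into distribution parameters.

```python
import math
from scipy.stats import norm, lognorm

# Hypothetical elicited judgements (illustrative values only)
median_x = 2.0    # elicited median of the uncertain variable
ratio_p95 = 3.0   # elicited ratio of the 95th percentile to the median

# Assuming a lognormal distribution, log(X) ~ Normal(mu, sigma):
# the median of X is exp(mu), and P95/median = exp(z95 * sigma),
# where z95 is the 95th percentile of the standard normal.
mu = math.log(median_x)
sigma = math.log(ratio_p95) / norm.ppf(0.95)

dist = lognorm(s=sigma, scale=math.exp(mu))
print(dist.ppf([0.05, 0.50, 0.95]))  # implied 5th, 50th and 95th percentiles
```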
Formal elicitation requires significant time and resources, so it is not feasible to apply it to every source of uncertainty affecting an assessment. This is recognised in the EFSA (2014a) guidance, which includes an approach for prioritising parameters for formal EKE and ‘minimal assessment’ for more approximate elicitation of less important parameters. Therefore, in the present guidance, the Scientific Committee describes an additional, intermediate process for ‘semi‐formal’ expert elicitation (Section 11.3.1 and Annex B.8).
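To make the idea of prioritisation concrete, the sketch below ranks parameters by a crude one-at-a-time measure of influence; the model, bounds and values are hypothetical, and this is not the specific prioritisation approach of EFSA (2014a). Parameters whose plausible range moves the output most would be the strongest candidates for formal EKE.

```python
# Hypothetical assessment model and parameter bounds (illustrative only)
def exposure(intake, concentration, body_weight):
    return intake * concentration / body_weight

bounds = {
    "intake":        (0.5, 2.0),    # kg/day
    "concentration": (1.0, 10.0),   # mg/kg
    "body_weight":   (60.0, 80.0),  # kg
}
baseline = {"intake": 1.0, "concentration": 3.0, "body_weight": 70.0}

# One-at-a-time swing: vary each parameter across its range while
# holding the others at baseline, and record the resulting output range.
swings = {}
for name, (lo, hi) in bounds.items():
    outputs = [exposure(**dict(baseline, **{name: value})) for value in (lo, hi)]
    swings[name] = abs(outputs[1] - outputs[0])

# Parameters with the largest swing are candidates for formal EKE.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: output swing = {swing:.3f}")
```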
It is also important to recognise that further scientific judgements will generally be made, usually by a Working Group of experts preparing the assessment; these are referred to in this document as ‘expert group judgement’. Normal Working Group procedures include formal processes for selecting relevant experts and for the conduct, recording and review of discussions. These processes address some of the principles for EKE. Chairs of Working Groups should be aware of the potential for the psychological biases mentioned above and seek to mitigate them when managing the discussion (e.g. discuss ranges before central estimates, encourage consideration of alternative views).
In practice, there is not a dichotomy between more and less formal approaches to EKE, but rather a continuum. Individual EKE exercises should be conducted at the level of formality appropriate to the needs of the assessment, considering the importance of the assessment, the potential impact of the uncertainty on decision‐making, and the time and resources available.