Methodologies have been proposed to open up the box of a given quantification, exploring the context, purpose, motivation and stakes behind a number, especially when it results from some kind of modelling process, and to provide pedigrees for discussing, in a participatory fashion, the relative merit of different proposed quantifications.
Among these, NUSAP is a notational system for the participatory analysis of the quality of quantification. It rests on five categories for characterizing any quantitative statement: Numeral, Unit, Spread, Assessment and Pedigree. Numeral is in general an ordinary number, Unit refers to the units in which it is expressed, and Spread is an assessment of the error, usually based on the statistical characteristics of the data. The next two categories extend the above to produce a judgment on the quality of the quantification and of the team producing it. Assessment is a summary of salient qualitative judgements about the number, and may involve terms such as 'conservative' or 'optimistic'. Pedigree is a reasoned judgment about the mode of production and the anticipated use of the information. Both Assessment and Pedigree are meaningful in the context of participatory analysis.
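As a purely illustrative sketch of how the notation can travel alongside a number in practice, the Python snippet below encodes a NUSAP-qualified statement as a small data structure. The field names follow the five categories; the example values (an emission estimate, its spread, and a set of pedigree scores) are hypothetical and not drawn from any published assessment, and averaging pedigree scores is shown only as one possible summary, not a prescribed procedure.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class NusapStatement:
    """A quantitative statement qualified with the five NUSAP categories."""
    numeral: float            # the number itself, e.g. a point estimate
    unit: str                 # the units in which the numeral is expressed
    spread: str               # error assessment, e.g. a statistical interval
    assessment: str           # salient qualitative judgement, e.g. 'conservative'
    pedigree: Dict[str, int]  # criterion -> score on an agreed ordinal scale


# Hypothetical example: an emission estimate qualified by a review panel.
voc_emissions = NusapStatement(
    numeral=30.0,
    unit="kilotonnes per year",
    spread="+/- 10 (two standard deviations)",
    assessment="conservative",
    pedigree={"proxy": 3, "empirical basis": 2, "method": 3, "validation": 1},
)

# One possible summary: the mean pedigree score as a crude strength indicator.
strength = sum(voc_emissions.pedigree.values()) / len(voc_emissions.pedigree)
print(f"{voc_emissions.numeral} {voc_emissions.unit}, pedigree strength {strength:.2f}")
```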
NUSAP was introduced by Silvio Funtowicz and Jerome Ravetz in the 1990 book Uncertainty and Quality in Science for Policy[1] and has been extensively applied by several investigators, among whom Jeroen van der Sluijs and co-workers have led the most relevant applications[2], [3]. NUSAP is especially tailored to scientific work at the science-policy interface, and has notably been used in Europe by the European Food Safety Authority (EFSA). Recent applications of NUSAP include climate science, hydrology, medical research and risk assessment.
Sensitivity auditing (SAUD) extends sensitivity analysis (SA) by zooming out from the merely technical, mathematical or statistical dimensions of a model and reflecting on its implicit or explicit assumptions, interests, and values[4]. SAUD takes models to be more than the translation of natural or human laws into lines of computer code or algorithms. Models are taken instead to reflect different visions of the world, of the nature of the problem tackled, and of the preferred end-in-sight. Sensitivity auditing is inspired by the epistemologies of post-normal science, for which uncertainty, quality, and values belong together in the use of science to tackle practical problems in society, the environment, and human health. Sensitivity auditing assumes settings where uncertainties, interests, and values are at stake in decisions based on different forms of quantification[5]. It is explicitly participative, and offers its practitioners a solid basis for both constructing and deconstructing cases at the science-policy interface.
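To make concrete the purely technical layer that SAUD zooms out from, the sketch below performs the kind of exercise a variance-based sensitivity analysis entails: estimating first-order Sobol' indices for a toy model with a pick-freeze (Saltelli-style) estimator. The model, input ranges and sample size are hypothetical and chosen only for illustration; nothing about the framing, interests or values behind such a model is visible at this level, which is precisely the gap sensitivity auditing addresses.

```python
import numpy as np


def model(x):
    """Hypothetical toy model: additive effects plus a small interaction."""
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]


rng = np.random.default_rng(42)
n, k = 100_000, 3  # Monte Carlo sample size and number of inputs

# Two independent samples of the inputs, uniform on [0, 1] (pick-freeze design).
A = rng.uniform(size=(n, k))
B = rng.uniform(size=(n, k))

y_A = model(A)
y_B = model(B)
var_y = y_A.var()

# First-order index S_i: replace column i of A with column i of B and apply
# the estimator S_i ~ mean(y_B * (y_ABi - y_A)) / Var(Y).
for i in range(k):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]
    y_ABi = model(AB_i)
    S_i = np.mean(y_B * (y_ABi - y_A)) / var_y
    print(f"S_{i + 1} ~ {S_i:.2f}")
```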
The proposed session aims to demonstrate the merit of NUSAP and SAUD based on a series of worked examples produced by practitioners directly involved in the development and application of these practices.
REFERENCES
[1] S. Funtowicz and J. R. Ravetz, Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer, 1990. doi: 10.1007/978-94-009-0621-1_3.
[2] J. P. van der Sluijs, M. Craye, S. Funtowicz, P. Kloprogge, J. R. Ravetz, and J. Risbey, “Combining Quantitative and Qualitative Measures of Uncertainty in Model-Based Environmental Assessment: The NUSAP System,” Risk Analysis, vol. 25, no. 2, pp. 481–492, May 2005.
[3] J. P. van der Sluijs, J. S. Risbey, and J. R. Ravetz, “Uncertainty Assessment of Voc Emissions from Paint in the Netherlands Using the Nusap System,” Environmental Monitoring and Assessment, vol. 105, no. 1–3, pp. 229–259, Jun. 2005, doi: 10.1007/s10661-005-3697-7.
[4] A. Saltelli, Â. Guimaraes Pereira, J. P. van der Sluijs, and S. Funtowicz, “What do I make of your latinorum? Sensitivity auditing of mathematical modelling,” International Journal of Foresight and Innovation Policy, vol. 9, no. 2/3/4, pp. 213–234, 2013, doi: 10.1504/IJFIP.2013.058610.
[5] S. Lo Piano, R. Sheikholeslami, A. Puy, and A. Saltelli, “Sensitivity auditing: an important ingredient in the evaluation of models and metrics,” Submitted, 2021.