Probing the Improbable: The Challenge of Identifying Risks with Low Probabilities and High Stakes

Abstract (Via Toby Ord, Rafaela Hillerbrand, Anders Sandberg @ Oxford)

Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks, and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons, such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
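To make the conditioning explicit, here is a minimal sketch of the decomposition the abstract gestures at. The notation is ours (X for the catastrophic outcome, A for the event that the argument is sound) and may not match the paper's exactly.

```latex
% Sketch of the decomposition behind the abstract's point.
% X = the catastrophic outcome, A = "the argument is sound" (our labels).
\[
  P(X) \;=\; P(X \mid A)\,P(A) \;+\; P(X \mid \neg A)\,P(\neg A)
\]
% The figure an expert reports is P(X | A). When P(\neg A), the chance that
% the argument is flawed, is much larger than that figure, the second term
% can dominate, so the headline number understates the true probability.
```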

Introduction (Via Toby Ord, Rafaela Hillerbrand, Anders Sandberg @ Oxford)

Large asteroid impacts are highly unlikely events. Nonetheless, governments spend large sums on assessing the associated risks. It is the high stakes that make these otherwise rare events worth examining. Assessing a risk involves consideration of both the stakes involved and the likelihood of the hazard occurring. If a risk threatens the lives of a great many people, it is not only rational but morally imperative to examine the risk in some detail and to see what we can do to reduce it.
This paper focuses on low-probability high-stakes risks. In section 2, we show that the probability estimates in scientific analysis cannot be equated with the likelihood of these events occurring. Instead of the probability of the event occurring, scientific analysis gives the event’s probability conditioned on the given argument being sound. Though this is the case in all probability estimates, we show how it becomes crucial when the estimated probabilities are smaller than a certain threshold.
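As a back-of-the-envelope illustration of that threshold point (the numbers below are purely hypothetical and not drawn from the paper): even if an argument yields a conditional probability of one in a billion, a modest chance that the argument itself is flawed can dominate the overall estimate.

```python
# Illustrative numbers only (not from the paper): a tiny conditional
# estimate can be swamped by the chance the argument itself is flawed.

p_x_given_sound = 1e-9   # disaster probability if the argument is sound
p_flawed = 1e-3          # chance the argument contains a fatal flaw
p_x_given_flawed = 1e-4  # hypothetical disaster probability if it is flawed

# Total probability of the disaster, weighting both cases.
p_x = p_x_given_sound * (1 - p_flawed) + p_x_given_flawed * p_flawed
print(f"Combined probability: {p_x:.2e}")  # ~1e-7, dwarfing the 1e-9 headline figure
```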

Miguel’s Favorite Quotes (Via Toby Ord, Rafaela Hillerbrand, Anders Sandberg @ Oxford)

Flawed arguments are not rare. One way to estimate the frequency of major flaws in academic papers is to look at the proportion which are formally retracted after publication.

The most common way to assess the reliability of an argument is to distinguish between model and parameter uncertainty and assign reliabilities to these choices.

Calculation errors are distressingly common. There are no reliable statistics on the calculation errors made in risk assessment or, even more broadly, within scientific papers. However, there is research on errors made in some very simple calculations performed in hospitals.

Basic Message (Via Toby Ord, Rafaela Hillerbrand, Anders Sandberg @ Oxford)

The basic message of our paper is that any scientific risk assessment is only able to give us the probability of a hazard occurring conditioned on the correctness of its main argument. The need to evaluate the reliability of the given argument in order to adequately address the risk was shown to be of particular relevance for low-probability high-stakes events.

Click Here To Read: Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes

21. January 2010 by Miguel Barbosa
Categories: Curated Readings, Risk & Uncertainty
