Quantitative risk assessment (QRA) software
Quantitative risk assessment (QRA) software and methodologies give quantitative estimates of risks, given the parameters defining them. They are used in the financial sector, the chemical process industry, and other areas.
In financial terms, quantitative risk assessments include a calculation of the single loss expectancy (SLE): the expected monetary loss from a single occurrence of an adverse event affecting an asset.
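As a minimal sketch of how such figures are typically derived (the variable names and figures here are illustrative, not taken from any particular tool), the single loss expectancy is the asset value multiplied by an exposure factor, and it can be annualised by multiplying by an expected annual rate of occurrence:

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected monetary loss from a single occurrence of the event.

    exposure_factor is the fraction of the asset's value lost per incident (0.0 to 1.0).
    """
    return asset_value * exposure_factor


def annualized_loss_expectancy(sle: float, annual_rate_of_occurrence: float) -> float:
    """ALE: expected monetary loss per year from repeated occurrences."""
    return sle * annual_rate_of_occurrence


# Hypothetical values: a $2M asset losing 25% of its value per incident,
# with incidents expected roughly once every four years.
sle = single_loss_expectancy(2_000_000, 0.25)    # 500,000
ale = annualized_loss_expectancy(sle, 0.25)      # 125,000
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")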
In the chemical process and petrochemical industries, a QRA is primarily concerned with determining the potential loss of life (PLL) caused by undesired events. Specialist software can be used to model the effects of such an event and to help calculate the potential loss of life. Some organisations use the risk outputs to assess the implied cost to avert a fatality (ICAF), which can be used to set quantified criteria for what constitutes an unacceptable risk and what is tolerable.
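As a rough illustration of these two quantities (the event frequencies, fatality counts, and costs below are hypothetical, not drawn from any specific QRA package), PLL can be approximated by summing, over the modelled events, each event's frequency multiplied by its expected number of fatalities, and the ICAF of a proposed measure is then its annualised cost divided by the reduction in PLL it achieves:

def potential_loss_of_life(events):
    """PLL: expected fatalities per year, summed over modelled events.

    Each event is a (frequency_per_year, expected_fatalities) pair.
    """
    return sum(freq * fatalities for freq, fatalities in events)


def implied_cost_to_avert_fatality(annual_cost_of_measure, pll_before, pll_after):
    """ICAF: cost per statistical fatality averted by a risk-reduction measure."""
    return annual_cost_of_measure / (pll_before - pll_after)


# Hypothetical figures for illustration only.
baseline = [(1e-4, 10.0), (5e-3, 0.5)]     # PLL = 3.5e-3 fatalities/year
mitigated = [(1e-4, 10.0), (1e-3, 0.5)]    # PLL = 1.5e-3 fatalities/year
icaf = implied_cost_to_avert_fatality(50_000,
                                      potential_loss_of_life(baseline),
                                      potential_loss_of_life(mitigated))
print(f"ICAF = ${icaf:,.0f} per statistical fatality averted")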
In the explosives industry, QRA can be applied to a wide range of explosives risk problems. It is especially useful for site risk analysis when reliance on quantity-distance (QD) tables is not feasible.
Some of the QRA software models described above must be used in isolation: for example, the results from a consequence model cannot be used directly in a risk model. Other QRA software packages link different calculation modules together automatically to streamline the process. Some of the software is proprietary and can only be used within certain organisations.
Due to the large amount of data processing required by QRA calculations, the usual approach has been to use two-dimensional ellipses to represent hazard zones, such as the area around an explosion within which there is a 10% chance of fatality. A similarly pragmatic approach is taken when simplifying dispersion results: typically a flat, unobstructed terrain is assumed when determining the behaviour of a dispersing cloud and/or a vaporising pool. This becomes problematic where uneven terrain or the complex geometry of process plants would significantly alter the behaviour of a dispersing cloud. Despite their limitations, the 2D hazard zone and the simplified approach to 3D dispersion modelling allow large volumes of risk results to be handled under known assumptions to assist in decision-making. The trade-off shifts as computer processing power increases.
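The following sketch shows the 2D approach in its simplest form (the ellipse dimensions, lethality level, and population grid are invented for illustration): each hazard zone is represented as an ellipse centred on the release point and oriented along the wind direction, and an event's PLL contribution is accumulated by testing which populated locations fall inside it.

import math

def inside_ellipse(x, y, cx, cy, semi_major, semi_minor, bearing_rad):
    """True if point (x, y) lies inside an ellipse centred at (cx, cy)
    whose major axis is rotated by bearing_rad from the x-axis."""
    dx, dy = x - cx, y - cy
    # Rotate the point into the ellipse's own coordinate frame.
    u = dx * math.cos(bearing_rad) + dy * math.sin(bearing_rad)
    v = -dx * math.sin(bearing_rad) + dy * math.cos(bearing_rad)
    return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0

def pll_contribution(event_frequency, lethality, ellipse, population_points):
    """Expected fatalities per year for one event:
    frequency x lethality x number of people inside the hazard zone."""
    cx, cy, a, b, bearing = ellipse
    exposed = sum(people for x, y, people in population_points
                  if inside_ellipse(x, y, cx, cy, a, b, bearing))
    return event_frequency * lethality * exposed

# Hypothetical 10% lethality ellipse (150 m x 60 m, aligned with a wind bearing of 30 degrees)
# and three populated locations given as (x, y, number of people).
ellipse = (0.0, 0.0, 150.0, 60.0, math.radians(30))
population = [(50.0, 20.0, 4), (120.0, 90.0, 2), (300.0, 0.0, 10)]
print(pll_contribution(1e-4, 0.10, ellipse, population))  # 4e-05 fatalities/year

In a full QRA this calculation would be repeated over many release scenarios, weather conditions, and wind directions, which is why keeping each hazard zone to a simple 2D shape matters for tractability.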
Modelling the consequences of hazardous events in a true 3D manner may require a different approach, for example using a computational fluid dynamics (CFD) method to study cloud dispersion over hilly terrain. Building CFD models demands significantly more of the analyst's time because of their increased complexity, which may not be justified in all cases.
One major limitation of QRA in the safety field is that it focuses primarily on the loss of containment of hazardous fluids and what happens when they are released. This renders QRA somewhat unworkable in hazardous industries that do not centre on fluid containment yet are still subject to catastrophic events (e.g. aviation, pharmaceuticals, mining and water treatment). This has led to the development of a risk process that draws on the experience of organisations and their employees to produce risk assessments with potential loss of life (PLL) outputs without fault tree and event tree modelling. This process is probably most commonly known as SQRA, the first such methodology to enter the marketplace in the late 1990s, but is perhaps more accurately described by the term Experience-based Quantification (EBQ). Today there is a choice of software with which to undertake this methodology, and it has been used extensively in the mining industry worldwide.
In an effort to be fairer and to avoid adding to already high imprisonment rates in the US, courts across America have started using quantitative risk assessment software to inform decisions about bail and sentencing, based on defendants' histories and other attributes. One widely reported analysis of recidivism risk scores calculated by one of the most commonly used tools, the Northpointe COMPAS system, examined outcomes over two years and found that only 61% of those deemed high risk actually committed additional crimes during that period, and that African-American defendants were far more likely to be given high scores than white defendants. These results are part of larger questions being raised in the field of machine ethics about the risk of perpetuating patterns of discrimination through the use of big data and machine learning across many fields.