The Vastness of Values in Health Economics Modelling
Dear Readers:
This blog post is adapted from a presentation I gave to the American Philosophical Association on February 24, 2021, as part of a panel on Science, Policy, and Epidemiology. It includes some general information on health economics, as well as my own views on why social values are always relevant in the modelling process. Everything here is open for question and discussion! Thanks for reading- Stephanie Harvard
Introduction: Overview, Perspective & Sources
Hi everyone, thank you for having me today. My name is Stephanie and I'm a fellow at UBC. My interest is in health economics, but more specifically in talking about health economics with the public and understanding the social and ethical value judgments involved in building health economics models. Hopefully what I have to say will be a useful introduction to the topic of science, policy, and epidemiology. When I talk about ‘models’ today, I'm referring quite generally to the sort of computer models used in the health sciences, so both statistical models and simulation models that have a time component. At one point, I'll talk specifically about decision models in health economics.
First, I'll review some very basic principles of public health, and health economics in particular. Then I'll tell you about a qualitative study I did that applied the philosophical literature on values in science to health economics modelling. This shaped my reasoning about why there are social value judgments throughout the modelling process. Then I'll make some more general comments about the reasons it's hard to simply "follow the science" to inform public health policy. Last, I want to say why I think it would be good to have more interaction between philosophers and health economists, and better public conversations about values in health-policy-oriented science.
Just so you know who you're hearing from today: I'm not a high-up person in any domain, but I've worked in public health since 2005 and I did a PhD focused on economic evaluation in health, specifically on economic evaluations of treatment recommendations. So I've spent a lot of time thinking about the costs and consequences of specific types of health advice. My sources for today are standard references in health economics1,2 and some recent commentaries on the pandemic by health economists.3,4,6 I'll also be drawing on numerous philosophers who write about values in science.5
The ‘101’
Public health is concerned with health at the population level, so it's sensitive to the difference between what will benefit an individual and what will benefit a group. The research often focuses on the effect of 'health interventions', which can take a variety of forms, and it looks at a range of different 'health outcomes'- phenomena or constructs of various levels of complexity. And, very generally, the purpose of doing this is to inform health policy decisions.
So, let's take an example of a health policy decision: who exactly should get a drug to make sure they don't get bat rabies? Back in 2007, one population that drew some attention was people who wake up from a deep sleep, find out there's a bat in their bedroom, and can't confirm they didn't touch it.7,8
This concern arose because some data had shown that only a third of people with bat rabies had an obvious bat bite. This led people to think that possibly a bat bite could be so minor that it would escape a person's attention while they were sleeping. So it was recommended that people with "bedroom bat exposure without recognized contact" should get Post-Exposure Prophylaxis (PEP). Now, this seemed to me to be a pretty reasonable policy.
However, a group of public health researchers thought to consider certain other information, including how frequently people wake up to find a bat in their bedroom.7,8 So they did a telephone survey of people in Quebec and, based on the results, concluded this might happen to roughly one out of 1,000 people every year. They also looked at epidemiological data, which showed that between 1990 and 2007, only 2 people with bat rabies got it specifically from "bedroom bat exposure without recognized contact". So they estimated that the incidence of this was roughly 1 case per 2.7 billion person-years. Since about one in 1,000 people has such an exposure each year, this meant that to prevent 1 case of bat rabies due to this type of exposure, you would have to give PEP to roughly 2.7 million people. Furthermore, doing that would cost almost 2 billion dollars.
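To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python. The per-course PEP cost is an assumed, illustrative figure chosen to reproduce the roughly $2-billion total; the other numbers are the ones reported above.

```python
# Reported figures: ~1 in 1,000 people per year wakes up to a bat in the bedroom,
# and rabies from such exposures occurred at ~1 case per 2.7 billion person-years.
exposures_per_person_year = 1 / 1000
person_years_per_case = 2.7e9

# How many exposed people you would need to treat with PEP to expect to prevent one case
exposures_per_case = person_years_per_case * exposures_per_person_year
print(f"{exposures_per_case:,.0f} people treated per case prevented")  # 2,700,000

# Assumed, illustrative cost per PEP course (not taken from the original study)
cost_per_course = 740
print(f"${exposures_per_case * cost_per_course / 1e9:.1f} billion per case prevented")  # ~$2.0 billion
```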
So this research team concluded that the PEP guidelines should be revisited. Specifically, they noted that this bedroom recommendation effectively implied that no level of risk was acceptable, and that this would come at a very high cost.7 I think this example shows that, sometimes, our views about a health policy can change once we consider both epidemiological data and cost data alongside each other.
This insight is the foundation of health economics. Health economics is the branch of public health science that asks whether a given health intervention is worth doing, considering the other things we could do with the same resources.
It asks this question under the assumption that resources are scarce: that spending comes at an ‘opportunity cost’. This is just a way of saying that if you spend a dollar on something, you've lost the opportunity to spend it on something else. That's the sacrifice.
Not all public health models include costs, but they all inform decisions with health and economic consequences. Health economics is sometimes viewed as a cold or crude branch of public health science, but one can also view public health models that don't explicitly include costs as incomplete, even problematically incomplete.
So the ‘101’ of health economics involves a couple of core concepts. One is opportunity cost. Another is ‘efficiency’: minimizing inputs for a given output, producing cost-effective outputs at an 'optimal' rate, and allocating them in an 'optimal' way, whatever that means.1,2
Another one is ‘marginal’ or ‘incremental analysis’: here, the focus is on the costs and consequences of having ‘a little bit more or a little bit less’1 of something in the health system, and on the optimal balance of health programs. One common way of comparing two health interventions is the incremental cost-effectiveness ratio, or “ICER”, which is the difference in their costs divided by the difference in their effects.
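As a minimal sketch of the calculation (the costs and effects below are invented for illustration, and I'm using life years as the effect measure):

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of health effect."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Hypothetical numbers: the new intervention costs $12,000 and yields 6.0 life years;
# the comparator costs $4,000 and yields 5.5 life years.
print(icer(12000, 4000, 6.0, 5.5))  # 16000.0, i.e. $16,000 per life year gained
```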
In the context of the pandemic, health economists have advanced certain critiques of public health responses, which reflect this way of thinking. Some common themes are: first, we should openly acknowledge that there are trade-offs being made in any pandemic policy; second, we should do marginal analysis to quantify these trade-offs - for example, the trade-offs involved in imposing 'lockdowns' at different levels of contagion. They've also pointed out that the most common ways of evaluating health interventions, using outcome measures like ‘Quality Adjusted Life Years’, have not been applied in the pandemic.
For example, Chilton and others write that "Evaluation tools are there to aid decision makers by providing clear information that highlights the nature of the trade-offs being made.... As well as the already noted fiscal costs, the opportunity costs will include social isolation, deepening inequalities, cancer treatments displaced, elective surgeries cancelled, and so forth."3
Reddy writes that "An ‘optimal’ programme of non-pharmaceutical interventions would be arrived at by weighing the marginal health benefits and costs entailed by varying each dimension of their implementation...taking note of both direct and indirect consequences."4
And Donaldson and Mitton write that "Unsurprisingly, a ‘great big marginal analysis’, of the sort currently required...has never, to our knowledge, been undertaken."6 This speaks to the fact that measuring costs and benefits in the context of a global problem like this is an absolutely daunting task.
Already, it should be clear that health economics involves some obvious social value judgments- notably, deciding what to define as a benefit and what distribution of benefits is desirable. A key question is whether a health benefit is of equal value no matter who gets it, or whether health benefits should be valued more highly among certain groups.
Two dominant approaches in health economics are referred to as ‘welfarist’ and ‘extra-welfarist’.1,9 On the welfarist approach the desired benefit is the utility associated with the consumption of goods and services, since health services are regarded like any other goods and services - so benefits are expressed in terms of ‘willingness to pay’. On the extra-welfarist approach, the desired benefit is health itself: health is conceived as its own thing and valued in its own right. In practice, both of these schools have ignored distributional issues and just focused on maximizing total benefits regardless of who gets them.
Extra-welfarism, specifically, raises the conundrum of how exactly to define and value ‘health’. This requires numerous other value judgments, like 1) what categories of functioning should be considered parts of health, 2) what weights should be applied to different health states, and 3) who should decide, and who should decide who decides?2
In general, a number of different outcomes can be used as health indicators. One of these is deaths- and certainly deaths are very often desired information. However, deaths are not informative in all contexts. For example, in end-of-life care or in physiotherapy, the outcome of interest would not typically be death. One alternative that gives additional information is life years gained, but this also leaves out other information, like quality of life.
There's also a desire in health economics to be able to compare cost-effectiveness across treatments for different conditions. For example, if one drug reduces swollen joints, and another reduces lung attacks, health economists want a way to compare their value and help choose which one to invest in.
This has led to the development of various standardized health outcome measures, among which one very influential measure is the Quality-Adjusted Life Year (QALY).
To calculate a QALY you need both data on length of life and quality of life, but more specifically the health-related 'utility' or preference associated with a health state. This type of utility is expressed on a scale of 0-1, where 0 implies death and 1 implies full health. And it's measured indirectly with generic quality-of-life questionnaires. The two most common ones that I know of are the EQ-5D and the SF-36, each of which considers slightly different things as dimensions of health.
(Now, I really wanted to include in this presentation the details of how quality of life data are converted into utility scores- but I was told that this would be burdensome to listen to. However, if you want to know what's under the hood of the QALY, please Google the "Time Trade Off" and the "Standard Gamble".)
To summarize, to get a QALY, utility scores are multiplied by data on length of life. So, if my treatment extends my life for 1 year, but my subsequent health state has a utility value of 0.5, I gain half a QALY.
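As a minimal sketch of the arithmetic (the health states and utility values are invented for illustration), a QALY total is just a sum of time spent in each state weighted by that state's utility:

```python
def qalys(states):
    """Sum of (years in state * utility of state)."""
    return sum(years * utility for years, utility in states)

# Hypothetical profile: 1 year at utility 0.5, then 2 years at utility 0.8
print(qalys([(1, 0.5), (2, 0.8)]))  # 2.1 QALYs
```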
In practice, QALYs go into ICER calculations. This gives the additional cost per additional QALY produced by a new treatment or health program, and that information is used to help decide whether the new treatment should be funded. It's very common to assess the value of health programs by comparing the ICER to a threshold value that represents an acceptable cost per additional QALY. This raises the question: how much is an extra QALY worth? Furthermore, in practice, thresholds are just a guide, and decision-makers can always make exceptions. When should we make an exception?
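As a minimal sketch of that threshold comparison (the ICER and threshold values are purely illustrative, not from any real appraisal):

```python
def fund_decision(icer, threshold):
    """Simple threshold rule: fund if the extra cost per QALY is below the threshold.
    In practice this is only a guide, and decision-makers can make exceptions."""
    return icer <= threshold

# Hypothetical: an ICER of $16,000/QALY against an illustrative $50,000/QALY threshold
print(fund_decision(16000, 50000))  # True
```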
One idea is that we have an obligation- or at least a motivation- to rescue certain people regardless of cost. For example, when we can identify specific individuals in immediate danger, we might not consider cost. This is referred to as the ‘Rule of Rescue’.10 At the same time, there's a tension between cost-effectiveness and the Rule of Rescue: caught between the threshold approach and the Rule of Rescue, we can end up with health policies that increase expenditures without increasing aggregate population health.
That brings me to the end of my health economics ‘101’. What I hope you took away from it was that the discipline centres around social value judgments. Even if you didn't Google the Time Trade-Off or Standard Gamble, you know that health economists have a big role in defining what a health benefit is and in influencing how benefits are distributed. And we haven't even gotten into the specifics of modelling. So that's what I'll focus on now.
Values in Health Economics Modelling
A long-held ideal is that we should restrict the influence of social values to certain stages in science. So, some people have the idea that even though there are obvious value judgments in health policy-making, the modelling process that informs health policy can be value-free. Numerous philosophers have argued that this is not possible. That said, philosophers have each expressed their reasons in slightly different terms, and not with reference to health economics modelling.
In 2019, I did a study where I interviewed 22 health economics modellers on this topic.5 First, I briefed them about different arguments that philosophers make about why there are value judgments throughout the scientific process. Then I asked them if they had any examples from their modelling practice that supported those arguments. So, in the paper, examples from modelling practice are grouped by philosophical arguments about values in science. But one problem with doing it that way was that philosophers' arguments are not totally distinct from one another, and articulating the overlap between them is difficult. So, I've been focused on how to express, particularly to the public, why there are social value judgments throughout health economics modelling. I'm going to say it here in 3 categories: what to model, how to model it, and what to conclude. In this context, I'm using examples from what health economists call decision-analytic models, which are simulation models that incorporate data from multiple sources and represent events over time.
What to Model?
‘What to model’ refers to decisions about what to include in the model. In my view, these decisions are all straightforward extensions of the research question, which effectively everyone agrees involves a social value judgment. In health economics, these are sometimes quite obvious moral judgments about what options are acceptable or even possible to consider. In the context of the pandemic, I imagine we can all think of different strategies that only some people think are even okay to consider.
First, we have to specify at a very high level of detail what the alternative strategies are. Here, a lot of moral micro-questions pop up. For example, it should be clear that "lockdown" is not one clear strategy that you could model the costs and effects of. You would have to grapple with specific questions about who exactly will be restricted from doing what and under what conditions. And this is broadly true in health economics: treatment strategies have to be specified in terms of dose, mode of administration, target population, and so on, all of which can be the subject of ethical disagreements.
Once alternative strategies are specified in detail, you then have to specify exactly what costs you're going to include. In health economics, this is discussed under the heading of 'the perspective'. So studies are done from the health insurer's perspective or the societal perspective, etc., because they include the costs that would be relevant from that point of view. In a study done from the ‘societal perspective’, common questions are things like, “will we include patients' out-of-pocket costs?”
Finally, just like costs, we need to decide what outcome or outcomes are going to be represented in the model. So, we have to decide what we think is important to know. Furthermore, we have to make trade-offs. Outcomes like deaths might be relatively straightforward to measure, but they may not be informative enough for our purposes. We might want information on quality of life, but we may question the specifics of the questionnaires, or the other assumptions that are under the hood of the QALY.
Decisions around what to include in the model are shot through with social values, because they determine what you consider to be definitely relevant to policy-making. Furthermore, you could always make the overarching judgment that the right data don't exist for a particular model and so it would be wrong to build it at all. So going ahead with building a model is a value judgment.
This is something that participants in the interviews spoke to. For example, one participant said, "We made the decision not to simulate anything in that area because the lack of evidence would just produce results that were just fantasy."
How to Model It?
Once we've decided what will be included in the model, we have to decide how to include it. In practice, this overlaps with 'What to Model' decisions, since those are informed by data availability. Nonetheless, I think it's useful to think about decisions about things once they're already chosen for inclusion in a model. So, the first question is what data sources are we going to use to estimate probabilities of events and costs and outcomes associated with alternative strategies? Here, we face all of the unique problems with different data sources. For example, we may want to use randomized controlled trial data, because we expect it to be less affected by confounding, but we may worry that RCT data isn't widely generalizable. So there are trade-offs involved. And the overall judgment is always whether a data source is adequate for our purposes.
Another key decision in this type of modelling concerns the so-called ‘time horizon’: how far into the future the model is going to represent things. Different health programs will have different ‘time profiles’, meaning their costs and consequences will kick in at different times. So, models with different time horizons will represent different costs and consequences.
A related issue is that most data sources in health will represent things over a relatively short period of observation. However, health policy-makers often care about effects five years or more after a decision. So the choice of time horizon invites more trade-offs. For example, there may be concerns about whether it's appropriate to extrapolate RCT data over time, but also concerns about not providing policy-makers the desired information. So again, there's a meta-judgment about when exactly a model might cross into fantasy, in an ethically bad way.
Related to the choice of time horizon is the practice called discounting. This is where costs and benefits that happen far into the future are reduced by some percentage. And people will disagree about whether this should be done, and by what amount.
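As a minimal sketch of the mechanics (the 3% rate and the cost figure are purely illustrative; actual rates are usually recommended by national guidelines), discounting reduces a future cost or benefit to its present value:

```python
def present_value(amount, rate, years):
    """Discount a future cost or benefit back to its present value."""
    return amount / (1 + rate) ** years

# Hypothetical: a $1,000 cost incurred 10 years from now, at a 3% annual discount rate
print(round(present_value(1000, 0.03, 10), 2))  # 744.09
```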
Another major question concerns the representation of uncertainty. Often, there will be uncertainty around what range of values something might take in the world. For example, we might be unsure about how much a treatment will actually cost. So the practice is to do sensitivity analyses to see the effect of varying that parameter. But a few questions pop up: first, what value will be represented as being the most plausible? Then, how many different parameters will be varied, and by how much? Often, model results will be sensitive to the values that are chosen. And often the results of different sensitivity analyses would support different policy decisions.
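Here is a minimal sketch of a simple one-way sensitivity analysis (all the numbers are invented): we vary one uncertain parameter, the cost of the new treatment, and watch how the ICER moves relative to an illustrative threshold.

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost per additional unit of effect (here, per QALY)."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Vary the uncertain cost of the new treatment across a plausible range
for cost_new in (8000, 12000, 20000):
    print(cost_new, icer(cost_new, cost_comparator=4000, effect_new=6.0, effect_old := 5.5 if False else 5.5))
```

(One small note on the loop above: with these values the ICER comes out at $8,000, $16,000, and $32,000 per QALY, so against an illustrative $20,000/QALY threshold the conclusion flips at the high end of the cost range.)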
A related decision has to do with representing differences between people. There might be evidence that costs and outcomes will vary significantly between different groups. So the question is whether this will be modelled, and if it will be, exactly how will different groups be defined. Depending on how groupings are done, you might see different results.
To summarize, these are just some of the major decisions that pertain to how to model things in health economics. My interpretation is that each one of these decisions has an ethical dimension. Each one is a decision about what representation of things is going to be adequate for our purpose. And our purpose is to inform these policy decisions with health and economic consequences.
What to Conclude?
The last category of value judgments that I would carve out has to do with what to conclude. I'll divide them into the modelling context and the policy context, even though I think these overlap in practice.
First, on the basis of a finished model, what should we conclude is a 'fact'? This is a decision that the modellers or the scientists have a central role in making. And it has a direct influence on policy decisions.
Next, what are we going to do on the basis of the model results? This is a decision that policy-makers have a central role in making. But it's clear that health economists do sometimes directly advise policy-makers. I'll briefly mention two issues here.
First, how should we make decisions under uncertainty? One popular view among health economists is that we should choose whatever looks like it will give the most benefit, regardless of the uncertainty around it.12 And the rationale is that one of the alternatives has to be chosen and the decision can't be deferred - or rather, deferring it is a decision in its own right.
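To make that decision rule concrete, here is a minimal sketch (the strategies, probabilities, and net-benefit figures are all invented): pick the option with the highest expected net benefit, regardless of how wide its spread of possible outcomes is.

```python
# Each option: a list of (probability, net_benefit) scenarios from a probabilistic analysis
options = {
    "strategy_A": [(0.5, 100), (0.5, 140)],   # fairly certain, expected value 120
    "strategy_B": [(0.5, -50), (0.5, 300)],   # highly uncertain, expected value 125
}

def expected_value(scenarios):
    return sum(p * v for p, v in scenarios)

# The rule picks B despite its much wider spread of outcomes
best = max(options, key=lambda k: expected_value(options[k]))
print(best, expected_value(options[best]))  # strategy_B 125.0
```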
Second, what factors should we consider that were not explicitly modelled? This matters because things are often excluded from models because they're too time-consuming or expensive to model, or simply impossible to model. And it's often said that policy-makers never consider only cost-effectiveness. They might also consider budget impact, equity, public opinion, fear of specific outcomes, and so on. This raises the question of how to explain to the public what actually drove a policy decision, when ostensibly it was informed by a model that maybe they heard about.
So, I think that decisions around what to conclude all have an ethical dimension. Decisions around 'what to do' are straightforwardly ethical decisions. And I think decisions about what is a 'fact' have a special influence on those decisions.
‘Follow the Science’: Obstacles
So that's the end of my comments on values in modelling. I hope what you took away from it is that there are social value judgments throughout the process. To summarize, I'll use a distinction that Eric Winsberg and I have made recently. First, decisions about what to model and how to model it are decisions about representation.13 They're decisions about what's adequate for our purpose, more than decisions about what's true or false- so they invoke the social values that link to our purposes. Second, after a model is complete, we might conclude what is a 'fact'. These decisions have a unique social significance, and a different significance from decisions about representation.
So, now I want to comment more generally on why it's not straightforward to simply ‘Follow the Science’ in a health policy context. This is something that people do still recommend doing, at least on social media where it's a hashtag. So it's good to review the obstacles.
First, sometimes we hear that science can't tell us what to do. I think this is true, but I think it's a bit too simply put. Depending on how health policy questions are framed, it can sometimes seem like it's totally obvious what we should want to do, so it's not taken as a serious problem that science doesn't provide this direction. For example, it's sometimes been taken as obvious that we should aim to avoid COVID deaths, and follow whatever strategy we think will get those numbers down as far as possible. But the situation is more complicated, since that strategy has costs and consequences that not everyone thinks they're prepared to accept. This is leaving aside, for the moment, the fact that those costs and consequences are difficult and maybe impossible to estimate.
So, at the very least, it's a bit more specific to say, science can't tell us what to sacrifice or what trade-offs to make. Furthermore, we might do what Martha Nussbaum suggests and separate what she calls the obvious question of 'what to do' from the 'tragic' question of whether any of our options are morally acceptable.14 Like she says, cost-benefit analysis does not aim to answer whether any policy options avoid moral wrongdoing. And neither does science. So that's an overarching obstacle worth mentioning first.
Another obstacle is that the constructs in health policy-oriented science are value-laden. So there are causal claims that come pre-packaged with normative assumptions. Anna Alexandrova calls these 'mixed' claims.15 And I think this is the specialty of health economics. Every claim about what moves the QALY up and down is a mixed claim, since the QALY packs in a huge number of normative assumptions. But it's also a morally significant choice to count deaths rather than to count QALYs or something else. In most health-policy-oriented contexts, in order to ‘Follow the Science’ you also have to ‘Follow the Normative Presuppositions’.
A related obstacle has to do with the fact that different scientific study designs and representations make different trade-offs, in terms of what kinds of questions they help to answer. Donal Khosrowi and Julian Reiss discuss this.16 For example, they note that data from randomized controlled trials is preferred for guiding policy because it helps control for confounding- but it only gives good evidence about average effects and doesn't address distributive issues. So there's an ‘epistemic’ trade-off with a social implication. That's a core problem in health economics.
A related risk is what Eric and I have called 'Representational Risk'.13 This is the risk that a decision about scientific representation will not be adequate for purpose. For example, if we are interested in distributive effects, a model of RCT data may not be adequate for purpose- and there are social implications.
So, to conclude, I just want to say a few words about why I would like to see more interaction between health economists and philosophers. First, I'd like to see philosophers inform the constructs that end up on centre stage in health policy. For example, there's no philosophical theory behind the Quality-Adjusted Life Year and I wonder what we would use instead if we had philosophers' input.
Second, I'd like to know how philosophers think health economists should manage epistemic trade-offs and representational risk. And I think it would be good to involve philosophers in some of the conversations that health economists are being encouraged to have with the public about issues like this. An overarching question is to what extent value judgments should be made based on philosophical reasoning, on the views of the general public, and/or something else. Right now, health economics models are heavily influenced by what health economists and decision-makers think is right. And this is surely insufficient. So I'd like to see philosophers and health economists interact on this question.
Last, I think the biggest reason goes back to Nussbaum. Again, she points out that the obvious question of what to do is different from the tragic question of whether any option is morally acceptable. And she argues that facing the tragic question is important, because it informs our future actions. I think she means that policy-makers should think about how to limit tragic predicaments where there's no morally acceptable option. And they should consider when they might owe reparations to the people wronged in tragic predicaments. In my experience, this is a type of thinking that I don't encounter in health economics: the focus in that field is entirely on the obvious question and never on the tragic question. So I think interaction with philosophers could change how health economists think, and how they help health policy-makers make decisions and also plan for the future.
1. Drummond, M.F., et al. (2005). Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press.
2. Culyer, A. J. (1989). The normative economics of health care finance and provision. Oxford Review of Economic Policy, 5(1), 34–56.
3. Chilton S., et al. (2020) Beyond COVID-19: How the ‘dismal science’ can prepare us for the future. Health Economics. DOI: 10.1002/hec.4114
4. Reddy, S.G. (2020). Population health, economics and ethics in the age of COVID-19. BMJ Global Health 2020;5:e003259.
5. See Harvard, S., et al. Social, Ethical, and Other Value Judgments in Health Economics Modelling. Social Science & Medicine, 253: 1–9, and references therein.
6. Donaldson, C., & Mitton, C. (2020). Health economics and emergence from COVID-19 lockdown: the great big marginal analysis. Health Economics, Policy and Law. doi:10.1017/S1744133120000304
7. Huot, C., et al. (2008). The cost of preventing rabies at any cost: post exposure prophylaxis for occult bat contact. Vaccine, 26:4446–50.
8. De Serres et al. (2009). Bats in the bedroom, bats in the belfry: reanalysis of the rationale for rabies postexposure prophylaxis.
9. Coast, J. (2009). Maximisation in extra-welfarism: A critique of the current position in health economics. Social Science & Medicine 69 (2009) 786–792.
10. Cookson R., et al. (2008). Public healthcare resource allocation and the Rule of Rescue. J Med Ethics 2008;34:540–544.
12. See Claxton, K. (1999). The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies. Journal of Health Economics, 18: 341–364.
13. Harvard, S., & Winsberg, E. (2021). The Epistemic Risk in Representation. Pre-print. Available at: http://philsci-archive.pitt.edu/18576/
14. Nussbaum, M.C. (2000). The Costs of Tragedy: Some Moral Limits of Cost-Benefit Analysis. The Journal of Legal Studies, 29(S2).
15. Alexandrova, A. (2018). Can the Science of Well-Being Be Objective? Brit. J. Phil. Sci., 421–445
16. Khosrowi, D. & Reiss, J. (2019) Evidence-Based Policy: The Tension Between the Epistemic and the Normative. Critical Review, 31:2, 179-197.