Mohsen Sadatsafavi on the EPIC Model
SH: Today we’re going to talk about EPIC, but before that, would you tell me your answer to the question 'what is a model'?
MS: Well, it’s simple only on the surface. A decision model, or a prediction model, or a weather forecasting model, any model?
SH: Just the question of what is a model.
MS: A model is a tool that uses computations to turn what we call 'evidence' into projections of the future, especially of the consequences of our actions.
SH: So what is a decision-analytic model?
MS: A decision-analytic model is a computational tool that helps us project the outcomes of various scenarios when it comes to policies. Like, what if we pay for this drug, or approve that health technology? Again, it’s a computational tool that helps us project the outcomes of a decision at the policy level.
SH: Can you tell me a bit about EPIC?
MS: EPIC is a decision model for chronic obstructive pulmonary disease, or COPD, a common chronic disease of the airways. EPIC is a model that doesn’t answer just one decision question. It has the capacity to answer a lot of decision questions along the care pathway of COPD, from pre-diagnosis all the way to late complications.
There are a lot of decision questions to be answered in COPD. For example, if a provincial Ministry of Health or a Health Authority wants to fund a certain health technology for COPD, that would create a decision problem that needs to be solved. Instead of solving those decision problems one at a time by creating a new model every time, we decided to create a so-called 'Whole Disease' or reference model, such that these decision problems can be answered consistently using the same model. So EPIC is a Whole Disease model of COPD that is capable of answering these decision questions across the whole pathway of care.
SH: Can you give a few examples of questions that EPIC could be used for?
MS: For example, COPD is a highly under-diagnosed disease, right? For every two Canadians who know they have COPD, there are three who do not know. If we can't find Canadians with COPD, we cannot help them with the disease. The problem of under-diagnosis of COPD has a lot of decision problems attached to it: whether to screen for the disease or not, whether to do case-finding or not, case-finding at family physicians’ offices, and so on. That’s one set of questions that EPIC can answer. EPIC is capable of telling us, 'if you do a population-wide screening for COPD, over the next 25 years you’re going to have that many cases of COPD diagnosed. It’s going to cost you x million dollars to roll out this program, but then you’re going to save y million dollars because now you’re helping COPD patients and reducing their burden of disease, and you are improving quality of life by z years at the population level'. So, it makes projections of both cost and resource use, and also of what we call the humanistic burden: the gains in quality of life and longevity that we get by implementing a technology.
Imagine there is a new COPD drug coming on the market. Health Canada has approved it, but the BC Ministry of Health is considering whether to put the drug in the provincial formulary or not. EPIC can be used to compare this medication against alternative drugs that are already available to patients and say, 'instead of the current standard of care, if we use this drug for this eligible patient population, in the next 20 years we’re going to prevent that many lung attacks, which are a hallmark of COPD, and gain that much quality of life, but it’s going to cost us an extra x dollars'. And you can use these numbers to calculate what we call the incremental cost-effectiveness ratio, a metric that can be used to decide what is an efficient resource allocation, whether we should fund this drug or not. Or if you’re going to put up a campaign for smoking cessation, the same 'what-if' questions can be answered.
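To make the arithmetic concrete, here is a minimal sketch of the incremental cost-effectiveness ratio calculation described above. All numbers, and the icer helper itself, are hypothetical illustrations, not EPIC outputs.

```python
# Minimal sketch of the incremental cost-effectiveness ratio (ICER).
# All figures below are hypothetical, not outputs of EPIC.

def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """ICER = incremental cost / incremental effectiveness
    (effectiveness here in quality-adjusted life years, QALYs)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical 20-year projections: new drug vs. standard of care.
ratio = icer(cost_new=1_200_000_000, cost_old=1_000_000_000,
             effect_new=505_000, effect_old=500_000)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # ICER: $40,000 per QALY gained
```

A decision-maker would then compare this ratio against a willingness-to-pay threshold to judge whether funding the drug is an efficient allocation of resources.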
Another question is, if a patient with COPD has experienced a lung attack and has been in hospital, what kind of support do you want to provide to them when they go back home to their community? Do you want to put a nurse in touch with them to call them every week so that the next lung attack doesn’t happen, or at least its risk is reduced? EPIC is capable of projecting the consequences of these decisions and whether they provide value for money.
SH: I've understood from you that EPIC was not built to answer one specific question. Did that affect the strategy in building the model?
MS: Yes, it affected it in many different ways. It has major implications for how you develop a model. Whenever you have a single-purpose model, you design the model towards answering a specific question. The path in front of you is relatively clear and you have a level of parsimony, I would say. You know, diseases are complex systems. Whenever you model only one aspect of a disease, you ignore the complexities that do not directly affect the problem in front of you. Once you want to build a model that is capable of answering at least a reasonable set of potential questions, then properly modelling the disease pathway becomes important. A Whole Disease model, or reference model, is an order of magnitude more complex, even in terms of conceptualizing it.
Another aspect is, usually when you have a single-purpose model you are comparing individuals who will receive the health technology versus not. You mimic the design of a randomized clinical trial, as if you are giving some patients the health technology and others not. But when you have a reference model that is supposed to be capable of answering prevention questions, like screening, that design doesn’t work anymore. You have to adopt a so-called open-population design, in which case you are actually following the entire population in the jurisdiction of interest; in the case of EPIC, that is Canada.
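As an illustration of that design difference, here is a minimal sketch of how an open-population simulation keeps admitting new individuals over the horizon while a closed, trial-like cohort does not. The population sizes and entry rates are made up; this is not EPIC's actual simulation engine.

```python
# Minimal sketch: closed-cohort vs. open-population designs.
# Population sizes and entry rates are made up, not EPIC's.

def person_years(years: int, open_population: bool) -> int:
    """Total person-years simulated over the time horizon."""
    population = 1_000           # individuals alive at baseline
    total = 0
    for _ in range(years):
        if open_population:
            population += 20     # hypothetical new entrants per year
                                 # (births, immigration, new cases)
        total += population
    return total

print(person_years(25, open_population=False))  # fixed, trial-like cohort
print(person_years(25, open_population=True))   # whole-jurisdiction population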
Another one, and a very critical one, is the interaction between different components of the model. Decisions, and the responses to them, affect other decisions. For example, when there is a screening program for COPD in Canada, there may suddenly be many more diagnosed COPD patients who are eligible for COPD therapy. But these new patients are generally milder cases, because if you have severe COPD it’s more likely to be diagnosed; so you have a bigger COPD population, but the disease is milder on average. So whether or not you have a screening program in place will affect the cost-effectiveness of COPD treatment. When you have a reference model you have to be mindful of these interactions.
Also, if your model development is detached from any specific question, you cannot directly apply the 'best practice recommendations' that are out there: how to test the validity of a model, how to compare the results of your model with previous cost-effectiveness and decision-analytic results. Because at the beginning you don’t have any projections in mind. You just want to do a good job of simulating the disease process. A lot of design decisions get impacted by the fact that you don’t have any specific decision problem in mind. Rather, you are creating a platform that will answer a lot of decision problems down the road.
SH: What was the process, then, for deciding what to include in the model?
MS: Well, we didn’t use the gold standard for how to develop a model. We were learning ourselves. We walked along the path in a little bit of a zigzag, but at the end I think we covered the bases. Do you want me to talk about what would have been the ideal scenario or our personal experience in developing EPIC?
SH: I think both are interesting.
MS: When you’re developing a reference model, the first step is having a blueprint, like a so-called influence diagram or a conceptual map. You need to have a high-altitude map of what you’re going to do and what factors are important.
Any disease has certain chief actors influencing its natural history. What are the important risk factors? What are the important biomarkers that define the disease's progress? What are the important outcomes that are relevant to patients, the research community, and the clinical care community? These will decide the so-called structure of the model. And the best thing you can do once you start thinking of developing a reference model is to put a team together and start creating this conceptual map. It’s like a constitution. It doesn’t give you granular detail of exactly which steps you’re going to take, but nonetheless, it is out there and it is a high-level guide for you and your team.
We didn’t really kickstart this project before we had a good, reliable source of data for lung-function projection. Lung function is a metric used to define COPD progression, so a COPD model should properly model the trajectories of lung function decline in individuals. And there is ample evidence out there relating patient-related outcomes and clinically relevant outcomes to lung function. So, once you have lung function in place, you can map other outcomes. It’s like the trunk of a tree: once you have it, you can add branches and leaves and so on. We started from the data that came our way, then added branches to that.
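As a toy illustration of that "trunk", here is a minimal sketch of projecting lung function (FEV1) forward and hanging other outcomes off the trajectory. The decline rates are invented placeholders, not EPIC's estimated equations.

```python
# Minimal sketch of a lung-function (FEV1) trajectory, the "trunk"
# onto which other outcomes are mapped. Decline rates are invented
# placeholders, not EPIC's estimated equations.

def project_fev1(fev1_baseline: float, years: int, smoker: bool) -> list[float]:
    """Project FEV1 (litres) forward under a constant annual decline."""
    annual_decline = 0.060 if smoker else 0.030  # hypothetical litres/year
    return [fev1_baseline - annual_decline * t for t in range(years + 1)]

trajectory = project_fev1(fev1_baseline=2.5, years=10, smoker=True)
print([round(v, 2) for v in trajectory])
# The "branches": symptoms, exacerbation risk, quality of life, and
# costs would each be modelled as functions of this trajectory.
```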
Our guidance in EPIC was a grant that we had and certain technologies that were being developed as part of that funding activity; we were supposed to determine the cost-effectiveness of those technologies. So we had a set of decision problems in front of us. If I was dealing with one decision problem I would have created another de novo COPD model. But because that grant was large-scale, and we had like four decision problems that were kind of unrelated to each other, that became our motivation, “Let’s create a model that can answer these four decision problems at the same time in a consistent fashion”. Halfway through, we realized, “Oh, we are developing a Whole Disease model. Let’s go back to square one and do it properly."
We ended up with some wasted effort. We moved along a certain path, then stepped back, and then walked the path we should have walked at the beginning. Our path in EPIC was a little bit data-driven and objective-driven at the beginning, with eventual course correction, developing the Whole Disease model after one third of the work was done. Ideally, when you develop a reference model, you should have a conceptual map at the beginning, not halfway through.
SH: In developing EPIC, who has been involved from among different stakeholder groups?
MS: Given the special circumstances at that time, EPIC was heavily influenced by the research community. EPIC coincided with the creation of the Canadian Respiratory Research Network and my role there as the leader of the Health Economics platform. There was also a policy specialist, an expert in the science of policy-making and economics, but not necessarily in COPD. We also had a card-carrying clinician, who had a very deep understanding of COPD as a disease and the clinical outcomes. We had some involvement from policy-makers, but to be fair, I don’t think it ever materialized beyond conversation. A lesson I learned is that I involved them too early. We were still years away from creating the platform that could answer their questions, so we didn’t get significant input from them.
Patient involvement was nonexistent at the beginning stages of EPIC. I would say we missed the boat at the initial stages. But we were able to catch up once we had extra funding that helped us put together a patient group. That came towards the second half of the model development.
SH: So you do have a patient advisory group now?
MS: Not like a group of patients that advise us all the time. But, yeah. We missed one year, but for two other years we put together a patient advisory meeting about EPIC. It was organized by the BC Lung Association.
SH: You recently wrote an article with Amin about the value of making clinical prediction models available online. Does that argument apply to other types of models, like decision-analytic models?
MS: Not all the elements of the argument, but definitely overall. The value of accessibility, which is what we are really promoting, materializes in different ways between decision-analytic and clinical prediction models. First of all, there is scientific reproducibility. When you write a paper on a typical scientific experiment, the paper, plus maybe a few pages of supplementary material, should be enough for that work to be reproducible. That’s oftentimes not the case for decision-analytic models, specifically for reference models. You have made so many decisions, done so many bits of data analysis, that it’s impossible to incorporate all of this information in a typical-length manuscript. As a result, an economic evaluation often is not reproducible. By sharing a decision model with the public, the model becomes part of the paper. If you really want to hold up to the standard of reproducibility in science, the only way for a decision-analytic model is to make it accessible. That’s one aspect.
Another one is, the questions that we are answering, the policy questions, are oftentimes shared across many jurisdictions. By making models available, you significantly reduce the amount of duplicate work being done. If everybody had shared their COPD models, we probably wouldn’t have had to create a new model to begin with.
In general, I believe credibility also goes a long way. You know, if we are making decisions that affect people's lives, the apparatus for making those decisions should be public, so that members of the public can examine it. It’s really a quality improvement process. If I know that my model is going to be scrutinized over and beyond, I do a much better job. I think the quality of models is significantly improved this way, specifically in health economics.
In other disciplines where people do modelling, say weather forecasting, models are tested pretty much every day, and that quality improvement feedback is incorporated. Bad models are very easily phased out because they make bad predictions, and good models prevail.
In health policy we do not have that luxury. It is not that we can say, “Oh, screening for COPD has this cost-effectiveness”, and then a few years later actually measure its cost-effectiveness, because you either implement the program or you do not. And you will seldom have a chance to measure its cost-effectiveness once you implement it, if you do. Because in health economics and health policy you seldom get to see whether a model makes a proper prediction or not, that feedback loop is broken.
To me, making a health economics model as accessible and transparent as possible compensates for a quality control mechanism that we do not otherwise have.
SH: In terms of making EPIC available online, what’s the best outcome you could hope for?
MS: Again, it’s a reference model. It is custom-made for Canada but, really, only incremental work will be required to apply it to other jurisdictions. The best thing that could happen is to see it being used by as many people as possible. That would be the return on investment from a publicly-funded grant perspective. And I want people to not only use it for their own decision problems in COPD, but also provide feedback that will improve the quality of the model. “Why did you not have this study as part of evidencing your model? Why did you not include that outcome?” So that it becomes a constantly improving prediction platform.
SH: I know you’ve read the piece I did recently on social value judgments that influence model development. Is there an intersection between it and your own experience building models?
MS: What I learned from your work was the concept of the 'value-free ideal'. Before reading your paper, I probably was kind of like that myself. In my mind, science was totally objective, case closed. But your argument, and the review of the literature that you did, was quite convincing to me that that’s not the case, and probably never will be. Values are with us from the get-go, from even asking the question, all the way to how you communicate the results.
I'm not able to quantify it, but I have this vague feeling that there are different degrees of value judgment in different types of modelling. For example, I can make a broad claim that value judgments are more common in decision-analytic models than in clinical prediction models. When you are developing a clinical prediction model, it's not that it’s not there, but it’s less susceptible to your own decision-making based on your own values, or to other people who can influence that, compared to a decision-analytic model.
Within the realm of decision-analytic models, if the question is imposed on you and you have a path towards developing a single-purpose model, there are still a lot of value-driven decisions. But when you are developing a reference model, you have way more decisions of that type, no question about it.
I have an example of a conundrum we had that I think is connected to this.
SH: Tell me.
MS: When people talk about COPD, you know, it’s the number one reason for hospitalization in Canada. So hospitalization is a very important outcome. Any prediction that you make should have a statement on how it changes COPD hospitalizations. Everybody is worried about it. Not everybody in the community, but all the decision-makers, all the stakeholders that are providing input, they want to see the numbers on COPD admissions. The Canadian Institute for Health Information (CIHI) collects hospitalization rates by disease across the country and provides very high-quality reports. Those reports are really of high quality because hospitalization is not an outcome that you struggle to define. From the outset I knew that this model was going to need to do a very good job of projecting the number of COPD admissions in the country, in terms of alignment with what CIHI reports.
The conundrum was whether we should use the COPD admission rates that CIHI reports as an input parameter to the model; that is, make sure the risk equations are calibrated such that they give us the number of COPD admissions that CIHI reports. When you are developing a model from the ground up, you can treat a statistic like COPD hospitalization as an output of the model or as an input. If I am doing a good job of developing the COPD model without using the numbers reported by CIHI, I want to see that my model predicts hospitalization rates close to the numbers reported by CIHI. Otherwise, there is something wrong somewhere; something is not working very well under the hood of the model.
So, again, the big conundrum I had was whether to ask the model developers to incorporate those CIHI numbers and calibrate the risk equations, or whether I should wait until the end, without even telling them my calibration idea, and then ask them how the model was performing. I decided on the latter. I decided to use the CIHI hospitalization numbers as a so-called validation target.
The model structure would have been very different if we had used the CIHI hospitalization data as an input parameter; it would have gone into the risk equations. Instead, we used it as a validation target. It ended up working well, but that was almost a random decision. I don't know to this day how other investigators would decide that one. Whether to use a piece of data as an input or as a validation target in a reference model is an open question, and I don’t think there is any hard, scientific answer to it.
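Here is a minimal sketch of that distinction, with invented numbers in place of actual EPIC outputs and CIHI figures: the external statistic is withheld from the model's inputs and only compared against its predictions afterwards.

```python
# Minimal sketch: external statistic used as a validation target
# rather than a calibration input. All numbers are invented, not
# actual EPIC outputs or CIHI figures.

model_predicted = 9.4    # model-projected COPD admissions per 1,000 person-years
external_target = 10.0   # externally reported rate, withheld from model inputs

relative_error = abs(model_predicted - external_target) / external_target
print(f"Relative error against validation target: {relative_error:.1%}")

# Small error: independent evidence that the model's machinery works.
# Large error: something is wrong under the hood, so revisit the model
# rather than simply forcing its outputs to match the target.
```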
SH: The challenge there is how to understand how it changes the model’s usefulness, or reliability, or people’s confidence in the results, or the results themselves.
MS: I think it touches on the notion of model reliability and credibility. If we had used the CIHI reports, the model would have predicted a hospitalization rate that is very close to the CIHI report, but, of course, that’s not an achievement. Instead, I hid that piece of data from the model development team and asked them to make a projection and then compare what came from their model with what came from CIHI. Once they were close to each other, my personal confidence in EPIC went sky high. This is one of those few instances where, as with a weather forecasting model, we had a chance to see if the model makes proper predictions.
A decision model is built on a lot of data points. Whether you want to use all those data points as input parameters, or reserve some of them to judge the predictive power of the model, I think that question is in the mind of every modeller.
SH: What are the next steps for EPIC?
MS: A model like EPIC is really a living entity and requires constant attention. Every few weeks there are new studies that have some relevance for EPIC; for example, we should update this input parameter or recalibrate that module because a new study has come out. Evidence gets updated and EPIC needs to stay on top of the evidence. Nobody will use EPIC if it is based on old assumptions. And, of course, new questions are coming up. As we move forward, there are new technologies, and people are talking about new biomarkers for COPD that will define how the disease progresses. That will require us to go back to the drawing board and incorporate those new understandings of the natural history of COPD into the model.
In general, I would say the plan for EPIC is to make iterative updates and keep it targeted at topical questions. One is to investigate the benefits and harms of preventive antibiotic therapy for COPD. It’s a very topical question, because a major clinical trial came out a few years back showing that if you give this broad-spectrum antibiotic to patients with COPD, you will reduce their exacerbation rate. But is it cost-effective? Is it net-beneficial? In order to answer this question, we’ll have to keep EPIC up to date as new sources of evidence change some of the modules, maybe even add new components to the model.
So the plan is to keep it going. As you know, our COPD research program in the lab has expanded a little bit. For example, through the HESM grant we have put together a patient committee. Right now, we’re talking about risk communication. We are exploring how they want to see the results in a way that is interpretable to them. Once we pass that milestone, the next question is whether they are happy with the outcomes, the way our prediction platforms are projecting them, or whether we are missing something.