Dear Readers: Models can be useful tools for healthcare decision-making— but how exactly does this work in practice? I talked to Nick Bansback (UBC School of Population Health) about the DecideApp, his team's new tool for communicating model results and informing healthcare decisions. Nick has a unique perspective that I think will resonate with many of us: whether you're a modeller, patient, or member of the public, check out Nick's take on using models in practice! Do you think more models on the Peer Models Network should be paired with tools like the DecideApp? Read the interview with Nick and let us know what you think. —Stephanie Harvard
[SH]: So Nick, if I understand correctly, you and your team have developed a tool that helps communicate the results of clinical prediction models. Is that right?
[NB]: That's right. Clinical prediction models are known by various names: risk models, prognostic models, algorithms, calculators. We've created this open-source software that takes the results of these models and puts them into a tool that clinicians and patients can use in an understandable way.
So much research and money goes into creating these models, but too many do not have the sort of impact that they should be having. Why? Because this final step of actually informing patients’ decisions doesn't happen. This software is trying to bridge that gap.
Can you tell me more about what a clinical prediction model is?
Prediction models come from a large set of data: collected information about people, and what has happened to those people, such as clinical outcomes. The data that’s collected can vary from simple demographic details, such as age and ethnicity, to complex data such as blood counts and genetic profiles. Researchers take these data and they build models to see how these characteristics relate to outcomes. So you can predict, for a new person, given their characteristics, how likely these outcomes might be to happen to them in the future.
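To make that concrete, here is a minimal sketch of what such a model looks like under the hood. This is purely illustrative, with made-up coefficients and a hypothetical outcome; real clinical prediction models (like those Nick describes) are fit to large datasets and carefully validated, but many share this basic shape: characteristics go in, a probability comes out.

```python
import math

# Illustrative only: invented coefficients for a toy logistic model.
# Real models are estimated from patient data and validated before use.
INTERCEPT = -5.0
COEF_AGE = 0.06      # contribution per year of age
COEF_SMOKER = 1.2    # contribution of being a current smoker

def predicted_risk(age: float, smoker: bool) -> float:
    """Return the modelled probability of the outcome for one person."""
    log_odds = INTERCEPT + COEF_AGE * age + COEF_SMOKER * smoker
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic function

# Two hypothetical people with different characteristics:
print(predicted_risk(65, True))    # older smoker: higher predicted risk
print(predicted_risk(45, False))   # younger non-smoker: lower predicted risk
```

The point is simply that the model turns "people like you" (same age, same smoking history) into a number, a probability, which is where Nick's tool picks up.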
How are these models used in practice?
There are different ways these models are applied. In public health, models are often used to predict future occurrence of a disease— so if you have a high likelihood of having the disease, you could use a preventive treatment. For example, a frequently used model calculates the risk of you having a heart attack or stroke in the next 10 years, based on your age and sex and ethnicity and various pieces of information about your other conditions such as having high blood pressure. If you are at high risk of heart attack or stroke, then you might be prescribed a drug like a statin or recommended to change your diet or things like this. That's public health.
In clinical practice, models tend to be used in two different ways. They can be used to decide whether you should have a more invasive procedure to see if you have an underlying disease. Take, for example, lung cancer: the more definitive procedure to determine if you have lung cancer actually has risks in itself, so you don't want to have that procedure unless there is a fairly good chance that you might have lung cancer. So there's a prediction model that uses your age and smoking history and other data to determine what your underlying risk of lung cancer is.
Another way models are used is to help determine if you should use a treatment. For example, they can help patients with COPD know if they have a high risk of developing bad outcomes in the future. The results can help patients and clinicians determine what impact lifestyle changes, such as stopping smoking, might have on future risk of these outcomes. Or whether, in taking preventative treatment—which has risks itself—the benefits might outweigh those harms.
What’s the relationship between a clinical prediction model and a patient decision aid?
A prediction model tends to end with a ‘risk’, a ‘probability’ of a poor outcome—that is, whether an individual will have a future outcome or event over a certain time period. Sometimes that number, that probability, is all you need. If your probability is 0%, or 100%, it can be pretty simple to know what to do next. But more often than not, this number needs to be contextualized with the options that are available, how the patient feels about those options, the features of those options, and the consequences of those options.
So that's what a decision aid does. It embeds the risks into a tool that actually helps inform the decision that needs to be made. It combines information on the options with what matters to the patient, so that a clinician can make a shared decision with their patient.
For example, if somebody has a 24% chance of developing severe lung disease over the next 10 years, what does that 24% mean? And how does this compare to the risk of side effects from taking preventative treatment? How much does that preventative treatment actually reduce my risk of severe lung disease? Does it go to 0% or does it go to 10% or 20%? Then the treatment itself: is the treatment easy to take? Does it cost money? There are always other factors that go into a decision aid.
So to clarify: your tool both helps to present model results and it helps to create decision aids?
The right tool will depend on different contexts. For some contexts, a simple clinical prediction tool that just describes the risk might be sufficient—what’s wanted by both clinicians and patients. But in other contexts, just that risk may not be enough. That's where a decision aid might be more useful. So our tool has a number of different options. It can go from just describing risks, to actually understanding patients’ values, describing all the options, and creating short summaries that can then be used by clinicians in consultations.
Does the software have a name?
Yeah, we call it the DecideApp, and you can read more at decideapp.org. You can download it to run on your own servers, or use the version we are hosting on UBC servers as well.
Does your tool work with any of the clinical prediction models on the Peer Models Network?
Right now we've just created a demonstration of how it can be used with one of the models on the Peer Models Network (the ACCEPT model). That’s just for demonstration, but it could be applied to any of the models on the Peer Models Network, or to any clinical prediction model that people are interested in.
That said, there's no cookie cutter approach to these tools. Each one needs to be considered carefully: how it might be used with clinicians, the patients that might be using it, the situation or circumstances they might be using it in. There needs to be some consideration of what needs to go in and how the risk needs to be interpreted. But once you've done that stage, the software can be used to create that tool.
It’s important to know that clinical prediction models can’t “predict the future” for an individual patient. How do you think that should be explained, or affect how models are used in practice?
That's a great question about how to describe risk. Research shows that most clinicians explain risk poorly to patients. It's almost impossible to describe what will happen to you as an individual. It's better to explain risk in terms of what happens to people like you: “Imagine there are a hundred people like you. This is what our best estimates expect will happen to those hundred people.” What these prediction models are really doing is looking back. What happened to people like you before might be useful for understanding what happens to you in the future.
Were patients involved in developing the DecideApp?
We've developed nearly 20 different tools with this software so far and in nearly all of them patients have been involved in various ways in the development. And I would really recommend that people who are considering developing their own tool engage patients early and throughout the process.
One example I can think of is one of our risk graphics. You may have seen these graphics before. It’s an image of a hundred people, with some of the people illustrated in a different color—to describe the number of people who will have an event, based on probability. Research shows that these graphics are helpful for people to understand risks. But when we worked with patients and showed them this graphic, we still found a good proportion of people didn't understand what it meant. So we developed an animated version of this graphic with patients, which describes what it means for them. We wouldn't have done that without patients being involved in the development of these tools.
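For readers who haven't seen one, the graphic Nick describes is often called an icon array. A minimal text-only sketch of the idea is below; this is not the DecideApp's actual graphic (which is a polished, animated image developed with patients), just an illustration of how a probability becomes "X out of 100 people":

```python
def icon_array(risk: float, n: int = 100, per_row: int = 10) -> str:
    """Render a probability as a grid of n person-icons.

    'X' marks people expected to have the event, '-' those who are not.
    Illustrative sketch only, not the DecideApp's graphic.
    """
    affected = round(risk * n)
    icons = ["X"] * affected + ["-"] * (n - affected)
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, n, per_row)]
    header = f"{affected} out of {n} people like you are expected to have the event"
    return header + "\n" + "\n".join(rows)

# The 24% example from earlier in the interview:
print(icon_array(0.24))
```

Even in this crude form, the grid makes "24%" concrete: 24 marked figures out of 100, which is exactly the "hundred people like you" framing Nick recommends.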
If we paired the Decideapp with models on the Peer Models Network, could patients give input on their experience using them?
Yes! I think all these tools that use the results of models need to always be improved upon. I think we also need to recognize that no one tool will be the best way of describing the model results to all patients. We might actually realize that we need to create different versions of these tools, for people who are more or less numerate, or who have a preference for graphics over writing, for example. So that we create something that is accessible to more people. That's where getting feedback from patients about their experiences—both good and bad—would be really helpful.
If the sky was the limit, what would the future of this tool look like?
If the sky was the limit, I would ensure that every research project that is developing a clinical prediction model uses a tool like ours. And works with patients from the very start, so that the research has the highest chance of actually being implemented into practice.