Dec 1, 2017
Dr. Pennell discusses survival as an important indicator of the quality of cancer care with author Lawrence Shulman.
Related Article: Survival As a Quality Metric of Cancer Care: Use of the National Cancer Data Base to Assess Hospital Performance
Support for JCO Oncology Practice podcast is provided in part by AstraZeneca, dedicated to advancing options and providing hope for people living with cancer. More information at AstraZeneca-us.com.
Hello, and welcome back to the ASCO Journal of Oncology Practice
podcast. This is Dr. Nate Pennell, Medical Oncologist at the
Cleveland Clinic and Consultant Editor for the journal. Over the
last decade, there's been an important movement to try and improve
the quality of medical care in the United States. But to do that,
of course, we have to have reliable measures of quality.
But how do you really do that? Is it enough to measure compliance
with guidelines or expert recommendations for high quality care?
Ultimately, you might think that high quality care should lead to
improvements in survival for cancer patients. And naturally, that
leads to the question of whether survival could be used to compare
the quality of care between different practices or different
hospitals.
Joining me today to talk about this topic is Dr. Larry Shulman,
Deputy Director for Clinical Services of the Abramson Cancer Center
at the University of Pennsylvania and leader of the Cancer Quality
Program for the University of Pennsylvania's health system. He's
also the former chair of ASCO's Quality of Care Committee, former
chair of the Commission on Cancer's Quality Integration Committee,
and currently the chair of the Commission on Cancer.
Today we'll be discussing his recently published paper, Survival as
a Quality Metric of Cancer Care, Use of the National Cancer
Database to Assess Hospital Performance. Larry, thank you so much
for joining me today.
Thanks for having me.
So can you give us a little bit of background? What are some of
the challenges in measuring quality of care, and what led you
eventually to do this study?
Well, a decade ago, we were doing very little to measure the
quality of cancer care in any respect. And then ASCO, in the early
2000s, started the QOPI program. And at around the same time, the
Commission on Cancer began a quality program as well. And as you
mentioned, most of the quality metrics that are included in those
two programs are process measures: whether a patient with a certain
stage of disease gets the appropriate treatment, and so on. And those
are very important metrics, and we've learned a lot from measuring
those.
But at the same time, people have complained, including the public
and the regulators, that we really need to know outcome quality
measures. And the most important outcome measure for many people is
survival. And survival is really, presumably, a culmination of all
the aspects of care, not just whether you gave a particular
treatment, but the other aspects of care that help patients to
either do well or not do well. So that appeared to be an important
measure.
I will say that there are a number of centers around the country
that published their own survival metrics on their website with a
variety of comparisons. And we were concerned that that was not
really truth in advertising, and we wanted to understand how to
measure survival at the hospital level and also at the level of
hospital type, the class of hospitals. And that's what led us to do
this study.
And that makes perfect sense. I mean, when you're just measuring
metrics, perhaps through ASCO's QOPI program, ultimately you're
making an assumption that that's leading to better outcomes.
But it would be nice to have some proof that that was true.
So could you please walk us through your paper a little bit? So
what were you trying to accomplish with this particular study?
We queried the National Cancer Database. The National Cancer
Database includes cancer registry data from 1,500 hospitals across
the US that are accredited by the Commission on Cancer. And our
estimate is that that covers about 70% of the cancer patients in
the country. So this is a very robust database, and currently there
are about 36 million patients in this database.
So we decided to look at patients with two different diseases, and
we had very specific reasons for including them. We looked at
patients who had Stage 3 breast cancer, and we did that because
those patients ordinarily receive surgery, systemic therapies, and
radiation. And we wanted to assess some disease where all the
modalities were involved.
In addition, most of the technologies that we need to treat breast
cancer patients are available at hospitals throughout the country,
community hospitals as well as academic centers. The capabilities
to do good breast surgery and to give the types of systemic therapy
and radiation we use for breast cancer are widely available.
We also chose advanced non-small cell lung cancer, and we did that
because that's a changing paradigm. Genomic testing to identify
patients who have targetable mutations is not widely used
throughout the country, and we wanted to see whether there were
differences that we could assess in different hospitals and
different hospital types.
And so what we did was we looked at both unadjusted survival, which
is basically the raw death rate among the patients cared for with a
particular disease at each hospital. And then we also looked at
risk-adjusted survival because the patient populations are not the
same at all hospitals. And so we risk adjusted for a number of
variables, including age and gender, ethnic background,
socioeconomic status, comorbidities, and insurance status.
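For readers who want a concrete sense of what risk adjustment of this kind looks like, the following is a minimal sketch using a Cox proportional hazards model in Python's lifelines library. The synthetic data, column names, and model choice are illustrative assumptions, not the statistical methods actually used in the paper.

```python
# Minimal sketch of risk-adjusted survival analysis with a Cox model.
# All data below are synthetic and the column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "months": rng.exponential(36, n),          # follow-up time in months
    "died": rng.integers(0, 2, n),             # 1 = death observed, 0 = censored
    "age": rng.normal(62, 11, n).round(),
    "female": rng.integers(0, 2, n),
    "comorbidity_score": rng.integers(0, 4, n),
    "insured": rng.integers(0, 2, n),
    "academic_center": rng.integers(0, 2, n),  # hospital-type flag of interest
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")

# exp(coef) for 'academic_center' is the hazard ratio for hospital type
# after adjusting for the patient-level covariates above.
print(cph.summary[["exp(coef)", "p"]])
```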
So I won't ask you to delve into the details of how the analysis
was done, but what did you find?
So we found three major points. One is that we looked at survival
at the individual hospital level across the 1,500 hospitals. And
when we did risk adjustment, we found that very, very few of the
hospitals had survivals of their patients that were either
statistically better or statistically worse than the mean. And in
fact, it turned out to be about 15 hospitals out of the 1,500
hospitals that had survivals that were statistically better or
worse.
And there are two reasons for that. One is that the survivals were
in a pretty tight distribution, so that there wasn't a wide splay
of survival differences among the different hospitals. And
secondly, even large hospitals, comprehensive cancer centers, have
relatively few patients with a particular stage of disease that can
be assessed for survival.
So we felt that at the individual hospital level, this would not be
a good quality metric to distinguish levels of care, and that for
the people who have argued for using survival to assess the quality
of care of a hospital, this is probably not where we want to go.
The second thing we did was to look at hospital types. So we
aggregated hospitals into four groups. One was
NCI-designated Comprehensive Cancer Centers. The second were
academic cancer centers that were attached to a medical school and
a training program. Third was large community hospitals with more
than 500 new cases a year. And the fourth were small community
hospitals with 500 or fewer cases a year.
And what we found was that when we aggregated those hospitals by
these categories, there was a difference in survival. And the best
survivals were seen at the NCI centers, followed by those in the
academic cancer centers. Third were the large community hospitals,
and fourth were the small community hospitals.
We spent a lot of time with the editors of JOP trying to assure
ourselves that the statistical evaluations were valid. And in the
end, they felt they were valid enough, obviously, to publish the
manuscript. So I really think that the findings are real. And the
question is why is there a difference in survival by hospital
type?
And this study doesn't answer that question, but I think it gives
us enough information to ask the question and start to delve deeper
into what might be behind these survival differences. I will say that Peter
Bach and his colleagues from Memorial Sloan Kettering published a
manuscript a year or so ago that had very similar findings. So I
think this is a real finding, and we need to deal with it
nationally.
Yeah, I think that's interesting. And I know that's not the first
time that something like this has been shown, although I don't
recall seeing such a linear breakdown from small center to larger
center to academic center to NCI Comprehensive Cancer Center. Do
you have any speculation about what might be happening, or what
could be done to try to dig further into that?
You know, I'd like to make one point before we get to that. And
that is that the answer to the question of how do we deal with
these differences can't be that all patients should go to NCI
Comprehensive Cancer Centers. The answer should be, we need to
figure out what the differences in care are and try to figure out
how to improve the care in the community hospitals because
patients, most of them, should be able to stay in the hospital near
their home, and the NCI centers and academic centers don't have the
capacity to treat all the patients in the country.
So we need to understand what these differences are. We are in the
process of doing a very deep dive into some of the quality metrics
that the Commission on Cancer uses to accredit hospital programs
and correlate those with survival outcomes, again, by hospital type
to see whether there are correlations between compliance with those
quality metrics and survival outcomes. And that work is underway,
and I'm hoping that the early analysis will be available soon.
No, I think that's a good idea. I know that breast cancer programs
have to go through a fairly rigorous accreditation. And I don't
know whether the NCDB captures whether a center is accredited or
not, but it might be worth looking to see if
that makes a difference.
So we actually do have that information. And there is an
accreditation program, what we call the NAPBC, the National
Accreditation Program for Breast Centers. And those centers tend to
do a little bit better on some of the quality metrics, though we
haven't looked at survival in those centers. But to some extent,
it's a self-selected population. People who are vying for breast
cancer accreditation have a special interest and focus on that
disease.
I'm glad you pointed out that when you talk about large databases
like this and looking at populations of people, that this is not
something where, if you're getting your treatment at a small
community center, you need to immediately leave and go to a big
center somewhere in a big city. You may be getting perfectly
appropriate care where you are, and you can't extrapolate from
populations like this to your individual doctor's practice.
That's absolutely correct.
So one thing that jumps out at me, though, is that I do have the
privilege, as do you, of working at an NCI-designated Comprehensive
Cancer Center. And I know that many of our centers put out these
publications, which are really marketing documents, that show our
benchmark survivals compared to, say, the SEER database or the NCDB.
And the implication is that you're going to get better care and
potentially going to live longer if you come to one of these
centers. So what do you think about that, given what you found in
your study?
Well, frankly, I think it's a bad idea. And in fact, the Commission
on Cancer specifically prohibits the use of survival data in public
reporting, though not everybody follows the prohibition. And the
reason we do that is because we think that the
comparisons are really not valid. And the cancer centers that I
know that have used that in their publications or on their websites
have generally used unadjusted survival, which is even further from
being valid than risk-adjusted survival.
So we would discourage that. We don't think that it's really truth
in advertising, quite frankly. And it's against the Commission on
Cancer's formal policy to use NCDB data in that way.
So what would be your take home message from your study? Do you
think that survival is not going to pan out as a comparator from
practice to practice or hospital to hospital? Or is this just not the
right way to look at that?
No, I think that's correct. I think that we need to tell the payers
and the government and other regulatory agencies that are thinking
about ways to assess the quality of either practices or hospital
programs that at least currently, we don't think that survival is
the appropriate metric. But we do think that it raises a red flag
for how care is being delivered across the country.
And we do think that it's our obligation as a profession-- and I
think the oncology profession should take this initiative-- that we
need to figure out where the opportunities are to assure patients
who walk into any hospital in this country that they're getting top
level care and have an equal chance of survival as if they walked
into another hospital or a Comprehensive Cancer Center. I think, as
a profession, we need to take these data seriously and act on
them.
Is there anything we didn't cover that you wanted to make sure we
highlighted from your paper?
The only other thing I would say-- and this was a little bit of a
surprise-- again, we chose breast cancer because everything should
be available everywhere, and lung cancer, maybe not. But the
findings were identical for the two diseases. The breakdown of
survival by hospital type looked the same for breast cancer and for
advanced non-small cell lung cancer, for whatever that's worth.
So I think as we start to delve into what the factors are that
drive survival, that we need to, again, take that into
consideration. It's not just technology availability or not, but
there must be other factors as well.
Yeah. The challenge of taking something like a pure clinical trial
population and then looking at an entire, general, real-world
population to see whether you're getting the same level of effect is
something I know everyone is interested in doing. We know in
practice that it's a completely different scenario. But it's hard to
delve into exactly what's happening to those patients in the real
world.
Yeah. No, absolutely.
Well, Larry, thanks so much for talking with me today.
Thank you for your interest and for reaching out to me. And we look
forward to the input from our colleagues as people start to read
the manuscript, and also ideas from others about what next steps
might be. So thank you very much, Nate.
And I also want to thank all our listeners out there who joined us
for this podcast. You can read the full text of this paper at
ascopubs.org/journal/jop, published online
November 1, 2017. This is Dr. Nate Pennell for the Journal of
Oncology Practice signing off.