Hospitals are institutions that by their nature hold life in high regard, but in recent months, in western hospitals at least, the focus has switched to death. The publication of hospital death rates in the US, the UK and Australia has raised serious questions about the usefulness and accuracy of such reports.
Some practitioners and medical bodies fear that the publication of mortality rates could unfairly damage the reputation of medical institutions serving certain demographics, catering for unforeseen events or treating severe illnesses.
Nevertheless, consumer choice seems to be overriding such concerns, and this method of rating could in future even extend to private healthcare organisations if medical insurers have their way.
Death rate statistics
Late in August 2008, a US website offering comparisons of care measures took the debate one step further. Using publicly released death rates for all hospitals across numerous care measures, from heart attack to pneumonia, the Centers for Medicare and Medicaid Services site allowed users to compare up to three hospitals at once.
The data are submitted voluntarily by registered hospitals and cover a range of care practices – from the administering of medication before incisions to the big “D” itself.
What it doesn’t show is where discrepancies behind these overall figures may lie. For example, seriously ill patients in rural areas of the US are often treated in local hospitals first before being moved to larger care centres later in their treatment. This is, of course, likely to skew the relationship between the care a hospital provides and its recorded death rate.
The UK, having recently begun publishing hospital death rates through NHS Choices, is also discussing a “scorecard” that could work as a “composite measure of quality of each speciality in each institution”. Death rates released in June have already led to reports that some NHS Trusts fell well below the survival rates expected for different types of hip, knee and heart surgery. Figures for 50 further areas are to be published in the coming months.
Fear over fact?
What do these figures ultimately prove, and how will they help hospital management in the long run? Do such figures take into account the health of individual patients before they are admitted to hospital, bearing in mind that one in every 11 deaths in the UK is now linked to patients being overweight?
Do they take into account the effects of staff shortages, whose causes may be as varied as a nursing or cleaning staff strike? And how can they fairly measure a patient’s increased likelihood of death even before admission?
Openness and accountability are to be welcomed. However, before allowing an unsuspecting public to make what is perceived as an informed health decision – one that could, in effect, be a matter of life or death – the parameters of what they are choosing should be examined carefully.
Statistics built on a bell curve, allowing more random variables (such as the issues described above) to be taken into account, would surely be a better way of giving patients more control over their choice, while offering a ‘real’ view of a hospital’s performance.
Perhaps then, in places like Australia, where private hospitals are now being called on to produce in-house statistics for use by government and medical insurers, performance data could actually serve as a guideline for the ongoing management of a hospital, its staff, training and infrastructure, instead of being a panic button for the media and consumers alike.