The ‘worried well’ – people who pester their doctors about minor ailments or have the money to pay for regular, and often unnecessary, tests and screening – are a phenomenon well known in the US, and, against the backdrop of an increasingly affluent ageing population, one becoming increasingly common in Europe and the UK.
Last year UK advocacy group the Self Care Campaign, which includes representatives from organisations such as the Royal College of General Practitioners, the NHS Alliance, the National Association of Primary Care and over-the-counter medicines trade body the PAGB, estimated that common treatable ailments accounted for almost a fifth of GP appointments and were costing the NHS around £2bn a year.
On top of this, and a key issue for radiologists, radiographers and the diagnostic imaging industry, is the rise in private screening tests. On the face of it, such private testing, which will often range from simple blood tests to whole-body scans, appears to be a good idea: there is clearly a demand from those able to afford it, it can provide reassurance and regular health monitoring, and does not directly place a burden on the NHS.
Except, according to the British Medical Association (BMA), this is a much too simplistic argument because private screening tests have a whole series of potentially negative ramifications, for patients as well as doctors and imaging professionals.
The BMA, which has been campaigning against the spread of private screening tests since 2005, is becoming increasingly concerned about private providers failing to explain fully to patients the potential benefits, risks and limitations of such testing, meaning they cannot make informed choices. It is also becoming increasingly worried about what knock-on effect this ‘industry’ is having on state-run NHS services, with GPs and secondary care providers often having to pick up the pieces after a patient has been through a private screening process.
As Professor Vivienne Nathanson, director of professional activities at the BMA, explains: “I think the key issue for us is that, when a well individual is persuaded to go straight for screening without it being contextualised, then what can happen is that you might find abnormalities that, while they are not the norm, are not significant. It may be that you find abnormalities that are essentially false positives.
“Then what happens is that you go back to your GP and ask for a follow-up, which may have its own risks and possibly starts to get expensive. It is possible then that the NHS may, as a result, pick up tiny numbers of people who need treatment and are symptomatic or have no symptoms at all.”
On top of this there are worries about radiation exposure that can result from certain types of screening – whole-body CT scans, for example – especially if such screening is medically unnecessary.
“You will also get patients who have had a lot of tests and whole-body CT scans and then discount significant symptoms because they’ve just had a health test and believe they are all clear. But that is exactly the point when they should be seeing a GP who should be starting to interrogate their symptoms,” adds Nathanson.
“Moreover, if you tell people there are going to be a lot of false negatives and positives, then they will want to do a lot of follow-up testing to ensure that nothing is missed. People seem to think these tests are foolproof.
“If you have a lot of people having CT scans, which are picking up abnormal variations, you have to start spending a lot of time chasing up tests that are unnecessary, cost money and resources, and can cause a lot of stress for the patient. Then there is the fact it means patients come to their GPs feeling frightened and stressed, and it is the GPs and the NHS that have to deal with it. It is difficult sometimes for GPs to come up with an answer.”
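Nathanson’s point about false positives is, at bottom, a base-rate effect: when a test is applied to a well population in which the condition is rare, even an accurate test produces far more false alarms than genuine findings. The sketch below illustrates this with assumed figures – the sensitivity, specificity and prevalence values are illustrative, not data from any real screening programme.

```python
# Illustrative only: the sensitivity, specificity and prevalence figures
# below are assumptions chosen to show the base-rate effect, not data
# from any real screening programme.
def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for one round of screening."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity          # real cases the test catches
    false_pos = healthy * (1 - specificity)    # well people flagged in error
    return true_pos, false_pos

# A seemingly accurate test (90% sensitive, 95% specific) applied to
# 10,000 well individuals when only 0.5% actually have the condition:
tp, fp = screening_outcomes(10_000, 0.005, 0.90, 0.95)
print(tp, fp)                  # 45 true positives vs 497.5 false positives
print(round(tp / (tp + fp), 2))  # positive predictive value ~0.08
```

On these assumed numbers, more than ten people are flagged in error for every genuine case found – and each false alarm is a candidate for the follow-up testing, cost and anxiety Nathanson describes.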
Within the UK at least there is no single body that regulates this market, although there are ionising radiation regulations. From October 2010 all providers of screening services in England were required to register with the Care Quality Commission, the independent body created in 2009 to regulate and inspect health and social care services provided by the NHS, local authorities, voluntary bodies and private companies.
In the same month, the Department of Health’s UK National Screening Committee published guidance designed to help the public make decisions about the growing area of private screening tests, suggesting a set of eight key questions that patients should be asking of providers:
- what do I hope to gain from having this test?
- can I get the information I need another way?
- can I get this test on the NHS?
- is the screening company properly regulated?
- what does the fee cover?
- can having the test do more harm than good?
- what if the test results pick something up?
- what if there are no clear results from the test?
Overall, emphasises Nathanson, what is required is a more centralised, proactive, regulatory framework within this area, including around the marketing of information to patients.
“The government, in our view, needs to be looking at this area and considering whether regulation is necessary,” she says. “The reason for this is it is the government that picks up the cost of these tests, because it is the NHS normally that has to do any second and subsequent testing to isolate what may or may not be significant abnormalities.
“It does need to be looked at centrally and ministers need to look at what the knock-on costs here are. There are also emotional and financial costs for patients. Even if you are told that an abnormality is irrelevant it is still going to cause fear, tension and stress. I think the government needs to consider best practice guidelines but it should also be prepared to regulate, especially if we continue to find the NHS is being exposed to considerable financial risk by unregulated agencies.
“The first thing that needs to be done is to gather data to create a clearer picture of what the real situation is and the real cost to the NHS. Only then can government start to make a decision on the best way to move forward.”
The US perspective
The US leads the way in the use of computed tomography (CT) and magnetic resonance imaging (MRI) as diagnostic tools. The country’s healthcare system promotes investment in these expensive devices at levels most other developed nations cannot hope to achieve. But questions have been raised for a number of years about the benefits of carrying out such procedures.
The risks associated with the radiation from CT scans – equivalent to 200 chest X-rays – are well documented. According to the most recent figures, an estimated 72 million CT scans were performed in the US in 2007 – up from three million in 1980 – and estimates compiled by the Archives of Internal Medicine suggest that could translate into 29,000 Americans developing cancers as a direct result. Research for the New England Journal of Medicine suggests that one in ten US citizens is scanned every year.
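The scale of that projection is easier to grasp as a per-scan figure. Using only the scan and cancer estimates quoted above, a back-of-the-envelope calculation gives the implied risk per scan:

```python
# Back-of-the-envelope check on the figures quoted above. The scan and
# cancer counts come from the article; the rest is simple arithmetic.
scans_2007 = 72_000_000       # estimated US CT scans in 2007
projected_cancers = 29_000    # Archives of Internal Medicine projection

risk_per_scan = projected_cancers / scans_2007
print(f"{risk_per_scan:.5f}")   # 0.00040 -> about 0.04% per scan
print(round(1 / risk_per_scan)) # roughly 1 cancer per 2,483 scans
```

A projected risk of roughly one in 2,500 per scan is small for any individual, but at 72 million scans a year it aggregates into the population-level figures that concern researchers.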
Rita Redberg is a cardiology and women’s health professor at the University of California, San Francisco (UCSF) and was appointed editor of the Archives in February 2009. Having long worked to bring outcome assessment and technology evaluation into mainstream medical dialogue, she believes there also needs to be a critical analysis of the role imaging plays in the diagnostic process in the US. In addition to the direct risk posed by radiation, the increasing resolution of scanners means they are able to pick up smaller anomalies. In trying to chase down every lead, doctors may decide to order repeat scans and invasive follow-up procedures such as biopsies.
Such over-eager approaches lead to inefficient use of resources and expose patients to a range of risks, from anxiety to complications arising from unnecessary surgery. Redberg does not deny that the technologies have an important role to play in healthcare but she sees a need for further research.
“We really don’t know what the benefits are of a lot of the additional imaging,” she explains. “We know its use has increased exponentially in the last decade and there isn’t any data to support that it’s helping patients. We’ve gotten very advanced in medical imaging but we need good data that these tests lead to better patient care.”
Despite the question marks surrounding the benefits of scans, Redberg believes that patients are seduced by the powerful technology and high-quality images it can produce of the previously unknowable world beneath their skin. But, she argues, so are doctors. Clinicians have an additional motivation: scanning early on can protect them from litigation further down the line.
“Right now, medical and general culture is focused on a fascination with technology and a belief that new gizmos must be a superior way of doing things,” she says. In the face of these pressures, a number of plans have been proposed as to how imaging can be made safer.
Interviewed by Journal Watch, a website published by NEJM, UCSF radiologist Rebecca Smith-Bindman suggested that dosages of radiation ought to be recorded, as is the case for drugs. This protects patients directly from overexposure and helps drive dosages down across the board. In Canada, the UK and other European states, keeping records has helped to demonstrate that clear images can be achieved with lower-powered scans. The American College of Radiology supports a similar concept, calling for use of ‘as low as reasonably achievable’ dosages.
Although guidelines could have an important role to play, a greater understanding of the risks posed by scanning will be needed to convince patients that imaging is not always the right option. In June, Redberg launched a series in the Archives called Less is More, which aims to provide evidence of the overuse of healthcare services and to develop strategies for containing it. She appointed her UCSF colleague Deborah Grady to head up the section.
In their introductory essay, the doctors noted that areas of the US with lower levels of provision perform better on certain indicators of health. “Almost all tests, imaging procedures, drugs, surgery and preventive interventions have some risk of adverse effects,” they explained. “In some cases, these harms have been proven to outweigh benefits.”
To ground the section in individual experiences, Grady and Redberg are calling for clinicians to submit case studies where excessive care has caused harm. Redberg says that feedback on the section has been positive so far but she knows that changing professional culture and patient attitudes will take time.
Although it is less powerful than the National Institute for Health and Clinical Excellence (NICE), the US Centers for Medicare and Medicaid Services (CMS) can exert some influence by issuing decisions on how doctors will be reimbursed by the two federal schemes. Since 2000, consideration of whether treatment is ‘reasonable and necessary’ has been part of its decision-making. It decided not to cover CT colonography as evidence was based on populations younger than those enrolled in Medicare, but it does pay out for coronary CT angiography despite similar utility issues.
However, the CMS seems to be paying greater attention to evidence-based decision-making and it should be strengthened by the Patient Protection and Affordable Care Act or ‘Obamacare’. The US president appointed Don Berwick, CEO of the Institute for Healthcare Improvement, to head up the body during the summer Congressional recess, circumventing the need for him to be approved by the Senate and filling a position that had been vacant since 2006.
While Berwick is a respected physician, Republicans were concerned by comments he made apparently favouring rationing and had blocked his appointment. For Redberg, the bitterness of the debate demonstrates a need to focus on the quality of care, an element that has often been overshadowed. “We’re not talking about costs at all,” she says. “There’s a tremendous fear of rationing but we’re trying to educate people that every time a doctor doesn’t recommend a test, there could be a good reason for it.
“It’s an aspect of the patient-centred decision-making the Institute of Medicine has been urging. Part of the informed consent has to be a real conversation about what the benefits and risks associated with testing are. We’re not always doing that all the time or as completely as we could be.”
While the logic of Redberg’s position is clear, in reality it is not so simple to disentangle cost from quality. The US system offers financial incentives for doctors to sign off on tests, as they are paid on a fee-for-service basis. This makes ordering a scan easier and more profitable than spending extended sessions discussing the alternatives with patients.
For the last two years, the CMS has been working to reduce the rates at which MRI and CT scans are reimbursed by announcing new Medicare Physician Fee Schedules. The fee is calculated largely on the basis of an equipment utilisation rate and the assumption is that the more a device is used, the lower the rate of payment need be. In 2009 the organisation announced the assumed utilisation rate would jump from 50% to 90%. After the evidence for this change was questioned, the 2010 guidelines propose revising the figure to 75%. In addition, the deduction for multiple procedures in the same session was upped from 25% to 50%.
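The mechanics behind the utilisation-rate change can be sketched in a simplified way: the equipment component of the fee spreads a scanner’s cost over the hours it is assumed to be in use, so assuming higher utilisation lowers the per-scan payment. All dollar figures and hours below are illustrative assumptions; the real Medicare Physician Fee Schedule formula has many more components.

```python
# A simplified sketch of why the assumed utilisation rate matters.
# All dollar figures and hours are illustrative assumptions, not the
# actual Medicare Physician Fee Schedule formula.
def equipment_cost_per_scan(annual_equipment_cost, assumed_utilisation,
                            available_hours_per_year, scan_hours):
    """Per-scan share of equipment cost under an assumed utilisation rate."""
    billable_hours = available_hours_per_year * assumed_utilisation
    cost_per_hour = annual_equipment_cost / billable_hours
    return cost_per_hour * scan_hours

# A $1m scanner amortised over five years ($200k/year), 2,500 available
# hours a year and a 30-minute scan, under the old and new assumptions:
old = equipment_cost_per_scan(200_000, 0.50, 2_500, 0.5)
new = equipment_cost_per_scan(200_000, 0.90, 2_500, 0.5)
print(round(old, 2), round(new, 2))  # 80.0 vs 44.44 per scan
```

Under these assumed figures, raising the assumed utilisation from 50% to 90% cuts the equipment component of the fee by almost half – which is why providers contested the evidence behind the change.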
In order to promote investment in scanners and help defray their cost, in-office imaging services were granted an exemption from rules prohibiting doctors from referring patients to facilities in which they have a financial stake. The 2010 update would oblige clinicians recommending a scan to supply patients with a list of alternative providers.
Figures from the Organisation for Economic Co-operation and Development (OECD) suggest that utilisation rates are not the critical issue. While the US scans at twice the average rate of OECD members, it also has almost double the number of devices per capita. “There’s always inertia: people like to keep doing things they’re familiar with,” Redberg says. “With CT scans, once you’ve invested $1m in a device you are consciously or unconsciously going to want to keep using it.”
For that reason, Redberg returns to the need for a cultural shift. As a young doctor she studied at the London School of Economics and believes Europeans have a drastically different approach to healthcare and its associated costs.
“We have a disconnect between people not wanting to pay more for health insurance and thinking that everybody has to have every test,” she explains. “People don’t see that there’s a connection. We have a bigger problem in the US of not getting good value for the dollars we’re spending.”
The challenge is finding ways of restricting use of imaging to situations where it is appropriate but still providing practices and hospitals with a reason to invest in costly devices. One measure that could help would be for the CMS to offer different rates of reimbursement for different population groups, injecting a measure of utility into the existing setup.
“Most of the changes I see now are to do with cuts,” Redberg says. “They’re not really looking more closely at a tiered structure, which would be more challenging. For someone who has had a heart attack we know a scan might be of benefit. But for someone who is asymptomatic, where there’s no data to show one would be beneficial, we could have a different reimbursement structure.”
Redberg concludes that the US system is unsustainable. “People have to realise we need this change and that it will be better for patients,” she says. “Technology is great when it is used for the right person at the right time and there are ways to go down a road of using it appropriately.”
The debate about how to use imaging involves many parties and interests, and poses significant questions about the nature of American healthcare. Despite the challenges, the increased focus on the use of scanning has already opened up a more critical dialogue in the profession and one that could lead to a better balance between access and quality.