In the 1970s, one of the authors (IC), while holidaying in the USA, broke an ankle and was treated by an orthopaedic surgeon. The surgeon put the leg in a temporary splint, recommending that the next step, once the swelling had subsided, would be a lower leg plaster cast for six weeks. On returning home a couple of days later, IC went to the local fracture clinic, where a British orthopaedic surgeon, without hesitation, dismissed this advice. Putting the leg in plaster, the surgeon said, would be wholly inappropriate. In the light of this obvious uncertainty about which treatment was better, IC asked whether he could participate in a controlled comparison to find out. The British surgeon answered that controlled trials are for people who are uncertain whether or not they are right – and that he was certain.
How can such a pronounced difference in professional opinion come about, and what is a patient to make of it? Each surgeon was certain, individually, about the correct course of action. Yet their widely divergent views clearly revealed uncertainty within the profession as a whole about the best way to treat a common fracture. Was there good evidence about which treatment was better? If so, was one of the surgeons, or neither, aware of the evidence? Or did nobody know which treatment was better? Perhaps the two surgeons differed in the value they placed on particular outcomes of treatment: the American surgeon may have been more concerned about relief of pain – hence the recommendation of a plaster cast – while his British counterpart may have been more worried about the possibility of muscle wasting, which occurs when a limb is immobilised in this way. If so, why did neither surgeon ask IC which outcome mattered more to him, the patient?
There are several separate issues here. First, was there any reliable evidence comparing the two very different approaches being recommended? If so, did the evidence show their relative effects on outcomes (reduced pain or reduced muscle wasting, for example) that might matter to IC or to other patients, who might well have different preferences from his? But what if there was no evidence providing the information needed?
Some clinicians are clear about what to do when there is no reliable evidence. For example, one doctor who specialises in caring for people with stroke has put it this way: ‘I can reassure a patient that I am an expert in stroke assessment and diagnosis, and that I can reliably interpret the brain scan and order the correct tests. I also know from existing research that my patient will fare better if cared for in a stroke unit. However, there is one aspect of management that I and others are uncertain about, and that is whether I should be prescribing clot-busting drugs: these drugs may do more good than harm, but they may actually do more harm than good. In these circumstances I feel it is my duty to help reduce this uncertainty by explaining to my patient that I am only prepared to prescribe this treatment within the context of a carefully controlled comparison.’
‘Too often in healthcare, “geography is destiny”. For instance, 8 per cent of the children in one community in Vermont had their tonsils removed, but in another community, 70 per cent did. In Maine, the proportion of women who have had a hysterectomy by the age of 70 varies between communities from less than 20 per cent to more than 70 per cent. In Iowa, the proportion of men who have undergone prostate surgery by age 85 ranges from 15 per cent to more than 60 per cent.’
Gigerenzer G. Reckoning with risk: learning to live with uncertainty. London: Penguin Books, 2002, p101.