If you make a list of risks in order of how many people they kill each year, then list them in order of how upsetting they are to the general public, the two lists will be very different. There are risks that kill a lot of people without upsetting many: not just flu but food poisoning, smoking, overeating, not exercising, etc. And there are risks that upset a lot of people without killing very many.
Both problems frustrate risk experts and make them irritated with the public for being afraid of the "wrong risks." Risk communication experts can't completely cure this mismatch, but we can help the experts understand why the public so often seems to get it "wrong."
The core problem is one of definition. To the experts, risk means expected annual mortality (or morbidity). To the public, risk means much more than that. Let's redefine terms: call the death rate (what the experts mean by risk) "hazard." Gather together all the other factors that make people frightened, angry, or otherwise upset about a risk and label them, collectively, "outrage." Risk = Hazard + Outrage. The public pays too little attention to hazard; the experts pay absolutely no attention to outrage. Not surprisingly, the two groups rank risks differently.

Risk perception scholars have identified more than 20 "outrage factors." Here are some of the main ones:
Voluntariness

A voluntary risk is much more acceptable to people than a coerced risk, because it generates no outrage. Consider the difference between getting pushed down a mountain on slippery sticks and deciding to go skiing.
Control

Almost everybody feels safer driving than riding in the passenger seat. When prevention and mitigation are in the individual's hands, the risk (though not the hazard) is much lower than when they are in the hands of a government agency.
Fairness

People who must endure greater risks than their neighbors, without access to greater benefits, are naturally outraged, especially if the differences are grounded in politics, poverty, or race. An unfair risk is a big risk. The same is true of countries that are forced to endure risks that other countries don't have to bear.
Trust

In a high-tech world, people often doubt their own ability to distinguish dangerous risks from insignificant ones. But we feel confident that we can tell trustworthy sources from those who distort or withhold information. So we use trust, credibility, and candor as stand-ins for hazard. Why "buy" a risk assessment from someone you wouldn't buy a used car from?
Responsiveness

Does the corporation or government agency that imposes the risk or tells you it's trivial seem concerned, or arrogant? Does it tell the community what's going on before decisions are made? Does it listen and respond to community concerns?
Morality

Some risks aren't just harmful; they're evil, and they remain evil even when they're not especially harmful. Talking about risk-benefit or risk-cost tradeoffs sounds very callous when the risk is morally relevant. Imagine a police chief insisting that an occasional child molester is an "acceptable risk."
Familiarity

Exotic, high-tech facilities provoke more outrage than familiar risks (your home, your car, your pot belly, the annual winter flu season).
Memorability

A memorable accident (Bhopal or Chernobyl, for example) can make some risks easy to imagine for decades, and that in turn makes those particular risks a bigger source of outrage and thus more risky as we have defined the term. A potent symbol can do the same thing: a drum of some chemical or, better yet, a leaking drum of chemical wastes.
Dread

Some illnesses are more dreaded than others; compare AIDS and cancer with, say, emphysema. The long latency of most cancers and the undetectability of most carcinogens add to the dread.
Diffusion in time and space
Hazard A kills 50 anonymous people a year across the country. Hazard B has one chance in 10 of wiping out a neighborhood of 5,000 people sometime in the next decade. Risk assessment tells us the two have the same expected annual mortality: 50. "Outrage assessment" tells us A is probably acceptable and B is certainly not. Catastrophic risks provoke a level of outrage that chronic risks just can't arouse.
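The expected-mortality arithmetic above can be checked in a few lines of Python. This is only a sketch of the calculation; the probability, population, and time-span figures are the hypothetical ones from the text, not real data:

```python
# Hazard A: kills 50 anonymous people a year across the country.
hazard_a_annual_deaths = 50.0

# Hazard B: one chance in 10 of wiping out a neighborhood of 5,000 people
# sometime in the next decade. Spread the expected deaths over the 10 years.
probability = 0.1          # 1 chance in 10
deaths_if_it_happens = 5000
years = 10
hazard_b_annual_deaths = probability * deaths_if_it_happens / years

# Both hazards have the same expected annual mortality: 50.
print(hazard_a_annual_deaths)  # 50.0
print(hazard_b_annual_deaths)  # 50.0
```

The arithmetic is the point: by the expert's measure the two risks are identical, yet the catastrophic, concentrated hazard B carries far more outrage than the chronic, diffuse hazard A.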
These outrage factors are not distortions in the public's perception of risk; they are intrinsic parts of what we mean by risk. Since the public responds more to outrage than to hazard, risk managers must try to get people more outraged about serious hazards by appealing to outrage factors like the ones listed. Successful campaigns against drunk driving and passive smoking are two of many examples of raising public concern about serious hazards by feeding the outrage. Similarly, to decrease public concern about modest hazards, risk managers must work to diminish the outrage. When people are treated with honesty and respect for their right to make their own decisions, they are a lot less likely to overestimate small hazards.
There is a peculiar paradox here. Risk experts often resist the pressure to consider outrage when making risk management decisions, or even risk communication decisions. They disparage the "irrational" public and insist that "sound science" should wholly determine what they do and what they say. But we have decades of sound science indicating that voluntariness, control, fairness, and the rest are important components of people's definition of risk. When a risk manager continues to ignore these factors, and continues to be surprised by the public's response, it is worth asking just whose behavior is irrational.