Wanna Know How Politics Makes Us All Dumber?

There’s a basic assumption that underlies politics in the U.S. It hides just under the surface of every speech, editorial, article, essay, blog and talking-head “news” program. It’s assumed to be the very core of our Constitution.

It’s the belief that most of our bitter political battles boil down to “us” being better informed than “them.” It doesn’t matter whether the fight is about climate change, or economics, or whether government is good or bad. The thinking goes like this: if those blind fools just knew what I know, there wouldn’t be all this damn fighting.

It’s a compelling idea because it assumes that our friends, relatives, neighbors, etc., aren’t wrong so much as they’re misguided, or ignorant, or just haven’t bothered to look at the facts, or, worse, have been misled by villains from the “other” party.

It assumes that our debates would be manageable if everyone would just be reasonable. This attitude is especially widespread in Washington, where partisans devote enormous amounts of time, energy and money to persuading each other that there really is a right answer to the admittedly difficult questions in our politics — and that “we” get it, but “they” don’t.

But this compelling idea isn’t just wrong. It’s bass ackwards. Brand-new, computer-aided research shows that the more information partisans get, the deeper their disagreements become.

I can practically hear the groans already. “Now he’s (me) going to tell me (you) that he (me) knows why everything I (you) believe is wrong!”

WRONG!

I’m going to tell you why the vast majority of us (us) can’t see the forest for the trees.

It’s for the same reason our eyes glaze over when we come across the ever-present eye-puzzles in online newspapers, on Facebook, or on Twitter. You’ve seen them. The ones that ask, “How many animals do you see in the picture below?” That sort of thing.

Yale Law professor Dan Kahan — working with coauthors Ellen Peters, Erica Cantrell Dawson, and Paul Slovic — set out to test a question that has long puzzled scientists: why isn’t good evidence more effective in resolving political debates? For instance, why doesn’t the mounting proof that climate change is a real threat persuade skeptics? Haven’t they seen the proof with their own eyes?

Kahan and his colleagues started from a familiar explanation: the average U.S. citizen simply doesn’t know enough about science to judge the debate. It’s a version of the compelling idea I talked about above: a smarter, better-educated public wouldn’t have any problem reading the science and accepting the scientific consensus on climate change. Call it the Science Comprehension Theory.

But after time, effort, and a whole lot of “AHA” moments, Kahan and his colleagues arrived at an altogether different theory, one Kahan calls Identity-Protective Cognition.

Perhaps people aren’t really held back by a lack of knowledge. Heck, the vast majority of us don’t typically doubt the findings of meteorologists who tell us when it’s going to get hotter or colder or rain, or of astrophysicists when they report on the existence of other galaxies.

Kahan et al. posited that there are some kinds of debates where people don’t want to find the right answer so much as they want to win the argument. (Sounds a whole lot like politicians and partisan bickering, yes?) Perhaps our reasoning serves purposes other than finding the truth — purposes like growing an audience, impressing our peers, or improving our standing.

If this theory proved to be true, then a smarter, better-educated citizenry wouldn’t put an end to these disagreements. It would just mean the debaters are better equipped to argue for their own side.

Kahan and his team came up with an innovative way to test which theory was right. They took 1,000 Americans, charted their political views, gave them a standard test of math skills, and then presented them with a brainteaser.

In its first form, it looked like this:

  1. Medical researchers have developed a new cream for treating skin rashes. New treatments often work but sometimes make rashes worse. Even when treatments don’t work, skin rashes sometimes get better and sometimes get worse on their own. As a result, it is necessary to test any new treatment in an experiment to see whether it makes the skin condition of those who use it better or worse than if they had not used it.
  2. Researchers have conducted an experiment on patients with skin rashes. In the experiment, one group of patients used the new cream for two weeks, and a second group did not use the new cream.
  3. In each group, the number of people whose skin condition got better and the number whose condition got worse are recorded in the table below. Because patients do not always complete studies, the total number of patients in each of the two groups is not exactly the same, but this does not prevent assessment of the results.

Please indicate whether the experiment shows that using the new cream is likely to make the skin condition better or worse.

  • Patients who used the new skin cream: 223 got better, 75 got worse
  • Patients who did not use the new skin cream: 107 got better, 21 got worse

What result does the study support?

  • People who used the skin cream were more likely to get better than those who didn’t.
  • People who used the skin cream were more likely to get worse than those who didn’t.

Don’t fret. The question is designed to exploit a common mental shortcut. A glance at the numbers leaves most people with the impression that the skin cream improved the rash. After all, more than twice as many people who used the skin cream saw their rash improve. But if you actually calculate the ratios, the truth is just the opposite: about 25 percent of the people who used the skin cream saw their rashes worsen, compared to only about 16 percent of the people who didn’t use the skin cream.
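If you want to check the arithmetic yourself, here is a minimal sketch of the ratio calculation. It’s in Python (the article itself contains no code), the function name is mine for illustration, and the figures are the ones from the table above, which reproduce the roughly 25 percent versus 16 percent worsening rates described in the text.

```python
# Sketch: compute the worsening rate in each group of the skin-cream
# brainteaser, using the figures from the table above.

def worsen_rate(got_better, got_worse):
    """Fraction of a group whose rash got worse."""
    return got_worse / (got_better + got_worse)

cream = worsen_rate(got_better=223, got_worse=75)      # patients who used the cream
no_cream = worsen_rate(got_better=107, got_worse=21)   # patients who did not

print(f"Used the cream:    {cream:.1%} got worse")     # ~25.2%
print(f"Skipped the cream: {no_cream:.1%} got worse")  # ~16.4%
print("The cream group fared", "worse" if cream > no_cream else "better")
```

The glance-at-the-numbers shortcut compares 223 against 107 and stops; the slow route compares the rates, which point the other way.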

This kind of question is used in social science experiments to test individuals’ ability to slow down and consider the evidence in front of them. It forces participants to quash their instinct to go with what looks right and instead do the difficult mental work of figuring out what is right.

In Kahan’s sample, most participants failed. The same was true whether the participants considered themselves liberals or conservatives. The exceptions were the people who had shown themselves to be unusually good at mathematics: they were the ones most likely to work out the right answer.

These results support the Science Comprehension Theory: the better subjects were at math, the more likely they were to stop, work through the evidence, and find the right answer.

But Kahan and his colleagues took the experiment a step further. They drafted a politicized version of the problem. This version used the same numbers as the skin-cream question, but instead of being about skin creams, the narrative set-up focused on a proposal to ban people from carrying concealed handguns in public. A chart like the one used for the skin cream now compared crime data in cities that banned concealed handguns against crime data in cities that didn’t. In some cases, the numbers, properly calculated, showed that the ban had worked to cut crime. In others, the numbers showed it had failed.

But when participants were presented with this politicized version, something totally unexpected happened: the ones who were good at mathematics no longer solved the problem as reliably as they had on the skin-cream test.

It was ideology that drove the accuracy of the answers. Liberals were extremely good at solving the problem when doing so showed that gun-control legislation reduced crime. But when presented with the version of the problem that suggested gun control had failed, their math skills shut down; no matter how good they were at math, they got the problem wrong. Conservatives exhibited the same pattern, only in reverse, of course.

Being better at math didn’t simply fail to help like-minded partisans zero in on the right answer. It actually drove them further apart!

Devotees with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Devotees with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology. The smarter the person is, the dumber political preferences can make them.

Consider how completely insane that is: being better at math made political partisans less likely to solve the problem correctly when solving the problem correctly meant betraying their political instincts. People weren’t reasoning to get the right answer; they were reasoning to get the answer that they wanted to be right.

The skin-rash experiment wasn’t the only time Kahan had shown that partisanship has a way of short-circuiting intelligence. In another study, he tested people’s scientific literacy alongside their ideology and then asked about the risks posed by climate change.

If people needed to know more about science to fully appreciate the dangers of a warming climate, then their concern should have paralleled their scientific acumen. But here, too, the opposite was true: among participants already skeptical of climate change, greater scientific savvy made them more skeptical still.

Why? According to Kahan’s study:

“Individuals subconsciously resist factual information that threatens their defining values.”

To anyone who’s ever read the work of a serious climate change denier, this conclusion should make perfect sense. Most such works are jam-packed with facts, figures, graphs, charts, etc. Naturally, the preponderance of the “data” is erroneous, but that’s kind of the point. It feels convincing. Perception beats facts every day of the week and twice on Sunday. More information, in this context, doesn’t help skeptics discover the best evidence. Instead, it sends them scurrying for evidence that seems to prove them right. And in the age of the internet, there are plenty of self-proclaimed experts to back them up.

Going the extra mile, Kahan and his coauthors even gave participants sample biographies of highly accomplished scientists alongside summaries of the results of their research. Then they asked the participants whether each scientist was indeed an expert on the issue. It turned out that people’s actual definition of “expert” is “a credentialed person who agrees with me.”

When the researcher’s results confirmed the dangers of global warming, people who tended to worry about climate change anyway were 72 percent more likely to agree that the researcher was a genuine expert. When the same researcher with the same credentials was attached to results that cast doubt on the dangers of global warming, people who tended to dismiss climate change were 54 percent more likely to see the researcher as an expert.

Kahan is quick to note that, most of the time, people are perfectly capable of being convinced by the best evidence. There’s a lot of disagreement about climate change and gun control, for instance, but almost none over whether antibiotics work, or whether the H1N1 flu is a problem, or whether heavy drinking impairs people’s ability to drive. Our reasoning turns into rationalizing only when we’re addressing questions where the answers could threaten our political leanings.

Maybe what we need are pinstriped referees to cue up the instant replay, or judges in distinguished black robes. Unfortunately, when it comes to politics, their propensity to see only what they want to see is apparently just as strong as it is in the rest of us.

Harvey A. Gold