[Photo caption: Heavy fog did not keep 2,100 head of sheep from walking past a stop sign in Idaho on their way to a new pasture.]
Academics and pundits often don’t know what they are talking about.
Ever wonder how financial experts could lead the world over the economic cliff?
One explanation is that so-called experts turn out to be, in many situations, a stunningly poor source of expertise. There’s evidence that what matters in making a sound forecast or decision isn’t so much knowledge or experience as good judgment - or, to be more precise, the way a person’s mind works.
More on that in a moment. First, let’s acknowledge that even very smart people allow themselves to be fooled by an apparent “expert” on occasion.
The best example of the awe that an “expert” inspires is the “Dr. Fox effect.” It’s named for a pioneering series of psychology experiments in which an actor was paid to give a meaningless presentation to professional educators.
The actor was introduced as “Dr. Myron L. Fox” (no such real person existed) and was described as an eminent authority on the application of mathematics to human behavior. He then delivered a lecture on “mathematical game theory as applied to physician education” - except that by design it had no point and was completely devoid of substance. However, it was warmly delivered and full of jokes and interesting neologisms.
Afterward, those in attendance were given questionnaires and asked to rate “Dr. Fox.” They were mostly impressed. “Excellent presentation, enjoyed listening,” wrote one. Another protested: “Too intellectual a presentation.”
A different study illustrated the unfounded confidence in “experts” another way. It found that a president who goes on television to make a case moves public opinion only negligibly, by less than a percentage point. But experts who are trotted out on television can move public opinion by more than 3 percentage points, because they seem to be reliable or impartial authorities.
But do experts actually get it right themselves?
The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, “Expert Political Judgment,” is based on two decades of tracking some 82,000 predictions by 284 experts. The experts’ forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about.
The result? The predictions of experts were, on average, only a tiny bit better than random guesses - the equivalent of a chimpanzee throwing darts at a board.
“It made virtually no difference whether participants had doctorates, whether they were economists, political scientists, journalists or historians, whether they had policy experience or access to classified information, or whether they had logged many or few years of experience,” Mr. Tetlock wrote.
Indeed, the only consistent predictor was fame - and it was an inverse relationship. The more famous experts did worse than unknown ones. That had to do with a fault in the media marketplace.
Talent bookers for television shows and reporters tended to call up experts who provided strong, coherent points of view, who saw things in black and white - people who shouted, as television pundits are prone to do.
Mr. Tetlock called experts such as these the “hedgehogs,” after a famous distinction by the late Sir Isaiah Berlin (my favorite philosopher) between hedgehogs and foxes. Hedgehogs tend to have a focused worldview, an ideological leaning, strong convictions; foxes are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance. And it turns out that while foxes don’t give great sound bites, they are far more likely to get things right.
This was the distinction that mattered most among the forecasters, not whether they had expertise. Over all, the foxes did significantly better, both in areas they knew well and in areas they didn’t.
Other studies have confirmed the general sense that expertise is overrated. In one experiment, clinical psychologists did no better than their secretaries at making diagnoses. In another, a white rat repeatedly beat groups of Yale undergraduates at figuring out the optimal way to get food dropped into a maze. The students overanalyzed and saw patterns that didn’t exist, so they were beaten by the rodent.
The marketplace of ideas for now doesn’t clear out bad pundits and bad ideas partly because there’s no accountability. We trumpet our successes and ignore failures - or else attempt to explain that the failure doesn’t count because the situation changed or that we were basically right but the timing was off.
For example, I boast about having warned in 2002 and 2003 that Iraq would be a violent mess after we invaded. But I tend to make excuses for my own incorrect forecast in early 2007 that the troop “surge” would fail.
So what about a system to evaluate us prognosticators? Professor Tetlock suggests that various foundations might try to create a “trans-ideological Consumer Reports for punditry,” monitoring and evaluating the records of various experts and pundits as a public service. I agree: Hold us accountable!