Sunday, November 06, 2016

SCIENTIFIC METHOD —

Revisiting why incompetents think they’re awesome

Dunning-Kruger study today: The uninformed aren't as doomed as the Web suggests.

Another election day in the US is rapidly approaching (Tuesday, Nov. 8—mark your calendars!). So for no reason in particular, we're resurfacing our close examination of the Dunning-Kruger effect from May 25, 2012.
In 1999 a pair of researchers published a paper called "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" (PDF). David Dunning and Justin Kruger (both at Cornell University's Department of Psychology at the time) conducted a series of four studies showing that, in certain cases, people who are very bad at something think they are actually pretty good. They showed that to assess your own expertise at something, you need to have a certain amount of expertise already.
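The shape of that finding is easy to illustrate with a toy simulation. To be clear, this is my own sketch, not the paper's data or method; the 0.2 anchoring weight and the flattering 65th-percentile pull are arbitrary assumptions, chosen only to show how weakly grounded self-estimates produce the famous quartile pattern:

```python
import random
import statistics

# Toy model (not the paper's data): actual skill is a uniform percentile,
# while each self-estimate is only weakly anchored to actual skill and
# pulled toward a flattering 65th percentile. Both constants are
# assumptions made for illustration.
random.seed(0)

people = []
for _ in range(10_000):
    actual = random.uniform(0, 100)
    estimate = 0.2 * actual + 0.8 * 65 + random.gauss(0, 10)
    people.append((actual, min(max(estimate, 0.0), 100.0)))

# Split into quartiles by actual skill and compare group means.
people.sort(key=lambda p: p[0])
size = len(people) // 4
for q in range(4):
    chunk = people[q * size:(q + 1) * size]
    actual_mean = statistics.mean(a for a, _ in chunk)
    estimate_mean = statistics.mean(e for _, e in chunk)
    print(f"Quartile {q + 1}: actual ~{actual_mean:.0f}th percentile, "
          f"self-estimate ~{estimate_mean:.0f}th percentile")
```

Run it and the bottom quartile sits near the 12th percentile in fact while placing itself in the mid-50s, and the top quartile underrates itself: qualitatively the same picture of gross overestimation at the bottom and mild underestimation at the top that Dunning and Kruger reported.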
Remember the 2008 election campaign? The financial markets were going crazy, and banks that were "too big to fail" were bailed out by the government. Smug EU officials proclaimed that all was well within the EU—even while they were bailing out a number of financial institutions. Fast forward to 2012, and the EU is looking at hard times. Greece can't pay its debt. Italy can, but the markets don't trust it to be able to. Spain and Portugal are teetering around like toddlers just waiting for the market to give them one good push. Members of the public are behaving like teenagers, screaming "F**k you," while flipping the bird. The markets are reacting like drunk parents, and the resulting bruises are going to take a while to heal.
In all of this, uninformed idiots blame the Greeks for being lazy, the Germans for being too strict, and everyone but themselves. Newspapers, blogs, and television are filled with wise commentary hailing the return of the gold standard, the breakup of the Euro, or any number of sensible and not-so-sensible ideas. How are we to parse all this information? Do any of these people know what they are talking about? And if anyone does, how can we know which ones to listen to? The research of Dunning and Kruger may well tell us there is no way to figure out the answers to any of these questions. That is kind of scary.
It has been more than 10 years since Dunning and Kruger published their work. I suspect it has become required reading in psychology courses. It's also a paper that has important implications for learning and communication, so what has happened since? Have the results held up? Are they universal? And what can we do to avoid falling victim to our own inabilities?
"The paper gave voice to an observation that people make about their peers, but that they don’t know how to express," Dunning said.
This paper has become a cult classic. It is well-written—humor interspersed with robust data, and conclusions that are discussed in a thorough and accessible way. I wondered if Dunning knew that this paper would become such a classic, and when I spoke with him he responded quite to the contrary. "I frankly thought the paper would never be published," Dunning said. "It really doesn’t fit the usual structure of a modern-day research psychology finding. A wise editor who got it and good reviewers showed me wrong there. I am struck just with how long and how much this idea has gone viral in so many areas."
Clearly, the paper struck a chord with many people outside of the field of psychology. "I presume the paper gave voice to an observation that people make about their peers but that they don’t know how to express," Dunning responded. If you have not read the paper already, I recommend doing so.
Unfortunately, in those places ruled by the smug and complacent, a classic paper has become a weapon. The findings of Dunning and Kruger are being reduced to "Stupid people are so stupid that they don't know they are stupid." Rather bluntly, Dunning himself said, "The presence of the Dunning-Kruger effect, as it’s come to be called, is that one should pause to worry about one’s own certainty, not the certainty of others." That misuse humorously suggests the Dunning-Kruger effect is now a candidate to become a second Godwin's law.
Like Dunning, I do not take such a dim view of humanity. In fact, the Dunning-Kruger paper and its follow-ups give us cause for hope. They show that people are not usually irredeemably stupid. You can teach people to accurately self-evaluate—though, in their specific examples, this also involved teaching them the very skill they were trying to evaluate.

Context is everything

It is important to realize that the Dunning-Kruger paper was not such a shocking finding. It was, for instance, already known that seemingly everyone evaluates themselves as above average in everything. Are you a better driver than average? Certainly am. How do you rate your ability at math? Oh, a little better than average. How about mountain climbing? Well, I've climbed the local hill a couple of times. I bet Kilimanjaro can't be much more difficult.
A large pile of research on various groups of people, covering various skill sets, indicates that, in the face of all evidence, humans are irredeemably optimistic about their own abilities. That is, by itself, not such a bad thing. The ugly side shows up when we realize how that self-image is maintained: studies show that we do it by assuming that everyone else is much worse. Being clueless about your own abilities is one thing; misjudging others' abilities is rather more serious.

It's not about stupidity, stupid

It's worth spending a moment to illustrate how subtle Dunning and Kruger's results really are. And what better example to use than me? I am an immigrant. I grew up speaking English—though, perhaps not a brand of English you would find recognizable. As an adult, I moved from New Zealand to Holland and have spent the last five years struggling to learn a new language. I am, by the very definition of Dunning and Kruger's paper, incompetent.
An unsophisticated reading of Dunning and Kruger's results would suggest that I would rate myself highly. In fact, I would show all the signs of incompetence: I would perform poorly on basic Dutch grammar and vocabulary tests, and I would fail to correctly evaluate others in their usage. But if you think that I estimate my Dutch language skills in anything but the bottom percentile, you would be sadly mistaken. Clearly I know I am incompetent and am aware of it.
Dunning and Kruger's results don't apply to my situation, though, because every day I am made aware of just how bad my Dutch is. I have to repeat myself, I have to ask others to repeat themselves. I take inordinate amounts of time digesting the simplest letters from the Dutch government. Everywhere around me, I find signs that my Dutch is terrible.
It's easy to rate your language skills as poor when given constant reminders.
A more correct comparison would be to group me with a bunch of other expats who are also learning Dutch as a second language. Some of us will be better than others, and the very worst of us would have difficulty evaluating where we stand in that group. Now, I think that my Dutch is OK for a foreigner. But I really doubt that I could accurately evaluate my position within that expat group. In that light, it is almost certain that I think my Dutch is better than reality would suggest.
Since I am unlikely to be able to pick grammatical errors out of a Dutch sentence, it is impossible for me to evaluate my own performance, or anyone else's. I simply do not have the skills to do so. And, because I can't read or hear the errors of others, I cannot accurately place myself in the hierarchy of competence. Couple that with the fact that I think I am not stupid, and I am likely to severely overestimate my abilities.
At this level, the results seem to imply that if you can't do, you can't recognize the difference between doing well and doing poorly. The example in the Dunning-Kruger paper is that of a basketball coach. Consider the average basketball coach—a pear-shaped middle-aged gentleman, topping out at about 5-foot-5 and blessed with the countenance of a happy but slightly old and dried-out apple. Clearly, the coach isn't going to outplay any of his players. He can't do. Yet, the lack of physical ability doesn't say anything about his ability to tell if his players are playing well or playing poorly. It says nothing about his ability to teach his players new basketball skills. It also doesn't prevent him from evaluating his own performance as a coach.
There is a subtlety here, though. Coaching skills are not the same as playing skills, so the Dunning-Kruger paper applies to coaches' evaluations of their own ability to coach. Now, coaches get a huge amount of feedback. Wins and losses are the most obvious and most important, but results alone would not capture the effectiveness of a coach working with a squad of inexperienced players. There, the relevant indicators might include player motivation, skills development, and the team environment. Clearly, these are not the same criteria by which you would evaluate every coach. And we all know of examples of coaches who fail dismally with one squad of players yet succeed with another.
The point is that it is not just self-evaluation that is difficult. Evaluating a skill set is, quite simply, very difficult to get right.
What this study outlines most starkly is what happens when someone is not just bad at something but also lacks the tools to assess their own performance. These are two different skills: action and self-assessment. Sometimes the two skill sets overlap so well that you have to be good at something to accurately know that you are good at it. In other cases, the two skill sets don't overlap, in which case Dunning and Kruger's findings may not apply.

Well, not so fast

A few studies since then have examined the relationship between cognition and metacognition (e.g., self-evaluation). One of these, by Ames and Kammrath, looked at the relationship between people's estimates of their ability to read people and their actual ability to do so. In this case, once again, those who consistently failed to read people thought they were pretty good at it, while those who could accurately read people underestimated their performance.
In searching for the source of the subjects' poor estimation, the researchers zeroed in on narcissism as one of the primary correlates. The higher people scored on a narcissism test, the more highly they were likely to rate their ability to read people. The significance of this is not entirely clear to me: although narcissism correlated with self-evaluation, it did not correlate with actual performance.
In other words, narcissists think they are brilliant. Who knew? In addition, the researchers found that extroverts were also more likely to overestimate their abilities, while self-esteem and gender were not correlated. And, importantly, none of these seemed to be correlated to actual performance.
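To make that pattern concrete, here is a minimal sketch of what "correlates with self-evaluation but not with actual performance" means in terms of Pearson correlations. The data are entirely synthetic and the 0.6 coefficient is an arbitrary assumption; this is not the study's data or analysis, just an illustration of the statistical claim:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

# Synthetic data, not Ames and Kammrath's dataset: narcissism feeds the
# self-estimate but is generated independently of actual performance.
random.seed(1)

n = 5_000
narcissism = [random.gauss(0, 1) for _ in range(n)]
performance = [random.gauss(0, 1) for _ in range(n)]
self_estimate = [0.6 * x + random.gauss(0, 1) for x in narcissism]

print(statistics.correlation(narcissism, self_estimate))  # clearly positive (~0.5)
print(statistics.correlation(narcissism, performance))    # near zero
```

The first correlation is strong and the second hovers around zero, which is exactly the dissociation the researchers describe: narcissism predicts the self-rating, not the skill.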
But this also throws my own conclusion (one that Dunning and Kruger hinted at as well) into confusion: if the skills required to self-evaluate and the skill under evaluation do not overlap, then performance in one should not predict performance in the other. Yet I don't believe that the skill of evaluating your ability to read people overlaps significantly with the ability to read people itself. So what does that imply?
It seems to suggest that you really do need to have some skill in the area to evaluate what constitutes good performance. Perhaps the example of the basketball coach, where the skills are not cognitive, was a bad example. These are examples of questions that are, to my limited knowledge, not yet answered.

Culture complicates things

In the US, extroverts are loved. In Europe, I would say, extroverts are not as highly valued, but you still need a certain degree of self-confidence in your abilities to get by. Even this is not universal: although we in the West admire students who exhibit self-confidence, in parts of Asia humility and hard work are more highly regarded. There, it is expected that a student will start out incompetent, but it is also expected that the student will work hard to become competent.
This is not a comment on which should be regarded as better, but a comment on the cultural environment in which we operate. A student who fails in the West—and this is a gross generalization, since the "West" is not a homogeneous culture—is more likely to move on to a different topic. We encourage our students to find the thing that they are naturally good at. In other cultures, failure to succeed is an invitation to try harder. Even though it might be acknowledged that you will never be good at something, it is important to be seen to be trying to master the skill.
The result is that self-evaluations vary widely by culture and exhibit systematic biases. In other words, no one seems to get it right. We all fail in different ways.

Education and work

One of the scary things about these findings is that we often rely on self-evaluation in education and at work. What this tells us is that, for both the best performers and the worst performers—the very people you most want to identify—you are not going to get an accurate impression.
Hard skills, such as logic and reasoning, can be evaluated by objective tests, so it is easy enough to separate the brilliant from the abject failures. But what about courses like English literature, or skills like management? Not only is it hard to define what makes a good manager, it is also hard to define a scale on which to evaluate those qualities.
This is where Ames and Kammrath come back into play. Things like management involve reading people, but their study shows that you have to be good at reading people in order to evaluate whether you are good at reading people. What's more, evaluating someone's general performance is itself largely a matter of reading people and figuring out what they are capable of.
The results of the research performed by Dunning, Kruger, Ames, and Kammrath tell us something that every one of us has expressed at some time or another: the incompetent are readily able to escape detection by those who count. At its most cynical—though it is also a logically inescapable conclusion—this is best expressed by the Peter Principle: people are inevitably promoted to a position just beyond their level of competence. If we accept the Peter Principle, then we must also accept its consequence: the people who evaluate the performance of their underlings are likely to be incapable of making that evaluation.

What about science communication?

Science communication is a hobby of mine, and the Dunning-Kruger paper has implications for this field. Indeed, the implications are far more insidious than the realization that, yes, we all have weak points. Take, for instance, a common line of reasoning in science communication when faced with a topic we don't understand: "I recognize that I am not an expert, so I rely on the accumulated wisdom of experts." Furthermore, a common suggestion in critical thinking is that, when presented with scientific claims, you should examine the evidence from a range of experts to test the claim. But just think about that for a moment.
"Our own recent work shows that, yes, you need experts to spot experts," Dunning noted. "That said, spotting an expert outside of one’s field is a task one can become better at."
First, you have to pick an expert. OK, how do I, as a non-scientist, tell the difference between Michael Behe and Richard Dawkins? How do I tell the difference between a scientific agency, such as NOAA, and something like the Heartland Institute? In short, to pick a good expert on a given topic, I need some expertise in the topic. The Internet can help with this, since a large number of biologists would tell you that Richard Dawkins is a reliable source of information on evolution, and very few would point you in the direction of Michael Behe. But, in aggregate, is the Internet always right? Umm, yeah, I think I'll take a pass on that.
This is a topic that is of interest to Dunning. "Our own recent work shows that, yes, you need experts to spot experts," he said. "Everyone can spot the poor performer, but often spotting the best performers is beyond the competence of the group. That said, spotting an expert outside of one’s field is a task one can become better at. And that’s important, given just how much information, good and bad, is now available to people. For example, is the expert associated with a university (a good sign) or some 'think tank' (a bad sign)?" Again, though, this takes experience and expertise. Groups like think tanks try to give themselves the trappings of expertise in a move specifically designed to fool us into trusting their statements.
Furthermore, there seems to be an inherent misunderstanding on the part of scientists and science communicators, according to Dunning. "For example, scientists often think that telling the world a conclusion has scientific consensus settles the issue. To scientists, this makes sense. To the general public, they 'hear' that scientists must be colluding on an issue." In other words, the message poisons itself. This comes back, at least in part, to education. "They [scientists and science communicators] assume some basic knowledge (and faith) in science in the general population that, in truth, is missing," Dunning said.
In spite of this, I remain optimistic. Why? Because the Dunning-Kruger paper shows that, with training, self-evaluation accuracy improves. If you teach people logical reasoning, they become better able to evaluate their own performance in logical reasoning. The critical message is that the right feedback at the right time has an impact. This is also still relatively new knowledge, and I wonder how simple awareness of the Dunning-Kruger effect could affect people's self-assessments. If you are aware that everyone (including you and me) is likely to overestimate their abilities, does that have an influence? Dunning believes there are two key issues: first, critical thinking skills, applied to your own knowledge as well as everything else, are vital; second, if you don't exercise those critical thinking skills, they will fade, leaving you with a false impression of your own abilities.
It is also important to confront people with their own failings. "There is also some thought that perhaps we should give people experience with their overconfidence," Dunning noted. "That is, get them to make an overconfident display, and then expose it for what it is, so that people are more on guard for such an issue. For example, in some areas, people learning to drive are exposed to horrible driving conditions, but not taught how to handle them. Instead, they are given enough frightening experience that they would never think to drive in icy or snowy conditions. I would not consider this a negative approach to education. As Anatole France said, a proper education isn’t what you know, it’s being able to separate what you know from what you don’t."
An excellent example? Take this article. I am not a psychologist, nor have I taken any training in that area. I find results like these fascinating. I wonder, even as I write this, how much I have gotten wrong, misunderstood, or simply left out. Nevertheless, as imperfect as this may be, it's still worth putting out there for discussion. I think.
Journal of Personality and Social Psychology, 2003. DOI: 10.1037/0022-3514.84.1.5
Journal of Nonverbal Behavior, 2004.

Source: arstechnica.com


