
January 23, 2006

Organizational Knowledge and “Higher Modesty”

I recently had a conversation with Bob Sutton of Stanford about Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management, the new book he and Jeffrey Pfeffer have written. Among the reasons Bob gave for why leaders and managers make decisions contrary to available evidence is what he calls “confirmation bias.” That’s the tendency to notice and believe information that supports your existing beliefs and ignore or discredit information that contradicts them. James March makes a similar point when he notes how frequently managers make decisions first and ask questions later, doing analysis to “prove” that the decision makes sense.

It’s a common, powerful human behavior: most of us pay more attention to news, events, and opinions that seem to confirm our ideas than to evidence that those ideas may be wrong; most of us look for reasons to justify what we’ve already decided we want to do. And we live in a culture that values certainty more than doubt, especially in our leaders.

But the dangers are obvious. We all know stories of leaders who confidently drive their organizations over a cliff, ignoring warnings and evidence of the dangers ahead. “SSW”—Swift, Sure, and Wrong—is an acronym used in some medical schools to describe confident, disastrous decisions in medicine. Less dramatically, organizations miss opportunities and are surprised by challenges because confirmation bias keeps them from seeing important information. In a knowledge economy, being blind to essential knowledge is clearly a problem.

It is difficult but possible to do something about it. Bob Sutton cites IDEO, an industrial design firm that, he says, acts on its knowledge while doubting what it knows. I’m aware of one CEO who asked a knowledgeable outsider to send him an email whenever he saw him doing something stupid. We can make conscious efforts to be skeptical about confirming evidence and open to evidence that tells us we may be wrong. We can listen to the people on the fringes of our groups and organizations, rather than dismiss them because they are “different.” We can try to have a bit of what the early 20th century writer Edmund Gosse called “higher modesty”—the willingness to question one’s own deepest beliefs that Gosse considered an essential characteristic of great scientists.

Posted by Don Cohen on January 23, 2006 05:46 PM | Permalink

Trackback Pings

Listed below are links to weblogs that reference Organizational Knowledge and “Higher Modesty”:

» Organizational Knowledge from elearnspace
A simple, axiomatic, but profound thought that shapes our organizations and institutions - Organizational knowledge: "Its a common, powerful human behavior: most of us pay more attention to news, events, and opinions that seem to confirm our ideas tha... [Read More]

Tracked on January 24, 2006 03:41 PM


Somehow I missed Bob Sutton's response in March; my RSS feed did not show any changes here. I apologize for the delay.
I like a challenge, but I reject the terms in this case. The suggestion is that what works in peer-reviewed journals is the same as what works in business. If that were always true, academics would be a great deal more successful than they are in the business world. As one who has lived on both sides of that fence, I sympathize with seeking ways of inquiring into the effectiveness of theories and practices, but refuse to be circumscribed by the universe of peer-reviewed journals.

Instead I'll cite work by Tom Gilbert, Harold Stolovich, and Dick Grote, which has inspired our own work in this area. This may not pass muster for Bob, and I appreciate that if he is waiting for the peer-reviewed study he may wait a long time. I oversee a performance management system that is forced differentiation (with an exception I'll note below) and norm-referenced by job family. We're not likely to devote the time to such a study and I daresay it's for similar reasons that Exxon or GE would not. But Sutton and Pfeffer in the cited article relied in some cases on anecdote to make their cases.

My argument that Pfeffer is biased was based upon the fact that long before the book was written -- and I assume Bob did his research -- Pfeffer was against such systems. I heard him speak in Miami in early 2003, and he seemed already to have reached that conclusion. I otherwise admire Pfeffer: I frequently cite him, buy copies of his books to give away to our leadership development participants, and will always follow his work. But differentiation seems to be driving desired results at a number of companies -- when done right. (We subject our results to a variety of analyses, including EEOC.) It isn't premised upon a statistical finding that there is a regular distribution of performance but upon the need to reward extraordinary performance. Races in the Olympics may involve all good performers and still offer the reward of a gold medal to the winner. There is a reward for having the most important results achieved in the most efficient and effective manner.

Competition in this instance does need to take into account HOW the person achieved results: those who scorched others on their way suffer; those who made others a success rise in the estimation of others. In our case we add a gainsharing component that gives the same % to ALL employees in order to balance comp. We also offer a very strong overall value proposition through various benefits programs. We also do NOT include obviously poor performers in the mix. Our system is criterion-based when it comes to that fortunately very small segment. Managers must move through a decision tree and demonstrate that neither a faulty system nor their own inability to deliver feedback is the reason for relegating a person to this category.

Therefore, there is no requirement that we place a certain percentage in that low category, and we have never espoused the principle of 'up or out'.

The amount at stake in the differentiation through merit increases is relatively small as a slice of Total Cash Compensation, but it generates a great deal of interest and attention to writing objectives clearly, making sure that they are measurable, and monitoring the results to see whether they happened and had the desired impact. (Consistent with the theories of Vroom and others regarding motivation and compensation.)

I appreciate the comments back and, like the GE managers, would be glad to have a good argument with Bob at any time -- preferably over a nice cup of tea or a good glass of wine. Maybe I've got it wrong, and I am always willing to be enlightened.

Posted by: T.J. Elliott | May 16, 2006 10:09 PM

The primary problem with forced ranking is that it cannot pass statistical or mathematical muster. A normal distribution assumes randomness in the population. I presume that GE's high-flying, highly developed graduates of Crotonville are more skilled interviewers than someone who simply flips a coin?? Oh yes!

Robert Cenek
www.cenekreport.com - trends and research in the world of work

Posted by: Robert Cenek | April 8, 2006 10:44 PM

Someone asked what the difference is between confirmation bias and cognitive dissonance. There is actually a pretty big difference. Dissonance is, as is implied, an older term that means that there is a gap between two thoughts, or what one believes and what one is doing, such as "I am an honest person" and "I cheat on my taxes." People are motivated to make such differences go away by changing their behavior or thoughts.

Confirmation bias happens when you believe something (say, the Bush administration believing that Iraq had weapons of mass destruction) and you seek out, pay attention to, believe, and recall only facts that support your position. In fact, the Economist did a lovely job of explaining how the Bush administration fell victim to this bias, asserting that when people brought evidence to them indicating there were weapons "they had to jump over a matchbox" but when they brought information there were no weapons "they had to climb a mountain."

To complete the circle, they now have a cognitive dissonance problem, having to reconcile two opposing thoughts: "we believed and said there were weapons" and "there is no evidence of weapons."

OK? One is a gap between opposing thoughts; the other is the human tendency to believe a thought regardless of the actual facts.

Posted by: Bob Sutton | March 12, 2006 02:50 AM

I am writing in response to the comment that Pfeffer is biased against forced ranking. I am the co-author of Hard Facts, and went through and did extensive searches of all articles published in peer-reviewed journals. I can find case studies -- GE for example -- that seem to support the virtues of forced ranking and its key assumption, that creating a big range of status and pay between top, middle, and weak performers is best. BUT I can't find a single article in a peer-reviewed journal that supports forced ranking, and every single existing peer-reviewed article shows that -- as long as some cooperation is involved in the work -- having more compressed pay is associated with better performance: in top management teams, manufacturing organizations, diverse samples of middle managers, academic departments, and even professional baseball teams. Pfeffer is biased -- by the weight of the evidence. My challenge to TJ Elliott is this: Find me one rigorous study -- one that has passed the peer review process -- to support forced ranking, and especially having big pay differences between the top and the bottom, in a setting where some interdependence is required. Rank and Yank and big spreads in pay work with orange pickers and with truck drivers, but not most places. In the meantime, the Hard Facts support Pfeffer's (and Deming's) bias.

I also had an interesting argument with senior GE executives about this, and there is one twist they use that most companies that use forced ranking miss completely -- they define a star as someone who helps everyone else succeed, not as someone who gets ahead at others' expense. I think that sounds right... but this is just a story, my bias, I suppose; the weight of the evidence still says they ought to reduce the emphasis on forced ranking. Again, find me one rigorous study to support YOUR bias. I tried for months.

Posted by: Bob Sutton | February 26, 2006 04:48 PM

I agree with the points made but believe that Pfeffer and Sutton fall prey to their own charge in the HBR excerpt.

The section on forced ranking is an example. Pfeffer (who is a brilliant speaker, writer, and thinker) has carried a bias against the system for years. I heard him speak against it in Miami and have read other examples. One person's bias is simply a theme of another's research findings, of course.

The way in which the authors frame the argument reveals how willing they are to disconfirm their own theories and hypotheses. Is there any doubt expressed? Does one offer the strongest position of an opposing side?
That doesn't happen here. Forced ranking is 'no good', but the authors fail to cite data at the big companies mentioned that indicate whether it has a negative or positive effect. Those companies are doing very well, though. Then they cite a comment by an anonymous manager, then a survey by a group that has products to use in place of forced ranking...

I think their general argument about evidence-based management is correct, and I see a clear link to KM. However, it is cautionary to see their failure to catch their own bias. And Argyris and Schön told us all of this 20 years ago anyway. We are all unlikely to seek to disconfirm our hypotheses at home or at work! ;-)

Posted by: T.J. Elliott | January 31, 2006 02:41 PM

What's the difference between the new term, “confirmation bias” and the long-used term, "cognitive dissonance"? Sounds like bias in making up a term when a perfectly usable term exists :)

Posted by: Christina Pikas | January 24, 2006 02:29 PM

Don, you're so right. What you write is exactly what I am thinking, hence I believe what you say.
- end of pun -
By the way, IDEO has a great product: IDEO method cards. About fifty cards describing various methods for creativity. http://www.ideo.com/methodcards/MethodDeck/

Posted by: Christian Hauck | January 24, 2006 05:54 AM

Bruce LaDuke wrote a book, "The Knowledge Machine," and devoted a web site (www.anti-knowledge.com) to this subject. He stressed the importance of asking questions, especially questions outside our current knowledge framework, as a means to knowledge creation.


Posted by: KK Aw | January 23, 2006 09:24 PM
