I recently read Thinking, Fast and Slow by Daniel Kahneman. I haven't talked to people this much about a book I'm reading in quite some time, which is fitting: Kahneman applies his own psychological findings to how the book is written, and to the audience it's written for:
Nisbett and Borgida summarize the results in a memorable sentence:
Subjects' unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.
So while he does include the general, he also works well with the specific. One conclusion of research is that people learn better from the specific, and:
This is a profoundly important conclusion. People who are taught surprising statistical facts about human behavior may be impressed to the point of telling their friends about what they have heard, but this does not mean that their understanding of the world has really changed. The test of learning psychology is whether your understanding of situations you encounter has changed, not whether you have learned a new fact. There is a deep gap between our thinking about statistics and our thinking about individual cases. Statistical results with a causal interpretation have a stronger effect on our thinking than noncausal information. But even compelling causal statistics will not change long-held beliefs or beliefs rooted in personal experience. On the other hand, surprising individual cases have a powerful impact and are a more effective tool for teaching psychology because the incongruity must be resolved and embedded in a causal story. That is why this book contains questions that are addressed personally to the reader. You are more likely to learn something by finding surprises in your own behavior than by hearing surprising facts about people in general.
And on his audience, Kahneman says:
Observers are less cognitively busy and more open to information than actors. That was my reason for writing a book that is oriented to critics and gossipers rather than to decision makers.
And while I'm nearly always a critic, I even became a bit of a gossip on the subject of this book, because of the fascinating collection of important findings that are all made immediately personal and applicable.
The basic thesis of the book is this: people have fundamentally two kinds of thinking going on in their heads. System one is fast, intuitive, and easy. It often makes the right decision for you, but it is vulnerable to a collection of systematic deficiencies. System two is slow, deliberate, and difficult. It can make good decisions if you give it time and effort, but it is limited.
One systematic problem with system one is that when you hear that seven people were killed by sharks last year, you end up more scared than you should be (or, in another framing, not scared at all). Two reasons:
The focusing illusion:
Nothing in life is as important as you think it is when you are thinking about it.
And one of many biases of system one resulting in a failure to get statistical thinking right:
The bias has been given several names; following Paul Slovic I will call it denominator neglect. If your attention is drawn to the winning marbles, you do not assess the number of nonwinning marbles with the same care. Vivid imagery contributes to denominator neglect, at least as I experience it.
Here winning marbles are people getting munched by sharks, and nonwinning marbles are people not so munched. Shark munching is more vivid than marbles.
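To put the denominator back, here's a back-of-envelope calculation of my own (the book doesn't do this math, and I'm assuming, say, a US-sized population):

```python
# Back-of-envelope: annual risk of death by shark attack.
# The population figure is approximate; the 7 deaths is the figure from above.
shark_deaths = 7
population = 330_000_000

risk = shark_deaths / population
print("about 1 in %.0f million per year" % (1 / risk / 1e6))  # ~1 in 47 million
```

The seven munched are vivid; the 330 million not munched are invisible.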
Kahneman talks a lot about problems people, even statisticians, have with statistics. Like this question:
For a period of 1 year a large hospital and a small hospital each recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?
- The larger hospital
- The smaller hospital
- About the same (that is, within 5% of each other)
The answer is that the smaller hospital will vary more and so record more such days (easy to check by simulation; see the sketch below). But people don't get this question right. That's often Kahneman's conclusion: people don't get this stuff right. Here he takes this kind of thinking and goes on to use it to support this claim:
The truth is that small schools are not better on average; they are simply more variable.
Well, this is a big area of debate, especially in NYC, but people usually do ignore the fact that variability matters more in a smaller group of students. I didn't see enough evidence in the text to conclude that Kahneman settled this issue, but it did give me another thing to think about when I see claims like "a larger proportion of charter schools are in the bottom 10% of all schools". If charter schools are usually smaller than other schools, we should expect effects like that from chance alone.
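And here's that simulation for the hospital question (a quick sketch; the daily birth counts are numbers I made up):

```python
import random

def days_over_60_percent_boys(births_per_day, n_days=365):
    """Count days on which more than 60% of births were boys."""
    days = 0
    for _ in range(n_days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            days += 1
    return days

random.seed(0)
print("small hospital (15 births/day):", days_over_60_percent_boys(15))
print("large hospital (45 births/day):", days_over_60_percent_boys(45))
# The small hospital records such days roughly twice as often: smaller
# samples swing further from the 50/50 mean.
```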
In many instances I immediately thought to myself, "if people just knew the math, they could work this out, and they wouldn't make these mistakes!" Part of Kahneman's point is that mistakes happen even when people do know the math, if they don't actually do it, instead relying on their "gut" (system one). But there was also one place where I wasn't sure I did know the relevant math:
Imagine an urn filled with balls, of which 2/3 are of one color and 1/3 of another. One individual has drawn 5 balls from the urn and found that 4 were red and 1 was white. Another individual has drawn 20 balls and found that 12 were red and 8 were white. Which of the two individuals should feel more confident that the urn contains 2/3 red balls and 1/3 white balls, rather than the opposite? What odds should each individual give?
In this problem, the correct posterior odds are 8 to 1 for the 4:1 sample and 16 to 1 for the 12:8 sample, assuming equal prior probabilities. However, most people feel that the first sample provides much stronger evidence for the hypothesis that the urn is predominantly red, because the proportion of red balls is larger in the first than in the second sample. Here again, intuitive judgements are dominated by the sample proportion and are essentially unaffected by the size of the sample, which plays a crucial role in the determination of the actual posterior odds. In addition, intuitive estimates of posterior odds are far less extreme than the correct values. The underestimation of the impact of evidence has been observed repeatedly in problems of this type. It has been labeled "conservatism."
I guessed this was some Bayesian thing, and that maybe after a few minutes with Google I could work it out, but off the top of my head I didn't know how to solve for those results. Some people without math backgrounds probably have this experience with more of the examples in the text, and in life. (The other fun thing about the passage is how the last sentence could be read as a dry joke at conservatives' expense.) But I did sit down to work it out, and the calculation turns out to be short.
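With equal priors, the posterior odds equal the likelihood ratio of the two hypotheses (2/3 red vs. 2/3 white), and the binomial coefficients cancel, leaving 2 raised to the difference between red and white draws. A few lines of Python confirm the book's numbers:

```python
from fractions import Fraction

def posterior_odds(red, white):
    """Odds that the urn is 2/3 red rather than 2/3 white, given the sample
    and equal priors. The likelihood ratio is
        (2/3)^red * (1/3)^white / ((1/3)^red * (2/3)^white) = 2^(red - white),
    since the binomial coefficients cancel.
    """
    return Fraction(2) ** (red - white)

print(posterior_odds(4, 1))    # 8  -> 8 to 1 odds
print(posterior_odds(12, 8))   # 16 -> 16 to 1 odds
```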
There is another interesting argument based on regression to the mean, relevant to teachers and anyone who hands out punishment and reward. People are statistically likely to do better after a very bad performance, and to do worse after a very good one, whether or not you punish or reward them at all:
I had stumbled onto a significant fact of the human condition: the feedback to which life exposes us is perverse. Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty.
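You can watch this happen with no feedback at all in a toy model of my own, where each performance is fixed skill plus independent luck:

```python
import random

random.seed(1)

def performance(skill=0.0):
    """One outing: constant skill plus luck (standard normal noise)."""
    return skill + random.gauss(0, 1)

# Pairs of consecutive performances, with no praise or punishment in between.
pairs = [(performance(), performance()) for _ in range(100_000)]

after_bad = [b for a, b in pairs if a < -1.5]   # follow-ups to very bad outings
after_good = [b for a, b in pairs if a > 1.5]   # follow-ups to very good outings

print("average after a very bad outing:  %+.2f" % (sum(after_bad) / len(after_bad)))
print("average after a very good outing: %+.2f" % (sum(after_good) / len(after_good)))
# Both land near the mean (0): "improvement" after bad outings and "decline"
# after good ones appear by chance alone.
```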
Another really interesting topic is that of experts. People tend to be too confident. Experts tend to be WAY too confident, even when results are essentially random. Kahneman offers convincing evidence that results in the financial markets, at least in stock picking, are essentially random. And yet everybody in the business thinks they're so damn GOOD at it.
...the illusions of validity and skill are supported by a powerful professional culture. We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers.
And the expert delusion shows up in social science fields too.
Each of these domains entails a significant degree of uncertainty and unpredictability. We describe them as "low-validity environments." In every case, the accuracy of experts was matched or exceeded by a simple algorithm.
That's right: a simple algorithm is better than an expert, mostly because experts tend to make overconfident, overly extreme predictions that are easily way off if you wait and check. And it doesn't even have to be a particularly GOOD algorithm. Kahneman mentions Robyn Dawes's 1979 article "The Robust Beauty of Improper Linear Models in Decision Making", which you can find online:
ABSTRACT: Proper linear models are those in which predictor variables are given weights in such a way that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in Paul Meehl's book on clinical versus statistical prediction and a plethora of research stimulated in part by that book all indicates that when a numerical criterion variable (e.g., graduate grade point average) is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method; for example, they may be obtained on the basis of intuition, derived from simulating a clinical judge's predictions, or set to be equal. This article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit (i.e., equal) weighting is quite robust for making such predictions. The article discusses, in some detail, the application of unit weights to decide what bullet the Denver Police Department should use. Finally, the article considers commonly raised technical, psychological, and ethical resistances to using linear models to make important social decisions and presents arguments that could weaken these resistances.
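Here is a minimal sketch of the unit-weights idea on synthetic data of my own (not Dawes's data or method verbatim): fit a "proper" least-squares model on a small sample, then compare it against simply summing the standardized predictors.

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([0.4, 0.3, 0.3])  # made-up, roughly equal true weights

# Small "clinical" fitting sample: 3 standardized predictors plus noise.
X = rng.normal(size=(20, 3))
y = X @ true_w + rng.normal(size=20)

# Large test set to measure real predictive accuracy.
X_test = rng.normal(size=(100_000, 3))
y_test = X_test @ true_w + rng.normal(size=100_000)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # "proper" fitted weights
unit = np.ones(3)                             # "improper" unit weights

def test_corr(weights):
    return np.corrcoef(X_test @ weights, y_test)[0, 1]

print("fitted weights: r = %.3f" % test_corr(beta))
print("unit weights:   r = %.3f" % test_corr(unit))
# With a fitting sample this small, the unit-weight composite typically
# predicts as well as or better than the fitted weights.
```

The point isn't that least squares is bad; it's that with small, noisy samples the fitted weights mostly fit noise, while any sensible positive weighting captures most of the signal.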
A further problem related to experts is that if you do happen to be an intelligent expert, aware of your own fallibility, people won't trust you:
Experts who acknowledge the full extent of their ignorance may expect to be replaced by more confident competitors, who are better able to gain the trust of clients. An unbiased appreciation of uncertainty is a cornerstone of rationality - but it is not what people and organizations want. Extreme uncertainty is paralyzing under dangerous circumstances, and the admission that one is merely guessing is especially unacceptable when the stakes are high.
This is very interesting to me, because an expert who knows she is fallible and also knows people won't trust her if she says so can take the justifiable approach of feigning confidence in an effort to favorably influence a situation. The effect is that people who are trustworthy sound exactly like people who aren't. Fascinating.
It reminds me of the concerns around reporting confidence intervals or margins of error. If you understand statistics, you know what they mean. But if you report them, people who don't understand will think you are less trustworthy. I would argue that, if possible, you should only tell statistically informed people about your margins of error, and leave them off when talking to everyone else. Of course this is kind of condescending, but it could be better than having the majority of people think they can discredit you because "he even admits he could be wrong!" Of course, it's difficult to tailor your reporting to different audiences, down to considering the readership of a particular periodical, etc.
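For concreteness, the margin of error I have in mind is the usual one for a poll proportion (my example numbers):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 people showing 52% support: 52% +/- about 3.1 points.
print("+/- %.1f points" % (100 * margin_of_error(0.52, 1000)))
```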
And the last interesting thing in the book is about happiness. Kahneman looked into how good people's lives are. You can do this in two ways: by asking people how they feel about their lives overall, or by looking at how they feel moment by moment through the day. Kahneman puts more weight on the latter, which I think is a fair choice. He measures it with the "U-index", which is roughly the share of the day you spend in an unpleasant state.
The use of time is one of the areas of life over which people have some control. Few individuals can will themselves to have a sunnier disposition, but some may be able to arrange their lives to spend less of their day commuting, and more time doing things they enjoy with people they like. The feelings associated with different activities suggest that another way to improve experience is to switch from passive leisure, such as TV watching, to more active forms of leisure, including socializing and exercise. From the social perspective, improved transportation for the labor force, availability of child care for working women, and improved socializing opportunities for the elderly may be relatively efficient ways to reduce the U-index of society - even a reduction by 1% would be a significant achievement, amounting to millions of hours of avoided suffering.
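To make the U-index concrete, here's a toy day of my own invention; an episode counts as unpleasant when its strongest feeling is negative, and the U-index is the unpleasant share of the day:

```python
# Toy diary: (activity, hours, strongest feeling was negative?).
day = [
    ("commuting",   1.5, True),
    ("working",     8.0, False),
    ("chores",      1.0, True),
    ("socializing", 2.0, False),
    ("watching TV", 2.0, False),
]

unpleasant = sum(hours for _, hours, bad in day if bad)
total = sum(hours for _, hours, _ in day)
print("U-index: %.0f%%" % (100 * unpleasant / total))  # 2.5 / 14.5 -> ~17%
```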
I was interested in his comments on religion:
Religious participation also has relatively greater favorable impact on both positive affect and stress reduction than on life evaluation. Surprisingly, however, religion provides no reduction of feelings of depression or worry.
He also had a chart that seemed to suggest getting married made you less happy in the long run, but then he argued that we really shouldn't interpret it that way. Good? Well, I'll finish with what I thought was probably the most feel-good moment of the whole darn book:
It is only a slight exaggeration to say that happiness is the experience of spending time with people you love and who love you.