There’s an old Jewish folk tale (or possibly Chinese, depending on who you ask) that Wikipedia calls the allegory of the long spoons. The version that I learned growing up in Blue Tribe church was called “The Difference Between Heaven and Hell”, and it went like this:
Long ago there lived an old woman who had a wish. She wished more than anything to see for herself the difference between heaven and hell. The monks in the temple agreed to grant her request. They put a blindfold around her eyes, and said, “First you shall see hell.”
When the blindfold was removed, the old woman was standing at the entrance to a great dining hall. The hall was full of round tables, each piled high with the most delicious foods — meats, vegetables, fruits, breads, and desserts of all kinds! The smells that reached her nose were wonderful.
The old woman noticed that, in hell, there were people seated around those round tables. She saw that their bodies were thin, and their faces were gaunt and creased with frustration. Each person held a spoon. The spoons must have been three feet long! They were so long that the people in hell could reach the food on those platters, but they could not get the food back to their mouths. As the old woman watched, she heard their hungry, desperate cries. "I've seen enough," she cried. "Please let me see heaven."
And so again the blindfold was put around her eyes, and the old woman heard, "Now you shall see heaven." When the blindfold was removed, the old woman was confused. For there she stood again, at the entrance to a great dining hall, filled with round tables piled high with the same lavish feast. And again, she saw that there were people sitting just out of arm's reach of the food with those three-foot-long spoons.
But as the old woman looked closer, she noticed that the people in heaven were plump and had rosy, happy faces. As she watched, a joyous sound of laughter filled the air.
And soon the old woman was laughing too, for now she understood the difference between heaven and hell for herself. The people in heaven were using those long spoons to feed each other.
If you found this a little glurgey and of questionable value in delivering an actual moral lesson, well, that makes at least two of us. But allegory or not, a metaphor can still be applicable in more than one domain.
My suggestion is that this story can be a metaphor for dealing with cognitive bias.
The idea is that there are some things that we can do for other people more easily than we can do them for ourselves. This isn’t garden-variety comparative advantage; this is the idea that sometimes we have a comparative disadvantage in dealing with something that affects us, specifically because it affects us instead of somebody else. This isn’t the case in most domains, but I think it may be the case in the domain of rationality. We all know that identifying skewed thinking from the inside is really hard, since many biases insidiously warp our thinking in such a way as to prevent us from seeing them.
One thing I’ve noticed is that occasionally, when I’m developing or expressing an opinion on something—particularly questions of political significance, in the “tribal politics” sense, but sometimes in other domains—I have this vague sense that my thought process might not be entirely trustworthy. It feels as though there’s something going on in my brain that shapes my beliefs around tribal affiliation, or some other bias, rather than correct reasoning. Unfortunately, this is where my self-awareness seems to end; pushing harder on this feeling doesn’t reveal any clues as to where the fault might lie.
According to the message that I most often see promoted in the rationality community, you must cultivate the extremely difficult skill of pushing through that feeling, seeing the distortions in your thought process for what they are, and fixing them; and while of course you can cultivate this skill alongside others, in the end, you are on your own. In this view, the ultimate goal is complete cognitive self-reliance.
I want to suggest a different, complementary approach: treating rationality as a social process.
If cognitive bias is causing me to say something obviously stupid about a particular topic, then other people are likely to be better than I am at noticing what's going on; indeed, if this weren't the case, then the rationality community wouldn't have been able to recognize recurring failure modes in domains like politics. So if that vague feeling comes into my brain, and I suspect that this is in fact what's going on, might others be able to help me see through it? "Hey, I feel like idea X must be true, because argument Y, but I also feel like I've got a blind spot here and am failing to account for something obvious—does any of this sound wrong to you?"
It is better to find one fault in yourself than a thousand in someone else—but if finding a fault in someone else is more than a thousand times easier, then that implies the highest-expected-value thing to do is look for faults in each other.
(Especially since most of us are never going to be completely cognitively self-reliant no matter how hard we try, as even the most ardent rationality evangelists will acknowledge. And since, taking the outside view, most people who think that they are completely cognitively self-reliant are wrong.)
Of course, there are some obvious failure modes that rationality-as-a-social-process can fall into, and it won't work in just any social context. In an environment where treating arguments as soldiers is already completely normalized, asking people who disagree with you to tell you how your opinions are biased isn't going to bring you any closer to the truth. Sometimes you have to defect in the prisoner's dilemma, so to speak. This implies that, if we care about finding truth, we should work to create spaces where this kind of constructive criticism is normalized, and participants in the discourse can have an expectation—backed up by social norms which are enforced in the usual ways—that a request for such criticism won't be taken as an opportunity for an opposing "army" to gain ground without similarly subjecting itself to potential criticism.

The other big issue is trust: this whole process does no good unless I can take the critic's assessment of my rationality seriously, which means I have to trust their rationality, as well as their good intentions.
Overall, despite the very real pitfalls, I think that the role of feedback from others in rationality is underappreciated, and that we who seek to overcome our biases would do well to rely more heavily on it. Of course, I could be totally wrong about this—but that’s what the comments section is for.