Probability: A Moment of Darkness

This was the Moment of Darkness speech that I gave at Boston Secular Solstice 2016.

Epistemic status: I spent several weeks thinking about this, but wrote it in a couple hours before the ceremony, because writing is aversive and I’m an inveterate procrastinator. Although I believe the claim about power laws to be true in some broad sense, this is based primarily on half-remembered “conventional wisdom” that I suspect I absorbed by cultural osmosis from the works of Nassim Nicholas Taleb. It is nowhere near as well-justified in the speech itself as it ought to be; the two statistics cited were the only ones I could find in the time available.


Our community is ambitious. We aim to do big things, to solve adult problems.

Four years ago, in New York City, in a ceremony much like this one, Ray Arnold, the creator of Secular Solstice, spoke about this. He said:

We have people in this room, right now, who are working on fixing big problems in the medical industry. We have people in this room who are trying to understand and help fix the criminal justice system. We have people in this room who are dedicating their lives to eradicating global poverty. We have people in this room who are literally working to set in motion plans to optimize everything ever. We have people in this room who are working to make sure that the human race doesn’t destroy itself before we have a chance to become the people we really want to be.

And while they aren’t in this room, there are people we know who would be here if they could, who are doing their part to try and solve this whole death problem once and for all.

And I don’t know whether and how well any of us are going to succeed at any of these things, but…

God damn, people. You people are amazing, and even if only one of you made a dent in some of the problems you’re working on, that… that would just be incredible.

Indeed it would. It would be incredible. And I believe that the same is true of the people in this room today, four years later. We recognize that these things matter, that they matter more than most of society recognizes, more even than any of us can really visualize. The utilitarian significance of even the least of those problems is easily in the millions of lives.

But we are also a community of truth seekers. It’s not enough for the stories we tell ourselves about what we’re going to accomplish to be motivating; they have to be accurate. So, what should we expect when we set out to change the world?

Large-scale technological and societal change tends to obey power laws; a small minority of individual cases wind up having the vast majority of the impact. Y Combinator, the most prestigious startup accelerator in the world, has funded about a thousand startups; about three-quarters of their returns have come from just two of these. Making a big profit is a crude proxy for impact through a startup, but if we run with it, that gives us a probability of 0.3%. And that’s one of the higher-risk-higher-reward life paths available to someone today.

I didn’t have time to find more numbers, but as far as I know, they basically all look roughly like this. According to a study by the Financial Times, 40% of millennials think they’ll have a global impact. In other words, almost 40% of millennials are extremely miscalibrated.

So we see that truly changing the world is a rare event. But that, too, is part of the standard story; because it’s rare, you’ll have to work very hard and be very resourceful to pull it off. But you can still do it if you try, right?

This is the kind of thing that our brains tell us because of availability bias. As Bruce Schneier put it: “We tend to exaggerate [the probability of] spectacular, strange and rare events, and downplay ordinary, familiar and common ones.” He was talking about things like terrorist attacks, but it’s just as true of things like making a major scientific breakthrough. “Stories engage us at a much more visceral level [than data], especially stories that are vivid, exciting or personally involving.”

In Terry Pratchett’s Discworld, “million-to-one chances crop up nine times out of ten”. It’s not any different from other fiction in this regard; it’s just more honest. Nonfiction isn’t much better because of selection bias; you hear about the guy who won the Nobel, and not about the hundreds of thousands of others who set out to win it and failed.

Eliezer Yudkowsky summed up the problem in “A Technical Explanation of Technical Explanations”:

Remember Spock from Star Trek? Spock often says something along the lines of, “Captain, if you steer the Enterprise directly into a black hole, our probability of survival is only 2.837%.” Yet nine times out of ten the Enterprise is not destroyed. The people who write this stuff have no idea what scientists mean by “probability”. They suppose that a probability of 99.9% is something like feeling really sure. They suppose that Spock’s statement expresses the challenge of successfully steering the Enterprise through a black hole, like a video game rated five stars for difficulty. What we mean by “probability” is that if you utter the words “two percent probability” on fifty independent occasions, it better not happen more than once.

(At least, not in the limit.)

There are fewer than fifty of us in this room, and the probability of an individual achieving the kinds of ambitions we’re talking about here is probably lower than two percent.

So that’s the problem: if you want to change the world, the outside view says it’s probably going to be all for nothing. And any attempt to change the world in a way that sidesteps that problem is unlikely to have a large individual impact.

This problem is so omnipresent in any attempt to do something big that it’s common to just ignore it as background noise. We shouldn’t. We seek the truth because avoiding it inevitably leads to failure when dealing with hard problems. If we’re serious about solving them, we have to face this harsh reality head-on.

So what do we do about it?

One option is to give up, and stick to easy things that are likely to work. You can optimize for your own happiness pretty well that way. But this is no solution at all; we have to do the hard things because, as the Comet King said, “Somebody has to and no one else will.” At least, not on the margin.

Another option is unjustified optimism. This seems to be unfortunately common in our community. Some higher-level rationalists have implemented an advanced form of this where they make their monkey brains believe one thing even as they know the truth to be something else. This might work, if you can pull it off.

A third option is to forcibly suppress your monkey brain’s risk-aversion and submit yourself in obedience to the power laws: spend your life doing the risky thing, knowing that it will probably all be for naught, because it’s the right thing to do. If you’re working on more incremental things like malaria nets, this can work out okay. You can be like the man who throws the starfish back into the sea every day; others may say it doesn’t matter because more will always wash up, but it mattered to that one. If you’re aiming for something more all-or-nothing, like getting AGI right, then this may be psychologically harder.

But there’s one other option, and I think this is the one I like best: Embrace the community.

This works for two reasons. The first is that, when you’re in a community of people with similar goals, you don’t have to do everything yourself. You can clerk the office that handles the funding, or host the convention that summons the family. You can be one of the people who makes the future turn out okay, even if you’re not the one-in-a-million who makes the scientific breakthrough that fixes everything.

The second reason is that our community might actually be awesome enough to beat the outside-view odds.

Four years ago, when Ray spoke those words, it didn’t necessarily look that way. That was around the time I found the community online, though I did not meet most of you until later. To be sure, some interesting things had already happened by then, and maybe the future will look back and see some of them as the beginning of the Great Change (or maybe not). But in those past four years, I’ve been lucky enough to watch us grow up, and start accumulating a track record of changing the world in real ways.

We brought AI risk from a topic argued about by nerds on internet forums to an issue taken seriously by the world’s foremost experts in the relevant fields, by billionaire influencers like Bill Gates and Elon Musk, and, as of last October, by the sitting President of the United States.

We solved the problem of figuring out which charities actually work for rescuing people from global poverty and moved hundreds of millions of dollars to them, and that number’s only going to grow.

With help from allied coalitions, we got giant corporations to drop the worst offenders in factory farming, banned confinement cages right here in Massachusetts, and brought funding and attention to an industry that has invented the most convincing meat substitute that has ever existed.

Not everything we’ve attempted has yet borne such undeniable fruit, and we’ve had our share of failures. But I think we’ve earned the right to call ourselves a community that’s capable of getting results, and I think there’ll be more soon, including in areas we haven’t even thought of yet.

The law of the universe is that you can’t beat the odds. But conditional on the right things? Yeah, I think you can.

The Dark Art of Causal Self-Modeling

Background: Vaguely inspired by “On Measuring Tradeoffs in Effective Altruism”, by Ozy Frantz on Thing of Things.

Content warning: Scrupulosity.


Causal modeling is a way of trying to predict the consequences of something that you might do or that might happen, based on cause-and-effect relationships. For instance, if I drop a bowling ball while I’m holding it in front of me, gravity will cause it to accelerate downward until it lands on my foot. This, in turn, will cause me to experience pain. Since I don’t want that, I can infer from this causal model that I should not drop the bowling ball.
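To make the structure concrete, here is a minimal sketch, in Python, of the bowling-ball example as a toy causal model: each node has parents and a function computing its value from theirs, and prediction is just evaluating the graph forward. The node names and the predict helper are my own illustrative choices, not taken from any real causal-inference library.

# A minimal sketch of the bowling-ball example as a chain of cause-and-effect
# nodes. The node names and the evaluation helper are illustrative assumptions.

causal_model = {
    # node: (parents, function from parent values to this node's value)
    "drop_ball":     ([], lambda: True),
    "ball_falls":    (["drop_ball"], lambda dropped: dropped),
    "lands_on_foot": (["ball_falls"], lambda falls: falls),
    "pain":          (["lands_on_foot"], lambda hit: hit),
}

def predict(model):
    """Evaluate each node once its parents are known; return all predicted values."""
    values = {}
    def value_of(name):
        if name not in values:
            parents, fn = model[name]
            values[name] = fn(*(value_of(p) for p in parents))
        return values[name]
    for node in model:
        value_of(node)
    return values

print(predict(causal_model))
# {'drop_ball': True, 'ball_falls': True, 'lands_on_foot': True, 'pain': True}
# Since the model predicts pain, the inferred decision is: don't drop the ball.

The same forward-evaluation idea carries through the rest of this post; only the nodes change.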

People are predictable in certain ways; consequently, other people’s actions and mental states can be included in a causal model. For example, if I bake cookies for my friend who likes cookies, I predict that this will cause her to feel happy, and that will cause her to express her appreciation. Conversely, if I forget her birthday, I predict that this will cause her to feel interpersonally neglected.

All of this is obvious; it’s what makes social science work (not to mention advertising, competitive games, military strategy, and so forth).

But what happens when you include your own actions as effects in a causal model? Or your own mental states as causes in it?

We can come up with trivial examples: If I’m feeling hungry, I predict that this will cause me to go to the kitchen and make a snack. But this doesn’t really tell me anything useful; if that situation comes up, this analysis plays no part in my decision to make a snack. I just do it because I’m hungry, not because I know that my hunger causes me to do it. (Of course, the reason I do it is because I predict that, if I make a snack and eat it, I will stop being hungry; and that causal model does play an important role in my decision. But in that case, my actions are the cause and my mental state is the effect, whereas for the purposes of this post I’m interested in causal models where the reverse is true.)

Here’s a nontrivial example: If I take a higher-paying job that requires me to move to a city where I have no friends (and don’t expect to easily make new ones) in order to donate the extra income to an effective charity, I predict that this will cause me to feel lonely and demoralized, which will cause me to resent the ethical obligations towards charity that I’ve chosen to adopt. This will make me less positively inclined towards effective altruism and less likely to continue donating.

This wasn’t entirely hypothetical (and I did not in fact take the higher-paying job, opting instead to stay in my preferred city). Furthermore, I see effective altruists frequently use this sort of argument as a rationale for not making sacrifices that are larger than they are willing to make. (Such as taking a job you don’t like, or capping your income, or going vegan, or donating a kidney.)

I believe that we ought to be more careful about this than we currently are. Hence the title of this post.

The thing about these predictions is that they can become self-fulfilling prophecies. At the end of the day, you’re the one who decides your actions. If you give yourself an excuse not to ask “okay, but what if I did the thing anyway?” then you’re more likely to end up deciding not to do the thing. Which may have been the desired outcome all along—I really didn’t want to move—but if you’re not honest with yourself about your reasons for doing what you’re doing, that can screw over your future decision-making process. Not to mention that the thing you’re not doing may, in fact, have been the right thing to do. Maybe you even knew that it was.

(The post linked at the top provides a framework which mitigates this problem a bit in the case of effective altruism, but doesn’t eliminate it. You still have to defend your decision not to increase your total altruism budget in a given category—or overall, if you go with one of the alternative approaches Ozy mentions in passing that involve quantifying and offsetting the value of an action.)

But the other thing about the Dark Arts is that sometimes we need to use them. In the case of causal self-modeling, that’s because sometimes your causal self-model is accurate. If I devoted all my material and mental resources to effective altruism, I probably really would burn out quickly.

The thing about that assessment is that it’s based on the outside view, not on my personal knowledge of my own psyche. This provides a defense against this kind of self-deception.

Similarly, a valuable question to ask is: To what extent is the you who’s making this causal model right now, the same you as the you who’s going to make the relevant decisions? This is how I justify my decision not to go vegan at this time. I find it difficult to find foods that I like, and I predict that if I stopped eating dairy I would eat very poorly and my health would take a turn for the worse. That would be the result of in-the-moment viscerally-driven reactions that I can predict, but not control, from here while making my far-mode decision.

So in the end, we do have to make these kinds of models, and there are ways to protect ourselves from bias while doing so. But we should never forget that it’s a dangerous game.


Further reading: “Dark Arts of Rationality”, by Nate Soares on Less Wrong.

Hofstadterisms

Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid


Following the tradition of Umeshisms and Malthusianisms established by Scott Aaronson, I propose the term “Hofstadterism”.

A Hofstadterism is a principle which claims that you should adjust your thinking, or your analysis of some situation, in a particular direction—and that the principle remains applicable even if you think you’ve already accounted for it.

The concept isn’t particularly closely related to the general philosophy or works of Douglas Hofstadter; I’m just using the term as a generalization of the eponymous law quoted above. In its original context, Hofstadter’s law was a commentary on predictions of artificial intelligence; famously, on more than one occasion in the history of AI, seemingly-promising initial progress led to widespread optimistic projections of future progress that then failed to arrive on schedule. Since then, “Hofstadter’s law” has been used more broadly to refer to the planning fallacy.

Hofstadterisms seem paradoxical. If the correct answer is always to update in the same direction—in the original example, to always make your estimated completion time later, no matter how late it already is—then don’t you end up predicting that it will take an infinite amount of time?

If you apply the principle literally, yes. (Hofstadterisms do not make good machine-learning rules.) However, humans don’t actually do this; even if you really take a Hofstadterism seriously, you’re not actually in real life going to apply it infinitely many times. (Hence the saying, which I unfortunately can’t find a source for on Google: “Any infinite regression is at most three levels deep.” I suppose you could think of this as the anti-Hofstadterism.) In practice, you’re eventually going to arrive at what seems like the position which best balances all the relevant factors that you know. Hofstadterisms are useful when we know, from the outside view, of a tendency for this seemingly-balanced analysis to actually end up being skewed in a particular direction. They offer the opportunity to correct for those biases which remain even after everything appears to be corrected for.
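To see the divergence concretely, here is a toy sketch in Python. The 1.5× correction factor and the ten-day initial estimate are arbitrary assumptions, not anything from Hofstadter: applying the correction a bounded number of times gives a finite adjustment, while applying it “literally”, over and over, grows without bound.

# A toy illustration of why a literal reading of Hofstadter's Law diverges
# while a bounded, human-style application doesn't. The 1.5x correction
# factor and the 10-day starting estimate are made-up assumptions.

def adjusted_estimate(initial_days: float, correction: float, applications: int) -> float:
    """Apply the 'it always takes longer' correction a fixed number of times."""
    estimate = initial_days
    for _ in range(applications):
        estimate *= correction
    return estimate

naive = 10.0  # initial estimate: 10 days

# A human applies the correction once or twice and stops:
print(adjusted_estimate(naive, 1.5, 1))    # 15.0
print(adjusted_estimate(naive, 1.5, 2))    # 22.5

# Taken literally, the rule says to keep applying it forever, and the
# estimate grows without bound:
print(adjusted_estimate(naive, 1.5, 100))  # roughly 4e18 days

The point of the exercise is just that a Hofstadterism works as a finite, outside-view nudge, not as an equation to be iterated to a fixed point.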

One of the most important kinds of Hofstadterism is the ethical injunction—at least, according to the way such injunctions are used by consequentialists (as opposed to actual deontologists). In theory, consequentialists ought not to have any absolute rules of ethics, other than the fundamental rule of seeking the best possible consequences—which provides no definite constraints over our actions. In practice, we find that certain rules are not merely useful, but essential—to the point where, if you think that it’s right for you to abandon them, you’re wrong. Hence the paradoxical-sounding principle: “For the good of the tribe, do not cheat to seize power even for the good of the tribe.”

One important pitfall is trying to apply a Hofstadterism to a situation that’s actually a memetic prevalence debate.

To follow the original example from that post: Suppose you believe that our culture demonizes selfishness, and this distresses you because you’re afraid that it makes people psychologically unhealthy. You try to fight this by promoting selfishness as a virtue, giving people in your social group copies of Atlas Shrugged, whatever. Suppose you spread this idea, and it starts to take hold in your social environment, to the point where you hear others espousing it—and yet you still see so many people feeling bad about having needs and wanting to do things for themselves. You might be tempted to think that a Hofstadterism applies here: “Everyone ought to be more selfish, even if they think that they’ve already accounted for the idea that they ought to be more selfish.”

It’s hypothetically possible that you’ve uncovered a deep and insidious bias that pervades human nature. But that’s not the most likely possibility. In this scenario, you should instead consider the possibility that your social environment has become an echo chamber, and that people outside it simply never got the message in the first place.

Hofstadterisms are a powerful tool. Use them wisely.


P. S. Also in the tradition of Scott Aaronson: Anyone have any other ideas for particular Hofstadterisms?

Rationality as a Social Process

There’s an old Jewish folk tale (or possibly Chinese, depending on who you ask) that Wikipedia calls the allegory of the long spoons. The version that I learned growing up in Blue Tribe church was called “The Difference Between Heaven and Hell”, and it went like this:

Long ago there lived an old woman who had a wish. She wished more than anything to see for herself the difference between heaven and hell. The monks in the temple agreed to grant her request. They put a blindfold around her eyes, and said, “First you shall see hell.”

When the blindfold was removed, the old woman was standing at the entrance to a great dining hall. The hall was full of round tables, each piled high with the most delicious foods — meats, vegetables, fruits, breads, and desserts of all kinds! The smells that reached her nose were wonderful.

The old woman noticed that, in hell, there were people seated around those round tables. She saw that their bodies were thin, and their faces were gaunt, and creased with frustration. Each person held a spoon. The spoons must have been three feet long! They were so long that the people in hell could reach the food on those platters, but they could not get the food back to their mouths. As the old woman watched, she heard their hungry desperate cries. “I’ve seen enough,” she cried. “Please let me see heaven.”

And so again the blindfold was put around her eyes, and the old woman heard, “Now you shall see heaven.” When the blindfold was removed, the old woman was confused. For there she stood again, at the entrance to a great dining hall, filled with round tables piled high with the same lavish feast. And again, she saw that there were people sitting just out of arm’s reach of the food with those three-foot long spoons.

But as the old woman looked closer, she noticed that the people in heaven were plump and had rosy, happy faces. As she watched, a joyous sound of laughter filled the air.

And soon the old woman was laughing too, for now she understood the difference between heaven and hell for herself. The people in heaven were using those long spoons to feed each other.

If you found this a little glurgey and of questionable value in delivering an actual moral lesson, well, that makes at least two of us. But even if Wikipedia calls it an allegory, a metaphor can still be applicable in more than one domain.

My suggestion is that this story can be a metaphor for dealing with cognitive bias.

The idea is that there are some things that we can do for other people more easily than we can do them for ourselves. This isn’t garden-variety comparative advantage; this is the idea that sometimes we have a comparative disadvantage in dealing with something that affects us, specifically because it affects us instead of somebody else. This isn’t the case in most domains, but I think it may be the case in the domain of rationality. We all know that identifying skewed thinking from the inside is really hard, since many biases insidiously warp our thinking in such a way as to prevent us from seeing them.

One thing I’ve noticed is that occasionally, when I’m developing or expressing an opinion on something—particularly questions of political significance, in the “tribal politics” sense, but sometimes in other domains—I have this vague sense that my thought process might not be entirely trustworthy. It feels as though there’s something going on in my brain that shapes my beliefs around tribal affiliation, or some other bias, rather than correct reasoning. Unfortunately, this is where my self-awareness seems to end; pushing harder on this feeling doesn’t reveal any clues as to where the fault might lie.

According to the message that I most often see promoted in the rationality community, you must cultivate the extremely difficult skill of pushing through that feeling, seeing the distortions in your thought process for what they are, and fixing them—and that, while of course you can cultivate this skill alongside others, in the end, you are on your own. In this view, the ultimate goal is complete cognitive self-reliance.

I want to suggest a different, complementary approach: treating rationality as a social process.

If cognitive bias is causing me to say something obviously stupid about a particular topic, then other people are likely to notice what’s going on better than I am; indeed, if this weren’t the case, then the rationality community wouldn’t have been able to recognize recurring failure modes in domains like politics. So if that vague feeling comes into my brain, and I suspect that this is in fact what’s going on, might others be able to help me see through it? “Hey, I feel like idea X must be true, because argument Y, but I also feel like I’ve got a blind spot here and am failing to account for something obvious—does any of this sound wrong to you?”

It is better to find one fault in yourself than a thousand in someone else—but if finding a fault in someone else is more than a thousand times easier, then that implies the highest-expected-value thing to do is look for faults in each other.

(Especially since most of us are never going to be completely cognitively self-reliant no matter how hard we try, as even the most ardent rationality evangelists will acknowledge. And since, taking the outside view, most people who think that they are completely cognitively self-reliant are wrong.)
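As a back-of-the-envelope check of that expected-value claim, here is a sketch in which every number is made up for illustration: suppose a fault found in yourself is worth a thousand times as much as one found in someone else, but faults in others can be spotted more than a thousand times faster.

# Back-of-the-envelope expected-value comparison. Every number here is an
# assumption invented for the sake of illustration.

value_of_own_fault = 1000.0      # finding one fault in yourself
value_of_others_fault = 1.0      # finding one fault in someone else

own_faults_per_hour = 0.001      # introspection is hard
others_faults_per_hour = 2.0     # more than 1000x easier, per the claim above

ev_introspection = value_of_own_fault * own_faults_per_hour            # 1.0
ev_mutual_criticism = value_of_others_fault * others_faults_per_hour   # 2.0

print(ev_mutual_criticism > ev_introspection)  # True

Under those (entirely invented) numbers, looking for faults in each other wins; the qualitative conclusion only holds if the ease-of-detection ratio really does exceed the value ratio.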

Of course, there are some obvious failure modes that rationality-as-a-social-process can fall into, and it won’t work in just any social context. In an environment where treating arguments as soldiers is already completely normalized, asking people who disagree with you to tell you how your opinions are biased isn’t going to bring you any closer to the truth. Sometimes you have to defect in the prisoner’s dilemma, so to speak. This implies that, if we care about finding truth, we should work to create spaces where this kind of constructive criticism is normalized, and participants in the discourse can have an expectation—backed up by social norms which are enforced in the usual ways—that a request for such criticism won’t be taken as an opportunity for an opposing “army” to gain ground without similarly subjecting itself to potential criticism. And the other big issue is trust; this whole process does no good unless I can take the critic’s assessment of my rationality seriously, which means I have to trust their rationality, as well as their good intentions.

Overall, despite the very real pitfalls, I think that the role of feedback from others in rationality is underappreciated, and that we who seek to overcome our biases would do well to rely more heavily on it. Of course, I could be totally wrong about this—but that’s what the comments section is for.