Background: Vaguely inspired by “On Measuring Tradeoffs in Effective Altruism”, by Ozy Frantz on Thing of Things.
Content warning: Scrupulosity.
Causal modeling is a way of trying to predict the consequences of something that you might do or that might happen, based on cause-and-effect relationships. For instance, if I drop a bowling ball while I’m holding it in front of me, gravity will cause it to accelerate downward until it lands on my foot. This, in turn, will cause me to experience pain. Since I don’t want that, I can infer from this causal model that I should not drop the bowling ball.
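To make this concrete, here’s a minimal sketch in Python of the bowling-ball model (the representation and all the names are my own illustration, nothing standard): a causal model as a chain of cause-and-effect rules that can be queried before acting.

```python
# A toy causal model: a mapping from each event to its predicted consequence.
# All labels are illustrative; this is just the bowling-ball example as code.
CAUSES = {
    "drop bowling ball": "ball accelerates downward",
    "ball accelerates downward": "ball lands on my foot",
    "ball lands on my foot": "pain",
}

def predict_consequences(action):
    """Follow the cause-and-effect chain from an action to its final outcome."""
    chain = [action]
    while chain[-1] in CAUSES:
        chain.append(CAUSES[chain[-1]])
    return chain

UNDESIRED = {"pain"}

def should_do(action):
    """Endorse an action only if none of its predicted consequences are undesired."""
    return not UNDESIRED & set(predict_consequences(action))

print(predict_consequences("drop bowling ball"))
# ['drop bowling ball', 'ball accelerates downward', 'ball lands on my foot', 'pain']
print(should_do("drop bowling ball"))  # False: the model advises against dropping it
```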
People are predictable in certain ways; consequently, other people’s actions and mental states can be included in a causal model. For example, if I bake cookies for my friend who likes cookies, I predict that this will cause her to feel happy, and that will cause her to express her appreciation. Conversely, if I forget her birthday, I predict that this will cause her to feel interpersonally neglected.
All of this is obvious; it’s what makes social science work (not to mention advertising, competitive games, military strategy, and so forth).
But what happens when you include your own actions as effects in a causal model? Or your own mental states as causes in it?
We can come up with trivial examples: If I’m feeling hungry, I predict that this will cause me to go to the kitchen and make a snack. But this doesn’t really tell me anything useful; if that situation comes up, this analysis plays no part in my decision to make a snack. I just do it because I’m hungry, not because I know that my hunger causes me to do it. (Of course, the reason I do it is that I predict that, if I make a snack and eat it, I will stop being hungry; and that causal model does play an important role in my decision. But in that case, my actions are the cause and my mental state is the effect, whereas for the purposes of this post I’m interested in causal models where the reverse is true.)
Here’s a nontrivial example: If I take a higher-paying job that requires me to move to a city where I have no friends (and don’t expect to easily make new ones) in order to donate the extra income to an effective charity, I predict that this will cause me to feel lonely and demoralized, which will cause me to resent the ethical obligations towards charity that I’ve chosen to adopt. This will make me less positively inclined towards effective altruism and less likely to continue donating.
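In the same toy representation as before (again purely illustrative, with labels that paraphrase this example), the difference is that the chain now runs through my own future mental states, which is what makes it a causal self-model:

```python
# The same chain-following idea, now with my own mental states as links.
# Labels paraphrase the example above; this is illustration, not a real model.
CAUSES = {
    "take higher-paying job in a friendless city": "feel lonely and demoralized",
    "feel lonely and demoralized": "resent my chosen charitable obligations",
    "resent my chosen charitable obligations": "become less inclined toward EA",
    "become less inclined toward EA": "stop donating",
}

# Which nodes live in my own head rather than in external physics.
MY_MENTAL_STATES = {
    "feel lonely and demoralized",
    "resent my chosen charitable obligations",
    "become less inclined toward EA",
}

def predict_consequences(action):
    """Follow the cause-and-effect chain from an action to its final outcome."""
    chain = [action]
    while chain[-1] in CAUSES:
        chain.append(CAUSES[chain[-1]])
    return chain

for step in predict_consequences("take higher-paying job in a friendless city"):
    marker = "  (my own mental state)" if step in MY_MENTAL_STATES else ""
    print(step + marker)
```

Every link that passes through my own head is one whose truth I partly decide, which is where the trouble described below comes in.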
This wasn’t entirely hypothetical (and I did not, in fact, take the higher-paying job, opting instead to stay in my preferred city). Furthermore, I frequently see effective altruists use this sort of argument as a rationale for not making sacrifices larger than they are willing to make, such as taking a job you don’t like, capping your income, going vegan, or donating a kidney.
I believe that we ought to be more careful about this than we currently are. Hence the title of this post.
The thing about these predictions is that they can become self-fulfilling prophecies. At the end of the day, you’re the one who decides your actions. If you give yourself an excuse not to ask “okay, but what if I did the thing anyway?” then you’re more likely to end up deciding not to do the thing. Which may have been the desired outcome all along (I really didn’t want to move), but if you’re not honest with yourself about your reasons for doing what you’re doing, that can screw over your future decision-making process. Not to mention that the thing you’re not doing may, in fact, have been the right thing to do. Maybe you even knew that it was.
(The post linked at the top provides a framework which mitigates this problem a bit in the case of effective altruism, but doesn’t eliminate it. You still have to defend your decision not to increase your total altruism budget in a given category—or overall, if you go with one of the alternative approaches Ozy mentions in passing that involve quantifying and offsetting the value of an action.)
But the other thing about the Dark Arts is that sometimes we need to use them. In the case of causal self-modeling, that’s because sometimes your causal self-model is accurate. If I devoted all my material and mental resources to effective altruism, I probably really would burn out quickly.
The thing about that assessment is that it’s based on the outside view (how this tends to go for people in general), not on my personal knowledge of my own psyche. Since an outside-view prediction doesn’t rest on my own introspective reports, it’s harder to bend toward the answer I want, and that provides a defense against this kind of self-deception.
Similarly, a valuable question to ask is: to what extent is the you who’s making this causal model right now the same you as the you who’s going to make the relevant decisions? This is how I justify my decision not to go vegan at this time. I have trouble finding foods that I like, and I predict that if I stopped eating dairy I would eat very poorly and my health would take a turn for the worse. That would be the result of in-the-moment, viscerally driven reactions that I can predict, but not control, from here while making my far-mode decision.
So in the end, we do have to make these kinds of models, and there are ways to protect ourselves from bias while doing so. But we should never forget that it’s a dangerous game.
Further reading: “Dark Arts of Rationality”, by Nate Soares on Less Wrong.
My understanding of the term “dark art” in the LW vernacular is that it refers to strategies that intentionally exploit cognitive biases to instill certain beliefs in your audience. This seems straightforwardly generalizable to exploiting weaknesses to achieve certain outcomes, but even then I’m not sure how it applies here?