Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
—Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
Following the tradition established by Scott Aaronson of Umeshisms and Malthusianisms, I propose the term “Hofstadterism”.
A Hofstadterism is a principle which claims that you should adjust your thinking, or your analysis of some situation, in a particular direction—and that the principle remains applicable even if you think you’ve already accounted for it.
The concept isn’t particularly closely related to the general philosophy or works of Douglas Hofstadter; I’m just using the term as a generalization of the eponymous law quoted above. In its original context, Hofstadter’s Law was a commentary on predictions of artificial intelligence; famously, on more than one occasion in the history of AI, seemingly promising initial progress led to widespread optimistic projections of future progress that then failed to arrive on schedule. Since then, “Hofstadter’s Law” has been used more broadly to refer to the planning fallacy.
Hofstadterisms seem paradoxical. If the correct answer is always to update in the same direction—in the original example, to always make your estimated completion time later, no matter how late it already is—then don’t you end up predicting that it will take an infinite amount of time?
If you apply the principle literally, yes. (Hofstadterisms do not make good machine-learning rules.) However, humans don’t actually do this; even if you really take a Hofstadterism seriously, you’re not, in real life, going to apply it infinitely many times. (Hence the saying, which I unfortunately can’t find a source for on Google: “Any infinite regression is at most three levels deep.” I suppose you could think of this as the anti-Hofstadterism.) In practice, you’re eventually going to arrive at what seems like the position which best balances all the relevant factors that you know of. Hofstadterisms are useful when we know, from the outside view, of a tendency for this seemingly balanced analysis to actually end up skewed in a particular direction. They offer the opportunity to correct for those biases which remain even after everything appears to be corrected for.
One of the most important kinds of Hofstadterism is the ethical injunction—at least, according to the way such injunctions are used by consequentialists (as opposed to actual deontologists). In theory, consequentialists ought not to have any absolute rules of ethics, other than the fundamental rule of seeking the best possible consequences—which provides no definite constraints over our actions. In practice, we find that certain rules are not merely useful, but essential—to the point where, if you think that it’s right for you to abandon them, you’re wrong. Hence the paradoxical-sounding principle: “For the good of the tribe, do not cheat to seize power even for the good of the tribe.”
One important pitfall to beware of is applying a Hofstadterism to a situation that’s actually a memetic prevalence debate.
To follow the original example from that post: Suppose you believe that our culture demonizes selfishness, and this distresses you because you’re afraid that it makes people psychologically unhealthy. You try to fight this by promoting selfishness as a virtue, giving people in your social group copies of Atlas Shrugged, whatever. Suppose you spread this idea, and it starts to take hold in your social environment, to the point where you hear others espousing it—and yet you still see so many people feeling bad about having needs and wanting to do things for themselves. You might be tempted to think that a Hofstadterism applies here: “Everyone ought to be more selfish, even if they think that they’ve already accounted for the idea that they ought to be more selfish.”
It’s hypothetically possible that you’ve uncovered a deep and insidious bias that pervades human nature. But that’s not the most likely possibility. In this scenario, you should instead consider the possibility that your social environment has become an echo chamber, and that people outside it simply never got the message in the first place.
Hofstadterisms are a powerful tool. Use them wisely.
P.S. Also in the tradition of Scott Aaronson: Anyone have any other ideas for particular Hofstadterisms?
You assume too much about the motives of others, even after you take into account that you assume too much about the motives of others.
I’ve been told many times over the last few months that you’re never adequately prepared for your first child. Will report back how true this is.
Scott Aaronson speculates that most people don’t follow inferential chains more than two or three steps, using an example which involves infinite regress, which could be what you’re thinking of. (The link apparently didn’t work; the URL is: http://www.scottaaronson.com/blog/?p=232)
Actually, I found the source; it was a Facebook post by Eliezer Yudkowsky.