Set List from Boston Secular Solstice 2015

A few people have asked for the list of songs and readings performed at last week’s Secular Solstice. Here they are, with authorship information and links.

This set list was compiled by me, Chelsea Voss, James Babcock, Jeff Kaufman, and Julia Wise. Raymond Arnold, the creator of Secular Solstice, advised us throughout the process, and the program follows the traditional six-part Secular Solstice arc. The event itself was last Friday, December 11, at the MIT Chapel. Jeff, Chelsea, and Demetri Sampas provided piano accompaniment; Julia led the majority of the songs, with various others also leading songs and performing readings. The total duration of the Solstice was a little under two hours.

This post will be updated when the text of the Moment of Darkness speech has been published online.


Light

Twilight

Eve

Night

Dawn

Light

Opening Speech from Boston Secular Solstice 2015

I opened last week’s Secular Solstice by reading this. I haven’t edited it for publication; it reads like a blog post because that’s how I naturally write. (Probably not the optimal format for a speech, but oh well.)

The ideas contained in here come primarily from Raymond Arnold’s writings on humanist ritual, particularly “The Story of Winter”, and from Kevin Simler’s writings on the evolution and function of religion, particularly “Post-Atheism: Religion Refactored”.


Good evening. Welcome to Secular Solstice.

If you haven’t been to a Secular Solstice before, you might be asking yourself: What exactly are we, as secular humanists, doing here, in a building intended for religious services, participating in a ritual that includes readings and group singing and other elements that sound suspiciously like a religious service?

If you have been to a Secular Solstice before, then go ahead and ask yourself that question anyway, because I think it turns out to be kind of a deep question.

Religion has been a feature of most human societies for the past 12,000 years or so, though some aspects of it are older. We don’t entirely understand how it began. One story, which you may have heard before if you attended a previous Solstice in New York, is that human brains have evolved to be good at dealing with other humans, so when early humans encountered natural phenomena like the weather, they tended to attribute them to human-like agents—the gods. And so they would start asking these gods for things like a good harvest, and over time these prayers and practices evolved into religion as we know it today.

It’s a neat story, but it’s not the whole story. Why the social aspects of religion? Why pray and sing and have all these rituals in public, with other people? Why isn’t religion just a personal thing?

Perhaps—and I’m not an anthropologist, I don’t know how much agreement there is on this among experts—but perhaps religion helped tribes to coordinate.

Rituals and music served to provide group bonding experiences. Myths and legends helped give the tribe a sense of shared purpose. Religious laws and commandments helped them live harmoniously with one another. Taken together, these factors could help make the members of a tribe more able to trust one another, to cooperate in prisoner’s dilemmas. And perhaps those tribes that could do this were rewarded with a greater ability to survive and reproduce than those that could not.

And sometimes, they were able to coordinate to do more than just that. Sometimes, they were able to do things like spend hundreds of years dragging hundreds of tons of stones over 150 miles to build something like Stonehenge, the ancient archaeological wonder. We don’t fully understand why Stonehenge was built, though it seems to have had some kind of ritual purpose. One theory, which has garnered a fair bit of academic support, is that Stonehenge was a place of healing. These ancient people coordinated to bring this place into existence so that the sick and injured could be ritually healed there, through the power of ringing sounds that were made by hitting the rocks.

Unfortunately, human physiology doesn’t actually work that way, and so these rituals didn’t really heal the sick. That’s the danger of this kind of cultural evolution; it can only get you so far. It might be useful to have beliefs that give you a sense of belonging with your fellow humans, but it’d be even nicer if these beliefs were, y’know, actually true. So that you don’t spend hundreds of years building a healing center that doesn’t actually work. And these sorts of evolved religions didn’t quite get there.

But of course, I’m not telling you anything here that you don’t already know. The more interesting question for secular humanists is: Can we get these benefits of religion—particularly, in the context of Secular Solstice, the benefits of ritual—without giving up the benefits that come from believing things that are actually true?

Some of us think that we can. And for the fifth year now, Secular Solstice has been a test of that hypothesis.

One way to help make this work, of course, is to be explicit about what exactly we’re trying to do. This wouldn’t have worked for the early humans—they didn’t know enough about their own history or how the universe actually functions—but it can work for us. So, instead of praying to the gods and hoping that this incidentally brings us closer together, I can just tell you directly that the theme for this Solstice is coordination—the benefits it can bring us, the challenges in making it work, and ultimately, why we as a species will need to get better at it, if we want to survive and prosper.

Speaking of the gods…perhaps we shouldn’t dispense with them entirely. They may not exist in a literal sense, but they can still be useful as metaphors. An abstract concept can be easier to wrap your head around if you give it a name and a face, as the ancients did. We may have had more practice with high-level abstract reasoning than they did, but this stuff is still complicated and we need all the help we can get in understanding it. Provided, of course, that we remain aware of what we’re doing. So in that sense, you may hear the names of a few gods spoken as part of this ritual tonight.

So that’s why we’re here tonight. We can bring ourselves closer together, and perhaps, in time, build our own Stonehenge—but one based on the truths we’ve learned about the universe.

Should We Partition Effective Altruist Spaces?

On Tumblr, Bartlebyshop brings up concerns that she’s had with effective altruist writing that have kept her away from it. I don’t think this is an isolated case; I think it indicates a problem that this community has been having.

Supporting and discussing EA encompasses a lot of different things. Some of those things are controversial even within the movement, let alone among society at large. Occasionally, one of these controversies metastasizes and takes on a visibility far out of proportion to its actual importance among most EAs. Within the last couple of weeks, we’ve had this happen with meat at EA Global, with Nick Bostrom’s astronomical waste argument and the role of AI risk within the movement, with the idea that EAs don’t care about art, and I think I’m missing a few things.

Obviously some of this is the unavoidable result of universal human political dynamics, but I do think that, to a certain extent, this is something that we ought to be trying to fix. EAs are disproportionately likely to be the kinds of people who love to argue about things, and it’s important to do so in order to find the most effective things to do, but it can also be tiring when EA spaces have been temporarily taken over by the latest iteration of some perennial argument that you’re not interested in. It may be worth asking ourselves if this is something that could be ameliorated with better social technology.

Although some online EA spaces are devoted to a particular cause or focus area, most of them are fairly general-purpose in terms of what kinds of discussions happen there. With the benefit of hindsight, I think it might have been a good idea to instead set up different spaces for different types of discussions.

In particular, four kinds of discussions come to mind, each of which might do best in its own space:

  • Philosophical arguing. This would be where people could talk about things like consequentialism/metaethics, the drowning child argument, the moral value of the far future and of animals, the relative importance of EA’s major focus areas, etc. Right now, this is the area that I think most needs to be contained; a lot of people are allergic to these kinds of arguments, both because they often lead to extreme conclusions and because they’ve been going on forever and probably aren’t going to be solved to everyone’s satisfaction anytime soon. At the same time, we want these discussions to be part of EA; they are a major reason why many EAs are EAs, and a major guiding force in many EAs’ donation decisions and life decisions in general.
  • Comparative analysis of causes and charities. I confess that this is the area that I personally am most interested in, and I sometimes think that the other areas have driven this one largely out of sight, except at a few blogs like GiveWell’s. (Of course, this is probably because many people find this kind of analysis boring, and that’s fine; we want to have something for everyone.) In spaces devoted to this, we’d have discussions of research by organizations like GiveWell and OpenPhil and ACE, and of what interventions and causes are most promising. (We’d want this to focus on analysis that doesn’t depend on, or that explicitly conditions on, extremely deep value judgments of the kind argued over in the “philosophical arguing” section.)
  • Mutual support. This would be a place for people who’ve allowed EA to shape their lives—by donating a significant percentage of their income, or going vegetarian, or choosing an effectiveness-oriented career path—to talk about their shared life experiences as EAs, and to request and give advice. I think that Giving What We Can has a comparative advantage here, and also that a lot of important discussions in this sphere are happening on blogs like The Unit of Caring on Tumblr.
  • Community organizing. Here we’ve got things like .impact, discussions among meetup organizers, and other sorts of meta-concerns. To a large extent, these things already happen in their own special-purpose spaces, but there might still be more of them in general-purpose spaces than is ideal.

Of course, since the established EA spaces already exist and we can’t change the past, there isn’t room for a clean partition. However, I do wonder if there’s anything that individuals within EA communities can do to move things in this general direction, and whether doing so is likely to be a good idea.

Thoughts?

Is It Morally Obligatory to Precommit Not to Benefit from Negative-Utility Events?

TL;DR: Iff the potential for people like you to benefit from such an event is a significant causal component of the possibility of that event happening.


Consider the following hypothetical situations:

  1. The murder-mystery situation: Your wealthy elderly relative plans to name you as their heir. Should you decline?
  2. You’re a geologist studying a particular active volcano. If it erupted, measurements from the eruption would provide extremely important and valuable data for your research, providing a boost to your career; however, it would also cause mass loss of life and property damage. Should you precommit not to publish any papers using data from such an eruption?
  3. You’re a worker in a high-income country, and a major issue this election season is a proposed protectionist trade reform that would increase the wages of workers like you in domestic industries, but stifle economic opportunities for those in developing countries. Alternatively, if you don’t buy into the economic assumptions that lead to that scenario being bad, the proposed reform is a trade liberalization that would decrease prices of consumer goods in your country, but lead to exploitation of workers in developing countries. Should you precommit not to take a job in a protected industry/buy foreign-produced goods, or, if you have to do so, to buy an ethics offset in the form of a charitable donation?

In all three of these situations, something bad might happen in the future, but you stand to benefit if it does. You can turn down the benefit, but doing so won’t help the people who were harmed by the bad event in the first place. The question is whether, ethically speaking, you should precommit to turn it down.

It would be useful to have a general principle which serves as an answer to this question. At this point it’d be nice to dramatically reveal one, but I kind of already did that at the top of the page. So instead I’ll discuss the two most obvious alternative answers and why I don’t find them satisfactory.

The first alternative answer is an unconditional yes; if you can anticipate a future situation where you would have the chance to benefit from something bad, you should precommit not to take that opportunity. This answer is bad because it leaves free utility on the ground. In many cases, it will lead to your pointlessly punishing yourself for something that you had no control over, to no one else’s benefit. Obviously this outcome is to be avoided if possible.

The second alternative answer is an unconditional no; it’s always okay to take whatever opportunities come your way as long as you don’t directly cause anyone else to be harmed in the process. This answer is bad because your future action is not the only causal variable in play; other people’s expectation of what you will do in the future may influence their own behavior. If whether or not you stand to benefit from something affects whether that thing will happen—possibly because someone who’s looking out for your interests has some measure of control over it—then it’s ethically obligatory for you to take this into account and, if you determine that the event is bad overall, make sure that you don’t stand to benefit from it.

Even if you personally have little control over the event in question, it’s appropriate to consider not only the precommitment that you’d make as an individual, but the precommitment that you’d make if you were deciding for everyone who is like you in relevant ways—that is, everyone whose position is close enough to yours and who uses a sufficiently similar reasoning process. Otherwise, you end up with the outcome where everyone defects because it seems individually rational for each of them. (At this point I’d normally wave my hands and say “something something timeless decision theory”, but honestly I don’t yet understand the math well enough to know if that’s at all applicable here.)

The answer I propose is a compromise between these two positions. If the potential for people like you to benefit from a bad event is a significant causal component of the possibility of that event happening—that is, if the event would be significantly less likely to happen if that potential were gone—then you should remove that potential for yourself, by precommitting to decline the benefit. Note that it has to be a significant causal component; for instance, if society as a whole has any say in the event happening vs. not happening, then to the extent that society cares positively about you at all (which it probably does, at least a little), there’s at least that much of a causal component there. But if it’s only that tiny amount, and not a situation where the interests of people like you are the primary driver of the possibility, then feel free to pick up that free utility off the ground.
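For the programmatically inclined, here is a rough sketch of that rule in Python. It is purely illustrative; the probabilities and the "significance" cutoff are numbers I made up for the example, not anything derived in this post.

```python
# A toy sketch of the proposed rule: precommit to decline a benefit from a bad
# event iff the potential for people like you to benefit is a *significant*
# causal component of the event happening at all.
# The 10% cutoff and the probabilities below are illustrative assumptions.

def should_precommit_to_decline(
    event_is_net_bad: bool,
    p_event_with_benefit_potential: float,
    p_event_without_benefit_potential: float,
    significance_threshold: float = 0.1,
) -> bool:
    """Return True if you should precommit not to benefit from the event."""
    if not event_is_net_bad:
        return False
    # How much does the prospect of people like you benefiting raise the
    # probability that the event happens in the first place?
    causal_contribution = (
        p_event_with_benefit_potential - p_event_without_benefit_potential
    )
    return causal_contribution >= significance_threshold

# Situation 3 (trade policy): the policy is on the table largely because
# workers/consumers like you stand to gain, so removing that potential matters.
print(should_precommit_to_decline(True, 0.5, 0.05))  # True: precommit
# Situation 2 (volcano): your career has no causal influence on the eruption.
print(should_precommit_to_decline(True, 0.3, 0.3))   # False: keep the free utility
```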

So, with this in mind, how would I resolve the situations at the top of this post?

  1. If you are, in fact, a character in a murder mystery, then definitely decline; not only is it ethically obligatory, but as an added bonus it also makes you less likely to be suspected! In real life, I think you only need to do this if there’s a significant possibility of the relative being murdered for the inheritance. This is a bit of a gray area, since you never really know, but in general I’d say it’s fine.
  2. No need to make a precommitment here; your research and career have absolutely no causal impact whatsoever on whether the volcano erupts.
  3. Here I think you should precommit not to benefit. The reason the bad trade policy is being proposed is presumably because workers/consumers in your country want it, and you yourself fall in that class. You’re similar enough to other members of it that you should make the precommitment, because if they all did the same, then the bad proposal would go away (since nobody would have any political incentive to back it) and the people in developing countries would benefit.

The Dark Art of Causal Self-Modeling

Background: Vaguely inspired by “On Measuring Tradeoffs in Effective Altruism”, by Ozy Frantz on Thing of Things.

Content warning: Scrupulosity.


Causal modeling is a way of trying to predict the consequences of something that you might do or that might happen, based on cause-and-effect relationships. For instance, if I drop a bowling ball while I’m holding it in front of me, gravity will cause it to accelerate downward until it lands on my foot. This, in turn, will cause me to experience pain. Since I don’t want that, I can infer from this causal model that I should not drop the bowling ball.
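If it helps to have something concrete, here is a tiny sketch (mine, nothing formal) of a causal model as a directed graph: each node points to the effects it causes, and "prediction" just means following the arrows downstream.

```python
from collections import deque

# The bowling-ball example as a toy causal graph: each cause maps to its effects.
CAUSES = {
    "drop bowling ball": ["ball lands on foot"],
    "ball lands on foot": ["pain"],
    "pain": [],
}

def predict_consequences(action, model):
    """Breadth-first walk of the causal graph, listing everything the action leads to."""
    seen, queue, consequences = set(), deque([action]), []
    while queue:
        node = queue.popleft()
        for effect in model.get(node, []):
            if effect not in seen:
                seen.add(effect)
                consequences.append(effect)
                queue.append(effect)
    return consequences

print(predict_consequences("drop bowling ball", CAUSES))
# ['ball lands on foot', 'pain'] -> so don't drop the bowling ball
```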

People are predictable in certain ways; consequently, other people’s actions and mental states can be included in a causal model. For example, if I bake cookies for my friend who likes cookies, I predict that this will cause her to feel happy, and that will cause her to express her appreciation. Conversely, if I forget her birthday, I predict that this will cause her to feel interpersonally neglected.

All of this is obvious; it’s what makes social science work (not to mention advertising, competitive games, military strategy, and so forth).

But what happens when you include your own actions as effects in a causal model? Or your own mental states as causes in it?

We can come up with trivial examples: If I’m feeling hungry, I predict that this will cause me to go to the kitchen and make a snack. But this doesn’t really tell me anything useful; if that situation comes up, this analysis plays no part in my decision to make a snack. I just do it because I’m hungry, not because I know that my hunger causes me to do it. (Of course, the reason I do it is because I predict that, if I make a snack and eat it, I will stop being hungry; and that causal model does play an important role in my decision. But in that case, my actions are the cause and my mental state is the effect, whereas for the purposes of this post I’m interested in causal models where the reverse is true.)

Here’s a nontrivial example: If I take a higher-paying job that requires me to move to a city where I have no friends (and don’t expect to easily make new ones) in order to donate the extra income to an effective charity, I predict that this will cause me to feel lonely and demoralized, which will cause me to resent the ethical obligations towards charity that I’ve chosen to adopt. This will make me less positively inclined towards effective altruism and less likely to continue donating.

This wasn’t entirely hypothetical (and I did not in fact take the higher-paying job, opting instead to stay in my preferred city). Furthermore, I see effective altruists frequently use this sort of argument as a rationale for not making sacrifices that are larger than they are willing to make. (Such as taking a job you don’t like, or capping your income, or going vegan, or donating a kidney.)

I believe that we ought to be more careful about this than we currently are. Hence the title of this post.

The thing about these predictions is that they can become self-fulfilling prophecies. At the end of the day, you’re the one who decides your actions. If you give yourself an excuse not to ask “okay, but what if I did the thing anyway?” then you’re more likely to end up deciding not to do the thing. Which may have been the desired outcome all along—I really didn’t want to move—but if you’re not honest with yourself about your reasons for doing what you’re doing, that can screw over your future decision-making process. Not to mention that the thing you’re not doing may, in fact, have been the right thing to do. Maybe you even knew that it was.

(The post linked at the top provides a framework which mitigates this problem a bit in the case of effective altruism, but doesn’t eliminate it. You still have to defend your decision not to increase your total altruism budget in a given category—or overall, if you go with one of the alternative approaches Ozy mentions in passing that involve quantifying and offsetting the value of an action.)

But the other thing about the Dark Arts is that sometimes we need to use them. In the case of causal self-modeling, that’s because sometimes your causal self-model is accurate. If I devoted all my material and mental resources to effective altruism, I probably really would burn out quickly.

The thing about that assessment is that it’s based on the outside view, not on my personal knowledge of my own psyche. This provides a defense against this kind of self-deception.

Similarly, a valuable question to ask is: To what extent is the you who’s making this causal model right now, the same you as the you who’s going to make the relevant decisions? This is how I justify my decision not to go vegan at this time. I find it difficult to find foods that I like, and I predict that if I stopped eating dairy I would eat very poorly and my health would take a turn for the worse. That would be the result of in-the-moment viscerally-driven reactions that I can predict, but not control, from here while making my far-mode decision.

So in the end, we do have to make these kinds of models, and there are ways to protect ourselves from bias while doing so. But we should never forget that it’s a dangerous game.


Further reading: “Dark Arts of Rationality”, by Nate Soares on Less Wrong.

Hofstadterisms

Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid


Following the tradition established by Scott Aaronson of Umeshisms and Malthusianisms, I propose the term “Hofstadterism”.

A Hofstadterism is a principle which claims that you should adjust your thinking, or your analysis of some situation, in a particular direction—and that the principle remains applicable even if you think you’ve already accounted for it.

The concept isn’t particularly closely related to the general philosophy or works of Douglas Hofstadter; I’m just using the term as a generalization of the eponymous law quoted above. In its original context, Hofstadter’s law was a commentary on predictions of artificial intelligence; famously, on more than one occasion in the history of AI, seemingly-promising initial progress led to widespread optimistic projections of future progress that then failed to arrive on schedule. Since then, “Hofstadter’s law” has been used more broadly to refer to the planning fallacy.

Hofstadterisms seem paradoxical. If the correct answer is always to update in the same direction—in the original example, to always make your estimated completion time later, no matter how late it already is—then don’t you end up predicting that it will take an infinite amount of time?

If you apply the principle literally, yes. (Hofstadterisms do not make good machine-learning rules.) However, humans don’t actually do this; even if you really take a Hofstadterism seriously, you’re not actually in real life going to apply it infinitely many times. (Hence the saying, which I unfortunately can’t find a source for on Google: “Any infinite regression is at most three levels deep.” I suppose you could think of this as the anti-Hofstadterism.) In practice, you’re eventually going to arrive at what seems like the position which best balances all the relevant factors that you know. Hofstadterisms are useful when we know, from the outside view, of a tendency for this seemingly-balanced analysis to actually end up being skewed in a particular direction. They offer the opportunity to correct for those biases which remain even after everything appears to be corrected for.
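To make the divergence concrete, here is a toy sketch (my own; the 1.5× correction factor is made up) of what happens if you take "adjust upward, even after you've already adjusted" completely literally:

```python
def apply_hofstadters_law(estimate_days, times, factor=1.5):
    """Apply the same upward correction `times` times."""
    for _ in range(times):
        estimate_days *= factor
    return estimate_days

naive_estimate = 10.0
print(apply_hofstadters_law(naive_estimate, 1))    # 15.0   -- one correction
print(apply_hofstadters_law(naive_estimate, 3))    # 33.75  -- "at most three levels deep"
print(apply_hofstadters_law(naive_estimate, 100))  # ~4e18  -- literal application diverges
```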

One of the most important kinds of Hofstadterism is the ethical injunction—at least, according to the way such injunctions are used by consequentialists (as opposed to actual deontologists). In theory, consequentialists ought not to have any absolute rules of ethics, other than the fundamental rule of seeking the best possible consequences—which provides no definite constraints over our actions. In practice, we find that certain rules are not merely useful, but essential—to the point where, if you think that it’s right for you to abandon them, you’re wrong. Hence the paradoxical-sounding principle: “For the good of the tribe, do not cheat to seize power even for the good of the tribe.”

One important pitfall to beware of is trying to apply a Hofstadterism to a situation that’s actually a memetic prevalence debate.

To follow the original example from that post: Suppose you believe that our culture demonizes selfishness, and this distresses you because you’re afraid that it makes people psychologically unhealthy. You try to fight this by promoting selfishness as a virtue, giving people in your social group copies of Atlas Shrugged, whatever. Suppose you spread this idea, and it starts to take hold in your social environment, to the point where you hear others espousing it—and yet you still see so many people feeling bad about having needs and wanting to do things for themselves. You might be tempted to think that a Hofstadterism applies here: “Everyone ought to be more selfish, even if they think that they’ve already accounted for the idea that they ought to be more selfish.”

It’s hypothetically possible that you’ve uncovered a deep and insidious bias that pervades human nature. But that’s not the most likely possibility. In this scenario, you should instead consider the possibility that your social environment has become an echo chamber, and that people outside it simply never got the message in the first place.

Hofstadterisms are a powerful tool. Use them wisely.


P. S. Also in the tradition of Scott Aaronson: Anyone have any other ideas for particular Hofstadterisms?

Rationality as a Social Process

There’s an old Jewish folk tale (or possibly Chinese, depending on who you ask) that Wikipedia calls the allegory of the long spoons. The version that I learned growing up in Blue Tribe church was called “The Difference Between Heaven and Hell”, and it went like this:

Long ago there lived an old woman who had a wish. She wished more than anything to see for herself the difference between heaven and hell. The monks in the temple agreed to grant her request. They put a blindfold around her eyes, and said, “First you shall see hell.”

When the blindfold was removed, the old woman was standing at the entrance to a great dining hall. The hall was full of round tables, each piled high with the most delicious foods — meats, vegetables, fruits, breads, and desserts of all kinds! The smells that reached her nose were wonderful.

The old woman noticed that, in hell, there were people seated around those round tables. She saw that their bodies were thin, and their faces were gaunt, and creased with frustration. Each person held a spoon. The spoons must have been three feet long! They were so long that the people in hell could reach the food on those platters, but they could not get the food back to their mouths. As the old woman watched, she heard their hungry desperate cries. “I’ve seen enough,” she cried. “Please let me see heaven.”

And so again the blindfold was put around her eyes, and the old woman heard, “Now you shall see heaven.” When the blindfold was removed, the old woman was confused. For there she stood again, at the entrance to a great dining hall, filled with round tables piled high with the same lavish feast. And again, she saw that there were people sitting just out of arm’s reach of the food with those three-foot long spoons.

But as the old woman looked closer, she noticed that the people in heaven were plump and had rosy, happy faces. As she watched, a joyous sound of laughter filled the air.

And soon the old woman was laughing too, for now she understood the difference between heaven and hell for herself. The people in heaven were using those long spoons to feed each other.

If you found this a little glurgey and of questionable value in delivering an actual moral lesson, well, that makes at least two of us. But even if Wikipedia calls it an allegory, a metaphor can still be applicable in more than one domain.

My suggestion is that this story can be a metaphor for dealing with cognitive bias.

The idea is that there are some things that we can do for other people more easily than we can do them for ourselves. This isn’t garden-variety comparative advantage; this is the idea that sometimes we have a comparative disadvantage in dealing with something that affects us, specifically because it affects us instead of somebody else. This isn’t the case in most domains, but I think it may be the case in the domain of rationality. We all know that identifying skewed thinking from the inside is really hard, since many biases insidiously warp our thinking in such a way as to prevent us from seeing them.

One thing I’ve noticed is that occasionally, when I’m developing or expressing an opinion on something—particularly questions of political significance, in the “tribal politics” sense, but sometimes in other domains—I have this vague sense that my thought process might not be entirely trustworthy. It feels as though there’s something going on in my brain that shapes my beliefs around tribal affiliation, or some other bias, rather than correct reasoning. Unfortunately, this is where my self-awareness seems to end; pushing harder on this feeling doesn’t reveal any clues as to where the fault might lie.

According to the message that I most often see promoted in the rationality community, you must cultivate the extremely difficult skill of pushing through that feeling, seeing the distortions in your thought process for what they are, and fixing them—and that, while of course you can cultivate this skill alongside others, in the end, you are on your own. In this view, the ultimate goal is complete cognitive self-reliance.

I want to suggest a different, complementary approach: treating rationality as a social process.

If cognitive bias is causing me to say something obviously stupid about a particular topic, then other people are likely to notice what’s going on better than I am; indeed, if this weren’t the case, then the rationality community wouldn’t have been able to recognize recurring failure modes in domains like politics. So if that vague feeling comes into my brain, and I suspect that this is in fact what’s going on, might others be able to help me see through it? “Hey, I feel like idea X must be true, because argument Y, but I also feel like I’ve got a blind spot here and am failing to account for something obvious—does any of this sound wrong to you?”

It is better to find one fault in yourself than a thousand in someone else—but if finding a fault in someone else is more than a thousand times easier, then that implies the highest-expected-value thing to do is look for faults in each other.

(Especially since most of us are never going to be completely cognitively self-reliant no matter how hard we try, as even the most ardent rationality evangelists will acknowledge. And since, taking the outside view, most people who think that they are completely cognitively self-reliant are wrong.)

Of course, there are some obvious failure modes that rationality-as-a-social-process can fall into, and it won’t work in just any social context. In an environment where treating arguments as soldiers is already completely normalized, asking people who disagree with you to tell you how your opinions are biased isn’t going to bring you any closer to the truth. Sometimes you have to defect in the prisoner’s dilemma, so to speak. This implies that, if we care about finding truth, we should work to create spaces where this kind of constructive criticism is normalized, and participants in the discourse can have an expectation—backed up by social norms which are enforced in the usual ways—that a request for such criticism won’t be taken as an opportunity for an opposing “army” to gain ground without similarly subjecting itself to potential criticism. And the other big issue is trust; this whole process does no good unless I can take the critic’s assessment of my rationality seriously, which means I have to trust their rationality, as well as their good intentions.

Overall, despite the very real pitfalls, I think that the role of feedback from others in rationality is underappreciated, and that we who seek to overcome our biases would do well to rely more heavily on it. Of course, I could be totally wrong about this—but that’s what the comments section is for.

Escape from the Bottomless Pit

This is the sixth of six posts I’m writing for Effective Altruism Week.


The preceding five posts in this series were about issues in effective altruism where I have a pretty good idea where I stand. In contrast, I’m going to end the series by talking about a problem that I haven’t figured out how to solve.

About seven million children are going to die this year of preventable poverty-related causes. According to GiveWell’s cost-effectiveness estimates, it is possible to save the life of one of those children by donating approximately $3,340 to the Against Malaria Foundation.

So here are two possible options that hypothetically might be available to me:

  • Do nothing. If I take this option, about seven million kids will die this year.
  • Donate $3,340 to AMF. If I take this option, about seven million kids will die this year.

From this perspective, the only difference between those two scenarios is that I’m $3,340 poorer in the second one. This does not make the second one look very appealing.

Of course, in reality there’s another major difference: on average, a child’s life is saved in the second scenario. In absolute terms, the difference between a world where that child lives and a world where they die is huge. It’s an entire human life, with all its joys and accomplishments over the course of decades. It is easily worth far, far more than $3,340.

But I’m never going to meet that child, or know anything about them. And when I look at the effects on a larger scale, the fraction of the overall problem that I’ve solved is so minuscule as to not be noticeable. Literally nothing I can do is ever going to make a significant dent in global poverty.

If everybody made a significant personal sacrifice, then we could easily solve global poverty, and far more than that. Everybody would get to feel the satisfaction of knowing that they’d saved not one life, but millions upon millions. I think you could frame this in such a way that most people would see the sacrifice as worth it. But we don’t have any effective ability to coordinate a solution like that, so I can’t rely on everybody else going along with what I do; I have to decide alone.

The fundamental problem here is that the number of people affected by global poverty is large, and human brains are really, really bad at dealing with large numbers. We can see that visibly saving one person’s life is worth a personal sacrifice. We can see that making a significant dent in the overall problem of global poverty is worth a personal sacrifice. But when the life you save is invisible, and the dent insignificant? That’s harder for our brains to see as actually worth it.

Even though it totally is.

There are a couple of things that I can do for myself to help mitigate this problem. One of them is to remind myself that I’m not relying on the warm glow. If donating feels like throwing money into a bottomless pit, but I know that it saves lives on average and is the right thing to do, then that’s enough for me to get myself to do it. And that’s what actually counts.

Another thing I do is follow the work of organizations that do research, and are transparent enough about it that I feel like I have a good picture of what progress they’re making. GiveWell, with their regular blog posts and research updates, is the paragon of this (with GiveDirectly earning an honorable mention). Reading their material gives me a sense that we are, slowly but surely, making concrete progress towards actually solving global poverty and the other giant problems in this world.

But still, it’s a problem. It’s all in my head, but it’s still real. And I think other effective altruists struggle with it too. If anybody has any effective techniques or ways of looking at the problem that help make dealing with it easier, I’m all ears.

In Which I Advertise for GiveDirectly

This is the fifth of six posts I’m writing for Effective Altruism Week.


There are a lot of options when it comes to charity, even if you’re only considering the ones that are plausible candidates for the most good you can do.

But if you’re looking for a charity to donate to—maybe for tax reasons, maybe because your group won a dollar auction like the ones held at my university and you have to decide where to donate the proceeds, maybe for some other reason—and you want a single recommendation whose appeal is easy to explain and that isn’t going to change in the foreseeable future, then the winner is GiveDirectly.

Of course, that’s quite a claim, and to some extent it’s just my arbitrary and biased opinion. But in this post, I hope to convince you that it’s the right answer.

GiveDirectly’s model is unconditional cash transfers. In other words, they find the world’s poorest people and give them money.

That’s it. That’s the whole process.

Well, okay, I can elaborate a little. GiveDirectly currently operates in two countries, Kenya and Uganda. In those countries, they go visit rural villages and then find the poorest people living in those villages. (How do they know who’s poorest? The poorest people live in houses with thatched roofs, because they can’t afford metal ones.) They sign those people up for accounts with M-PESA, a Kenyan mobile-phone-based payment system (yes, Kenya has its own electronic payment system and everybody there uses it; they use different systems in Uganda). They then transfer the money to them (about $1,000 per person over the course of a year) and give them instructions on how to withdraw it from a money agent. The whole time, GiveDirectly’s field agents are going around checking that everything’s going smoothly and collecting data on recipients’ experiences and how they’re using the money. The overall overhead ratio is less than 10%, meaning that over 90 cents of every dollar donated to GiveDirectly ends up in the pocket of a recipient.

These cash transfers are unconditional, meaning that recipients don’t have to do anything other than be poor in order to be eligible for them. Some programs in other countries (often government-sponsored ones) have experimented with conditional cash transfers, meaning that recipients have to do something like make sure their kids are attending school or get them vaccinated in order to get the money. These can be a useful incentive for behavior modification, but studies suggest that unconditional cash transfers work best for increasing recipients’ overall welfare as much as possible.

Why are cash transfers such a great idea? Because different people need different things. Many recipients use their cash transfers to buy metal roofs, which don’t leak all the time like thatched ones do, and also are cheaper to own in the long run because you don’t have to be constantly repairing them. Many use them to address immediate needs such as food or medical expenses. Some use them for startup costs in some business venture, like buying a motorcycle and using it to sell transportation services to neighbors, or buying a power saw and using it to cut trees and make charcoal. There are a lot of other potential uses besides; with cash, recipients can determine for themselves what they need most and then spend it on that, obviating the need for outsiders to try to do so (and probably get it wrong). Evidence suggests that cash transfers usually aren’t spent on things like alcohol or tobacco.

Based on this logic, you could reasonably conclude that cash transfers don’t need to meet the same burden of scientific proof as other interventions. But GiveDirectly apparently didn’t get that memo, because the quality of their science is second to none. They extensively survey recipients (sometimes to the point of measuring things like cortisol levels) in order to ensure that their cash transfers are having the best possible effect. They also conduct randomized controlled trials—the gold standard of science—in order to learn more about the effects of cash transfers. In fact, their last large experiment was preregistered, meaning that the results would be published whether they were good or bad—a standard which is rarely met in any field, let alone one where scientific rigor is as often neglected as in charity.

Not only is GiveDirectly one of the most scientifically-minded global poverty charities out there, but they’re also one of the most transparent. I don’t just mean publishing a lot of metrics (although they do publish a lot of metrics, and that’s very important and by itself brings them far above what most of their peers are doing). I mean things like this bit from one of their blog posts:

We believe that stories and qualitative information can be informative and powerful, but their value depends on the manner in which they’re shared. When individual stories are shared without sufficient context into how they were chosen or how they compare to the average, they can create a misleading picture of how the organization uses donors’ dollars and the impact they ultimately have. This would be unfortunate for individual donors—and could be dangerous for policymakers and institutional funders, who fund and influence programs at massive scale. If we shared our favorite stories without any context, you might think, for example, that the woman who was able to afford eye surgery and see for the first time in two decades is representative of the average recipient.

Their solution? Pick recipients literally at random and publish stories about them.

If you’re not familiar with how most charities communicate with the public, you might not realize just how unusual this is. It is really, really unusual. Fanatical, even. Nobody does this. It goes against all the conventional wisdom about how to present yourself and what you’re doing in the charitable sector.

Considering how much money the conventional wisdom ends up sending to programs that don’t actually work, I’d say it’s long past time to try something different. And an increasing number of donors seem to agree, since GiveDirectly raised over $17 million last year.

Finally, there’s one other factor that I think makes GiveDirectly the best choice: the possibility for them to bring about systemic change.

GiveDirectly has successfully transformed cash transfers from a crazy idea into an intervention taken seriously by experts, and produced high-quality research to back it up. If they continue growing, and unconditional cash transfers continue to gain acceptance, then there’s hope that cash transfers come to be seen as the baseline benchmark which other global poverty interventions are compared against. If that comes to pass, then when, say, the U.S. government is considering funding some foreign-aid intervention, they might ask: Why do you think that this is a better idea than simply taking however much money this will cost and giving it directly to poor people?

There are a few programs out there that might be able to meet that bar. But most of them can’t. If that question were asked as a matter of course, we’d end up spending charity and foreign-aid dollars a lot more efficiently—and that means saving and improving a lot more lives.

A Brief Overview of the Effective Charity Landscape, Part 2: Systemic Change

This is the fourth of six posts I’m writing for Effective Altruism Week.


There are basically two ways that a charity can do good in the world. The first way is to run some program—preferably an evidence-backed and cost-effective one—that directly saves or improves individual people’s lives. As I wrote yesterday, the best opportunities within this approach are those that serve the global poor. The advantage of this approach is that it offers concrete, tangible results that you can be confident in. For this reason, most donations by effective altruists go to these kinds of programs. But there’s a limit to how much good you can do this way—and there are other downsides as well.

The second way is to try to produce systemic change that makes things better on a large scale. This doesn’t have to be political, though it can be. This is a high-risk-high-reward approach; most attempts to do this don’t pan out, but some problems are so important that taking a risk on them is worth it. Generally speaking, this approach takes the form of either research (to understand some phenomenon that causes harm, and what might be done to solve it) or advocacy (to get solutions actually implemented in the real world).

A quick note on politics. Many important problems can be addressed only through political change, and many effective altruists are interested in finding ways to bring this change about. That said, most effective altruists don’t believe that supporting one side or the other in mainstream electoral politics is a very effective way of doing good. There are so many players exerting so much influence there that anything you do as an individual is likely to be counteracted by someone on the other side. Instead, we’re interested in finding overlooked opportunities for reform that don’t already have powerful entities fighting either for or against them. Lots of people have strong opinions on, say, whether income tax rates should be higher or lower, and any individual voice there is likely to be drowned out—but how many people are already yelling about, say, land use reform? Not very many—which is why a concerted effort there just might be an effective way to do a lot of good.

Most causes that effective altruists are concerned about can be classified in one of four major categories.

Global poverty

I wrote yesterday about ways to help poor individuals. But it might also be possible to help the global poor on a larger scale through policy change in America and other rich countries. The Open Philanthropy Project is currently researching ways that this might be possible. A few of the more intriguing possibilities include:

  • Labor mobility. How can you massively increase the income of someone living in a poor country? By letting them come to a rich one. Advocates suggest that enabling migration could create trillions of dollars per year in economic value, possibly as much as doubling world GDP—and most of this value would go to the world’s poorest people. Of course, there are risks, and more research is needed into the probable effects of policy changes in this area—not to mention how to overcome political resistance to such changes.
  • Reforming foreign aid. The United States currently gives $30 billion per year in aid to other countries—but this money isn’t spent as well as it might be. It could be valuable to research better ways to provide foreign aid, or to politically advocate for more of it (as is done in other rich countries; the U.S. gives far less foreign aid per dollar of gross national income than the United Kingdom does). Policies that affect foreign aid indirectly, such as agricultural subsidies, could also prove important.
  • Addressing climate change. It’s generally believed that the worst effects of climate change will be in poor countries in tropical regions, as rising temperatures may harm agriculture and induce extreme weather events that cause mass loss of life and economic damage. This is an area where there’s already so much advocacy that additional efforts are likely to be drowned out, but there may be underexplored facets of it that provide good opportunities, such as research into geoengineering.

Existential risk

There are a number of threats that humanity may face in the future that have the potential to kill literally everyone if things go wrong. Arguably, the single most important thing we can do is to prevent this from happening.

Most of the truly dangerous risks are man-made, the result of recent technology. This isn’t surprising; nature didn’t wipe us out over the past millions of years, so it’s unlikely to suddenly change and do so tomorrow. Technology, on the other hand, changes fast and at an accelerating pace, and if we’re not careful there may not be time to deal with its negative effects before they become catastrophic.

Thus far, work on existential risk is still in the realm of relatively speculative research. The major organizations devoted to it are the Future of Humanity Institute at Oxford University, the Centre for the Study of Existential Risk at Cambridge University, and the Future of Life Institute at MIT. The Open Philanthropy Project is also studying existential risk. Specific risks currently being studied include nuclear war, biotechnology (which could be used to engineer an artificial pandemic), artificial general intelligence, molecular nanotechnology, and extreme climate change.

Animal welfare

Many effective altruists are concerned not only about humans, but about all sentient life. In particular, if your goal is to reduce suffering as much as possible, then it’s a good idea to include animal suffering, since animals substantially outnumber humans and factory farming causes a great deal of suffering.

I’m not writing extensively about this cause because I don’t have much specific knowledge of it. Animal Charity Evaluators recommends The Humane League, Mercy For Animals, and Animal Equality as effective charities working in this area. The Open Philanthropy Project is also researching prospects for addressing harms caused by factory farming.

Meta-charity

Charity recommendations don’t grow on trees. All of the research that I’ve written about in this and the last post was done by charities focusing specifically on finding out how to do the most good.

GiveWell is by far the organization that has made the largest impact in identifying the best opportunities to do good. Each year, they issue recommendations for the best charities in the first category; most of the organizations mentioned in yesterday’s post come with their seal of approval. They also, in collaboration with Good Ventures, run the Open Philanthropy Project, which researches the kind of high-risk-high-reward opportunities that this post is about. Without GiveWell, almost none of the work I’ve written about here would exist and effective altruism could not have become the thriving high-impact movement that it is today. They deserve a round of applause.

Other organizations are devoted specifically to effective altruism outreach. Two of the more notable ones are Giving What We Can, which encourages people to pledge to devote 10% of their lifetime earnings to effective charities (I’m a member), and The Life You Can Save, which is based on the work of noted ethical philosopher Peter Singer and engages in more general outreach.