Probability: A Moment of Darkness

This was the Moment of Darkness speech that I gave at Boston Secular Solstice 2016.

Epistemic status: I spent several weeks thinking about this, but wrote it in a couple hours before the ceremony, because writing is aversive and I’m an inveterate procrastinator. Although I believe the claim about power laws to be true in some broad sense, this is based primarily on half-remembered “conventional wisdom” that I suspect I absorbed by cultural osmosis from the works of Nassim Nicholas Taleb. It is nowhere near as well-justified in the speech itself as it ought to be; the two statistics cited were the only ones I could find in the time available.

Our community is ambitious. We aim to do big things, to solve adult problems.

Four years ago, in New York City, in a ceremony much like this one, Ray Arnold, the creator of Secular Solstice, spoke about this. He said:

We have people in this room, right now, who are working on fixing big problems in the medical industry. We have people in this room who are trying to understand and help fix the criminal justice system. We have people in this room who are dedicating their lives to eradicating global poverty. We have people in this room who are literally working to set in motion plans to optimize everything ever. We have people in this room who are working to make sure that the human race doesn’t destroy itself before we have a chance to become the people we really want to be.

And while they aren’t in this room, there are people we know who would be here if they could, who are doing their part to try and solve this whole death problem once and for all.

And I don’t know whether and how well any of us are going to succeed at any of these things, but…

God damn, people. You people are amazing, and even if only one of you made a dent in some of the problems you’re working on, that… that would just be incredible.

Indeed it would. It would be incredible. And I believe that the same is true of the people in this room today, four years later. We recognize that these things matter, that they matter more than most of society recognizes, more even than any of us can really visualize. The utilitarian significance of even the least of those problems is easily in the millions of lives.

But we are also a community of truth seekers. It’s not enough for the stories we tell ourselves about what we’re going to accomplish to be motivating; they have to be accurate. So, what should we expect when we set out to change the world?

Large-scale technological and societal change tends to obey power laws; a small minority of individual cases winds up having the vast majority of the impact. Y Combinator, the most prestigious startup accelerator in the world, has funded about a thousand startups; about three-quarters of their returns have come from just two of these. Making a big profit is a crude proxy for impact through a startup, but if we run with it, that gives us a probability of 0.3%. And that's one of the higher-risk, higher-reward life paths available to someone today.
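To make the power-law claim concrete, here's a small simulation. The Pareto shape parameter is an illustrative assumption, not an estimate of Y Combinator's actual return distribution; the point is only that with a sufficiently heavy tail, a handful of outcomes out of a thousand capture most of the total.

```python
import random

random.seed(0)

# Illustrative sketch: draw 1,000 "startup outcomes" from a heavy-tailed
# Pareto distribution and measure what fraction of total returns the top
# two capture. alpha < 1 gives an extremely heavy tail (infinite mean),
# chosen purely for illustration.
alpha = 0.7
outcomes = sorted((random.paretovariate(alpha) for _ in range(1000)), reverse=True)
top_two_share = sum(outcomes[:2]) / sum(outcomes)
print(f"Top 2 of 1,000 capture {top_two_share:.0%} of total returns")
```

Rerunning with different seeds changes the exact share, but the qualitative picture is stable: under a heavy-tailed distribution, the typical participant's contribution to the total is negligible.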

I didn’t have time to find more numbers, but as far as I know, they basically all look roughly like this. According to a study by the Financial Times, 40% of millennials think they’ll have a global impact. In other words, almost 40% of millennials are extremely miscalibrated.

So we see that truly changing the world is a rare event. But that, too, is part of the standard story; because it’s rare, you’ll have to work very hard and be very resourceful to pull it off. But you can still do it if you try, right?

This is the kind of thing that our brains tell us because of availability bias. As Bruce Schneier put it: “We tend to exaggerate [the probability of] spectacular, strange and rare events, and downplay ordinary, familiar and common ones.” He was talking about things like terrorist attacks, but it’s just as true of things like making a major scientific breakthrough. “Stories engage us at a much more visceral level [than data], especially stories that are vivid, exciting or personally involving.”

In Terry Pratchett’s Discworld, “million-to-one chances crop up nine times out of ten”. It’s not any different from other fiction in this regard; it’s just more honest. Nonfiction isn’t much better because of selection bias; you hear about the guy who won the Nobel, and not about the hundreds of thousands of others who set out to win it and failed.

Eliezer Yudkowsky summed up the problem in “A Technical Explanation of Technical Explanations”:

Remember Spock from Star Trek? Spock often says something along the lines of, “Captain, if you steer the Enterprise directly into a black hole, our probability of survival is only 2.837%.” Yet nine times out of ten the Enterprise is not destroyed. The people who write this stuff have no idea what scientists mean by “probability”. They suppose that a probability of 99.9% is something like feeling really sure. They suppose that Spock’s statement expresses the challenge of successfully steering the Enterprise through a black hole, like a video game rated five stars for difficulty. What we mean by “probability” is that if you utter the words “two percent probability” on fifty independent occasions, it better not happen more than once.

(At least, not in the limit.)
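That frequency interpretation is easy to sanity-check with a quick simulation: across many batches of fifty independent two-percent-probability events, occurrences should average out to about one per batch, exactly as the quote demands in the limit.

```python
import random

random.seed(1)

# The frequentist reading of "two percent probability": over many batches
# of 50 independent 2%-probability events, the long-run average number of
# occurrences per batch should approach 50 * 0.02 = 1.
p, n_per_batch, n_batches = 0.02, 50, 20_000
hits = sum(random.random() < p for _ in range(n_per_batch * n_batches))
print(f"Average occurrences per batch of 50: {hits / n_batches:.3f}")  # ~1.0
```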

There are fewer than fifty of us in this room, and the probability of an individual achieving the kinds of ambitions we’re talking about here is probably lower than two percent.

So that’s the problem: if you want to change the world, the outside view says your effort will probably be all for nothing, and any approach that avoids that risk is unlikely to have a large individual impact.

This problem is so omnipresent in any attempt to do something big that it’s common to just ignore it as background noise. We shouldn’t. We seek the truth because avoiding it inevitably leads to failure when dealing with hard problems. If we’re serious about solving them, we have to face this harsh reality head-on.

So what do we do about it?

One option is to give up, and stick to easy things that are likely to work. You can optimize for your own happiness pretty well that way. But this is no solution at all; we have to do the hard things because, as the Comet King said, “Somebody has to and no one else will.” At least, not on the margin.

Another option is unjustified optimism. This seems to be unfortunately common in our community. Some higher-level rationalists have implemented an advanced form of this where they make their monkey brains believe one thing even as they know the truth to be something else. This might work, if you can pull it off.

A third option is to forcibly suppress your monkey brain’s risk-aversion and submit yourself in obedience to the power laws: spend your life doing the risky thing, knowing that it will probably all be for naught, because it’s the right thing to do. If you’re working on more incremental things like malaria nets, this can work out okay. You can be like the man who throws the starfish back into the sea every day; others may say it doesn’t matter because more will always wash up, but it mattered to that one. If you’re aiming for something more all-or-nothing, like getting AGI right, then this may be psychologically harder.

But there’s one other option, and I think this is the one I like best: Embrace the community.

This works for two reasons. The first is that, when you’re in a community of people with similar goals, you don’t have to do everything yourself. You can clerk the office that handles the funding, or host the convention that summons the family. You can be one of the people who make the future turn out okay, even if you’re not the one-in-a-million who makes the scientific breakthrough that fixes everything.

The second reason is that our community might actually be awesome enough to beat the outside-view odds.

Four years ago, when Ray spoke those words, it didn’t necessarily look that way. That was around the time I found the community online, though I did not meet most of you until later. To be sure, some interesting things had already happened by then, and maybe the future will look back and see some of them as the beginning of the Great Change (or maybe not). But in those past four years, I’ve been lucky enough to watch us grow up, and start accumulating a track record of changing the world in real ways.

We brought AI risk from a topic argued about by nerds on internet forums to an issue taken seriously by the world’s foremost experts in the relevant fields, by billionaire influencers like Bill Gates and Elon Musk, and, as of last October, by the sitting President of the United States.

We solved the problem of figuring out which charities actually work for rescuing people from global poverty and moved hundreds of millions of dollars to them, and that number’s only going to grow.

With help from allied coalitions, we got giant corporations to dump the worst offenders of factory farming, banned confinement cages right here in Massachusetts, and brought funding and attention to an industry that has invented the most convincing meat substitute that has ever existed.

Not everything we’ve attempted has yet borne such undeniable fruit, and we’ve had our share of failures. But I think we’ve earned the right to call ourselves a community that’s capable of getting results, and I think there’ll be more soon, including in areas we haven’t even thought of yet.

The law of the universe is that you can’t beat the odds. But conditional on the right things? Yeah, I think you can.

Should We Partition Effective Altruist Spaces?

On Tumblr, Bartlebyshop brings up concerns that she’s had with effective altruist writing that have kept her away from it. I don’t think this is an isolated case; I think it indicates a problem that this community has been having.

Supporting and discussing EA encompasses a lot of different things. Some of those things are controversial even within the movement, let alone among society at large. Occasionally, one of these controversies metastasizes and takes on a visibility far out of proportion to its actual importance among most EAs. Within the last couple of weeks, we’ve had this happen with meat at EA Global, with Nick Bostrom’s astronomical waste argument and the role of AI risk within the movement, with the idea that EAs don’t care about art, and I think I’m missing a few things.

Obviously some of this is the unavoidable result of universal human political dynamics, but I do think that, to a certain extent, this is something that we ought to be trying to fix. EAs are disproportionately likely to be the kinds of people who love to argue about things, and it’s important to do so in order to find the most effective things to do, but it can also be tiring when EA spaces have been temporarily taken over by the latest iteration of some perennial argument that you’re not interested in. It may be worth asking ourselves if this is something that could be ameliorated with better social technology.

Although some online EA spaces are devoted to a particular cause or focus area, most of them are fairly general-purpose in terms of what kinds of discussions happen there. With the benefit of hindsight, I think it might have been a good idea to instead set up different spaces for different types of discussions.

In particular, four kinds of discussions come to mind, each of which might do best in its own space:

  • Philosophical arguing. This would be where people could talk about things like consequentialism/metaethics, the drowning child argument, the moral value of the far future and of animals, the relative importance of EA’s major focus areas, etc. Right now, this is the area that I think most needs to be contained; a lot of people are allergic to these kinds of arguments, both because they often lead to extreme conclusions and because they’ve been going on forever and probably aren’t going to be solved to everyone’s satisfaction anytime soon. At the same time, we want these discussions to be part of EA; they are a major reason why many EAs are EAs, and a major guiding force in many EAs’ donation decisions and life decisions in general.
  • Comparative analysis of causes and charities. I confess that this is the area that I personally am most interested in, and I sometimes think that the other areas have driven this one largely out of sight, except at a few blogs like GiveWell’s. (Of course, this is probably because many people find this kind of analysis boring, and that’s fine; we want to have something for everyone.) In spaces devoted to this, we’d have discussions of research by organizations like GiveWell and OpenPhil and ACE, and of what interventions and causes are most promising. (We’d want this to focus on analysis that doesn’t depend on, or that explicitly conditions on, extremely deep value judgments of the kind argued over in the “philosophical arguing” section.)
  • Mutual support. This would be a place for people who’ve allowed EA to shape their lives—by donating a significant percentage of their income, or going vegetarian, or choosing an effectiveness-oriented career path—to talk about their shared life experiences as EAs, and to request and give advice. I think that Giving What We Can has a comparative advantage here, and also that a lot of important discussions in this sphere are happening on blogs like The Unit of Caring on Tumblr.
  • Community organizing. Here we’ve got things like .impact, discussions among meetup organizers, and other sorts of meta-concerns. To a large extent, these things already happen in their own special-purpose spaces, but there might still be more of them in general-purpose spaces than is ideal.

Of course, since the established EA spaces already exist and we can’t change the past, there isn’t room for a clean partition. However, I do wonder if there’s anything that individuals within EA communities can do to move things in this general direction, and whether doing so is likely to be a good idea.


The Dark Art of Causal Self-Modeling

Background: Vaguely inspired by “On Measuring Tradeoffs in Effective Altruism”, by Ozy Frantz on Thing of Things.

Content warning: Scrupulosity.

Causal modeling is a way of trying to predict the consequences of something that you might do or that might happen, based on cause-and-effect relationships. For instance, if I drop a bowling ball while I’m holding it in front of me, gravity will cause it to accelerate downward until it lands on my foot. This, in turn, will cause me to experience pain. Since I don’t want that, I can infer from this causal model that I should not drop the bowling ball.

People are predictable in certain ways; consequently, other people’s actions and mental states can be included in a causal model. For example, if I bake cookies for my friend who likes cookies, I predict that this will cause her to feel happy, and that will cause her to express her appreciation. Conversely, if I forget her birthday, I predict that this will cause her to feel interpersonally neglected.

All of this is obvious; it’s what makes social science work (not to mention advertising, competitive games, military strategy, and so forth).
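For concreteness, a causal model of this everyday sort can be sketched as a directed graph of cause-and-effect links and a bit of transitive lookup. The events and links below are just the toy examples from the preceding paragraphs; this is an illustrative sketch, not a real inference engine.

```python
# A toy causal model: each event maps to the effects it causes.
causal_links = {
    "drop bowling ball": ["ball lands on foot"],
    "ball lands on foot": ["pain"],
    "bake cookies": ["friend feels happy"],
    "friend feels happy": ["friend expresses appreciation"],
}

def predicted_consequences(action):
    """Follow cause-and-effect links transitively from an action."""
    frontier, consequences = [action], []
    while frontier:
        for effect in causal_links.get(frontier.pop(), []):
            if effect not in consequences:
                consequences.append(effect)
                frontier.append(effect)
    return consequences

print(predicted_consequences("drop bowling ball"))
# ['ball lands on foot', 'pain']
```

Since I don’t want the "pain" node, I can infer from the model that I shouldn’t take the "drop bowling ball" action, which is all that decision-making from a causal model amounts to here.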

But what happens when you include your own actions as effects in a causal model? Or your own mental states as causes in it?

We can come up with trivial examples: If I’m feeling hungry, I predict that this will cause me to go to the kitchen and make a snack. But this doesn’t really tell me anything useful; if that situation comes up, this analysis plays no part in my decision to make a snack. I just do it because I’m hungry, not because I know that my hunger causes me to do it. (Of course, the reason I do it is because I predict that, if I make a snack and eat it, I will stop being hungry; and that causal model does play an important role in my decision. But in that case, my actions are the cause and my mental state is the effect, whereas for the purposes of this post I’m interested in causal models where the reverse is true.)

Here’s a nontrivial example: If I take a higher-paying job that requires me to move to a city where I have no friends (and don’t expect to easily make new ones) in order to donate the extra income to an effective charity, I predict that this will cause me to feel lonely and demoralized, which will cause me to resent the ethical obligations towards charity that I’ve chosen to adopt. This will make me less positively inclined towards effective altruism and less likely to continue donating.

This wasn’t entirely hypothetical (and I did not in fact take the higher-paying job, opting instead to stay in my preferred city). Furthermore, I see effective altruists frequently use this sort of argument as a rationale for not making sacrifices that are larger than they are willing to make. (Such as taking a job you don’t like, or capping your income, or going vegan, or donating a kidney.)

I believe that we ought to be more careful about this than we currently are. Hence the title of this post.

The thing about these predictions is that they can become self-fulfilling prophecies. At the end of the day, you’re the one who decides your actions. If you give yourself an excuse not to ask “okay, but what if I did the thing anyway?” then you’re more likely to end up deciding not to do the thing. Which may have been the desired outcome all along—I really didn’t want to move—but if you’re not honest with yourself about your reasons for doing what you’re doing, that can screw over your future decision-making process. Not to mention that the thing you’re not doing, may, in fact, have been the right thing to do. Maybe you even knew that it was.

(The post linked at the top provides a framework which mitigates this problem a bit in the case of effective altruism, but doesn’t eliminate it. You still have to defend your decision not to increase your total altruism budget in a given category—or overall, if you go with one of the alternative approaches Ozy mentions in passing that involve quantifying and offsetting the value of an action.)

But the other thing about the Dark Arts is that sometimes we need to use them. In the case of causal self-modeling, that’s because sometimes your causal self-model is accurate. If I devoted all my material and mental resources to effective altruism, I probably really would burn out quickly.

The thing about that assessment is that it’s based on the outside view, not on my personal knowledge of my own psyche. This provides a defense against this kind of self-deception.

Similarly, a valuable question to ask is: To what extent is the you who’s making this causal model right now, the same you as the you who’s going to make the relevant decisions? This is how I justify my decision not to go vegan at this time. I find it difficult to find foods that I like, and I predict that if I stopped eating dairy I would eat very poorly and my health would take a turn for the worse. That would be the result of in-the-moment viscerally-driven reactions that I can predict, but not control, from here while making my far-mode decision.

So in the end, we do have to make these kinds of models, and there are ways to protect ourselves from bias while doing so. But we should never forget that it’s a dangerous game.

Further reading: “Dark Arts of Rationality”, by Nate Soares on Less Wrong.

Escape from the Bottomless Pit

This is the sixth of six posts I’m writing for Effective Altruism Week.

The preceding five posts in this series were about issues in effective altruism where I have a pretty good idea where I stand. In contrast, I’m going to end the series by talking about a problem that I haven’t figured out how to solve.

About seven million children are going to die this year of preventable poverty-related causes. According to GiveWell’s cost-effectiveness estimates, it is possible to save the life of one of those children by donating approximately $3,340 to the Against Malaria Foundation.

So here are two possible options that hypothetically might be available to me:

  • Do nothing. If I take this option, about seven million kids will die this year.
  • Donate $3,340 to AMF. If I take this option, about seven million kids will die this year.

From this perspective, the only difference between those two scenarios is that I’m $3,340 poorer in the second one. This does not make the second one look very appealing.

Of course, in reality there’s another major difference: on average, a child’s life is saved in the second scenario. In absolute terms, the difference between a world where that child lives and a world where they die is huge. It’s an entire human life, with all its joys and accomplishments over the course of decades. It is easily worth far, far more than $3,340.

But I’m never going to meet that child, or know anything about them. And when I look at the effects on a larger scale, the fraction of the overall problem that I’ve solved is so minuscule as to not be noticeable. Literally nothing I can do is ever going to make a significant dent in global poverty.
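The tension between the absolute and relative views is just arithmetic, treating GiveWell’s quoted figure as a point estimate:

```python
# The two scales at once, using the numbers quoted above.
cost_per_life = 3_340          # USD per life saved, GiveWell estimate for AMF
annual_deaths = 7_000_000      # preventable poverty-related child deaths per year

donation = 3_340
expected_lives_saved = donation / cost_per_life
fraction_of_problem = expected_lives_saved / annual_deaths

print(f"Expected lives saved: {expected_lives_saved:.1f}")
print(f"Fraction of this year's deaths averted: {fraction_of_problem:.9f}")
```

One entire human life, and simultaneously about one seven-millionth of the problem: both numbers are correct, and the brain only knows how to feel one of them.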

If everybody made a significant personal sacrifice, then we could easily solve global poverty, and far more than that. Everybody would get to feel the satisfaction of knowing that they’d saved not one life, but millions upon millions. I think you could frame this in such a way that most people would see the sacrifice as worth it. But we don’t have any effective ability to coordinate a solution like that, so I can’t rely on everybody else going along with what I do; I have to decide alone.

The fundamental problem here is that the number of people affected by global poverty is large, and human brains are really, really bad at dealing with large numbers. We can see that visibly saving one person’s life is worth a personal sacrifice. We can see that making a significant dent in the overall problem of global poverty is worth a personal sacrifice. But when the life you save is invisible, and the dent insignificant? That’s harder for our brains to see as actually worth it.

Even though it totally is.

There are a couple of things that I can do for myself to help mitigate this problem. One of them is to remind myself that I’m not relying on the warm glow. If donating feels like throwing money into a bottomless pit, but I know that it saves lives on average and is the right thing to do, then that’s enough for me to get myself to do it. And that’s what actually counts.

Another thing I do is follow the work of organizations that do research, and are transparent enough about it that I feel like I have a good picture of what progress they’re making. GiveWell, with their regular blog posts and research updates, is the paragon of this (with GiveDirectly earning an honorable mention). Reading their material gives me a sense that we are, slowly but surely, making concrete progress towards actually solving global poverty and the other giant problems in this world.

But still, it’s a problem. It’s all in my head, but it’s still real. And I think other effective altruists struggle with it too. If anybody has any effective techniques or ways of looking at the problem that help make dealing with it easier, I’m all ears.

In Which I Advertise for GiveDirectly

This is the fifth of six posts I’m writing for Effective Altruism Week.

There are a lot of options when it comes to charity, even if you’re only considering the ones that are plausible candidates for the most good you can do.

But if you’re looking for a charity to donate to—maybe for tax reasons, maybe because your group won a dollar auction like the ones held at my university and you have to decide which charity gets the proceeds, maybe for some other reason—and you want a single recommendation whose appeal is easy to explain and that isn’t going to change in the foreseeable future, then the winner is GiveDirectly.

Of course, that’s quite a claim, and to some extent it’s just my arbitrary and biased opinion. But in this post, I hope to convince you that it’s the right answer.

GiveDirectly’s model is unconditional cash transfers. In other words, they find the world’s poorest people and give them money.

That’s it. That’s the whole process.

Well, okay, I can elaborate a little. GiveDirectly currently operates in two countries, Kenya and Uganda. In those countries, they go visit rural villages and then find the poorest people living in those villages. (How do they know who’s poorest? The poorest people live in houses with thatched roofs, because they can’t afford metal ones.) They sign those people up for accounts with M-PESA, a Kenyan mobile-phone-based payment system (yes, Kenya has its own electronic payment system and everybody there uses it; they use different systems in Uganda). They then transfer the money to them (about $1,000 per person over the course of a year) and give them instructions on how to withdraw it from a money agent. The whole time, GiveDirectly’s field agents are going around checking that everything’s going smoothly and collecting data on recipients’ experiences and how they’re using the money. The overall overhead ratio is less than 10%, meaning that over 90 cents of every dollar donated to GiveDirectly ends up in the pocket of a recipient.

These cash transfers are unconditional, meaning that recipients don’t have to do anything other than be poor in order to be eligible for them. Some programs in other countries (often government-sponsored ones) have experimented with conditional cash transfers, meaning that recipients have to do something like make sure their kids are attending school or get them vaccinated in order to get the money. These can be a useful incentive for behavior modification, but studies suggest that unconditional cash transfers work best for increasing recipients’ overall welfare as much as possible.

Why are cash transfers such a great idea? Because different people need different things. Many recipients use their cash transfers to buy metal roofs, which don’t leak all the time like thatched ones do, and are also cheaper to own in the long run because you don’t have to be constantly repairing them. Many use them to address immediate needs such as food or medical expenses. Some use them for startup costs in some business venture, like buying a motorcycle and using it to sell transportation services to neighbors, or buying a power saw and using it to cut trees and make charcoal. There are a lot of other potential uses besides; with cash, recipients can determine for themselves what they need most and then spend it on that, obviating the need for outsiders to try to do so (and probably get it wrong). Evidence suggests that cash transfers usually aren’t spent on things like alcohol or tobacco.

Based on this logic, you could reasonably conclude that cash transfers don’t need to meet the same burden of scientific proof as other interventions. But GiveDirectly apparently didn’t get that memo, because the quality of their science is second to none. They extensively survey recipients (sometimes to the point of measuring things like cortisol levels) in order to ensure that their cash transfers are having the best possible effect. They also conduct randomized controlled trials—the gold standard of science—in order to learn more about the effects of cash transfers. In fact, their last large experiment was preregistered, meaning that the results would be published whether they were good or bad—a standard which is rarely met in any field, let alone one where scientific rigor is as often neglected as in charity.

Not only is GiveDirectly one of the most scientifically-minded global poverty charities out there, but they’re also one of the most transparent. I don’t just mean publishing a lot of metrics (although they do publish a lot of metrics, and that’s very important and by itself brings them far above what most of their peers are doing). I mean things like this bit from one of their blog posts:

We believe that stories and qualitative information can be informative and powerful, but their value depends on the manner in which they’re shared. When individual stories are shared without sufficient context into how they were chosen or how they compare to the average, they can create a misleading picture of how the organization uses donors’ dollars and the impact they ultimately have. This would be unfortunate for individual donors—and could be dangerous for policymakers and institutional funders, who fund and influence programs at massive scale. If we shared our favorite stories without any context, you might think, for example, that the woman who was able to afford eye surgery and see for the first time in two decades is representative of the average recipient.

Their solution? Pick recipients literally at random and publish stories about them.

If you’re not familiar with how most charities communicate with the public, you might not realize just how unusual this is. It is really, really unusual. Fanatical, even. Nobody does this. It goes against all the conventional wisdom about how to present yourself and what you’re doing in the charitable sector.

Considering how much money the conventional wisdom ends up sending to programs that don’t actually work, I’d say it’s long past time to try something different. And an increasing number of donors seem to agree, since GiveDirectly raised over $17 million last year.

Finally, there’s one other factor that I think makes GiveDirectly the best choice: the possibility for them to bring about systemic change.

GiveDirectly has successfully transformed cash transfers from a crazy idea into an intervention taken seriously by experts, and produced high-quality research to back it up. If they continue growing, and unconditional cash transfers continue to gain acceptance, then there’s hope that cash transfers will come to be seen as the baseline benchmark against which other global poverty interventions are compared. If that comes to pass, then when, say, the U.S. government is considering funding some foreign-aid intervention, it might ask: Why do you think that this is a better idea than simply taking however much money this will cost and giving it directly to poor people?

There are a few programs out there that might be able to meet that bar. But most of them can’t. We’d end up spending charity and foreign-aid dollars a lot more efficiently—and that means saving and improving a lot more lives.

A Brief Overview of the Effective Charity Landscape, Part 2: Systemic Change

This is the fourth of six posts I’m writing for Effective Altruism Week.

There are basically two ways that a charity can do good in the world. The first way is to run some program—preferably an evidence-backed and cost-effective one—that directly saves or improves individual people’s lives. As I wrote yesterday, the best opportunities within this approach are those that serve the global poor. The advantage of this approach is that it offers concrete, tangible results that you can be confident in. For this reason, most donations by effective altruists go to these kinds of programs. But there’s a limit to how much good you can do this way—and there are other downsides as well.

The second way is to try to produce systemic change that makes things better on a large scale. This doesn’t have to be political, though it can be. This is a high-risk-high-reward approach; most attempts to do this don’t pan out, but some problems are so important that taking a risk on them is worth it. Generally speaking, this approach takes the form of either research (to understand some phenomenon that causes harm, and what might be done to solve it) or advocacy (to get solutions actually implemented in the real world).

A quick note on politics. Many important problems can be addressed only through political change, and many effective altruists are interested in finding ways to bring this change about. That said, most effective altruists don’t believe that supporting one side or the other in mainstream electoral politics is a very effective way of doing good. There are so many players exerting so much influence there that anything you do as an individual is likely to be counteracted by someone on the other side. Instead, we’re interested in finding overlooked opportunities for reform that don’t already have powerful entities fighting either for or against them. Lots of people have strong opinions on, say, whether income tax rates should be higher or lower, and any individual voice there is likely to be drowned out—but how many people are already yelling about, say, land use reform? Not very many—which is why a concerted effort there just might be an effective way to do a lot of good.

Most causes that effective altruists are concerned about can be classified in one of four major categories.

Global poverty

I wrote yesterday about ways to help poor individuals. But it might also be possible to help the global poor on a larger scale through policy change in America and other rich countries. The Open Philanthropy Project is currently researching ways that this might be possible. A few of the more intriguing possibilities include:

  • Labor mobility. How can you massively increase the income of someone living in a poor country? By letting them come to a rich one. Advocates suggest that enabling migration could create trillions of dollars per year in economic value, possibly as much as doubling world GDP—and most of this value would go to the world’s poorest people. Of course, there are risks, and more research is needed into the probable effects of policy changes in this area—not to mention how to overcome political resistance to such changes.
  • Reforming foreign aid. The United States currently gives $30 billion per year in aid to other countries—but this money isn’t spent as well as it might be. It could be valuable to research better ways to provide foreign aid, or to politically advocate for more of it (as is done in other rich countries; the U.S. gives only about two-thirds as much foreign aid per dollar of gross national income as the United Kingdom). Policies that affect foreign aid indirectly, such as agricultural subsidies, could also prove important.
  • Addressing climate change. It’s generally believed that the worst effects of climate change will be in poor countries in tropical regions, as rising temperatures may harm agriculture and induce extreme weather events that cause mass loss of life and economic damage. This is an area where there’s already so much advocacy that additional efforts are likely to be drowned out, but there may be underexplored facets of it that provide good opportunities, such as research into geoengineering.

Existential risk

There are a number of threats that humanity may face in the future that have the potential to kill literally everyone if things go wrong. Arguably, the single most important thing we can do is to prevent this from happening.

Most of the truly dangerous risks are man-made, the result of recent technology. This isn’t surprising; nature didn’t wipe us out over the past millions of years, so it’s unlikely to suddenly change and do so tomorrow. Technology, on the other hand, changes fast and at an accelerating pace, and if we’re not careful there may not be time to deal with its negative effects before they become catastrophic.

Thus far, work on existential risk is still in the realm of relatively speculative research. The major organizations devoted to it are the Future of Humanity Institute at Oxford University, the Centre for the Study of Existential Risk at Cambridge University, and the Future of Life Institute at MIT. The Open Philanthropy Project is also studying existential risk. Specific risks currently being studied include nuclear war, biotechnology (which could be used to engineer an artificial pandemic), artificial general intelligence, molecular nanotechnology, and extreme climate change.

Animal welfare

Many effective altruists are concerned not only about humans, but about all sentient life. In particular, if your goal is to reduce suffering as much as possible, then it’s a good idea to include animal suffering, since animals substantially outnumber humans and factory farming causes a great deal of suffering.

I’m not writing extensively about this cause because I don’t have much specific knowledge of it. Animal Charity Evaluators recommends The Humane League, Mercy For Animals, and Animal Equality as effective charities working in this area. The Open Philanthropy Project is also researching prospects for addressing harms caused by factory farming.


Meta-charity
Charity recommendations don’t grow on trees. All of the research that I’ve written about in this and the last post was done by charities focusing specifically on finding out how to do the most good.

GiveWell is the organization that has made by far the largest impact in identifying the best opportunities to do good. Each year, they issue recommendations for the best charities in the first category; most of the organizations mentioned in yesterday’s post come with their seal of approval. They also, in collaboration with Good Ventures, run the Open Philanthropy Project, which researches the kind of high-risk-high-reward opportunities that this post is about. Without GiveWell, almost none of the work I’ve written about here would exist, and effective altruism could not have become the thriving high-impact movement that it is today. They deserve a round of applause.

Other organizations are devoted specifically to effective altruism outreach. Two of the more notable ones are Giving What We Can, which encourages people to pledge to devote 10% of their lifetime earnings to effective charities (I’m a member), and The Life You Can Save, which is based on the work of noted ethical philosopher Peter Singer and engages in more general outreach.

A Brief Overview of the Effective Charity Landscape, Part 1: Global Poverty

This is the third of six posts I’m writing for Effective Altruism Week.

As I mentioned in previous posts, effective altruists look for the best opportunities that we can find to do the most good. So what are those?

By far the most popular causes among effective altruists are those connected to global poverty. Over a billion people currently live in extreme poverty, which is generally defined as living on less than $1.25 per day. Approximately 19,000 children die every day of preventable diseases and other poverty-related causes. This is the great humanitarian crisis of our time, and it’s within our power to help.

Most of the best global poverty charities are focused on health interventions. A lot of people have had various bright ideas about how to alleviate extreme poverty, and a lot of those bright ideas ended up not panning out for whatever reason. But saving people from preventable diseases has generally worked pretty well.

According to the best research we have so far, the interventions that work best are:

  • Malaria nets. Malaria is one of the biggest killers in poor tropical and subtropical countries, especially in Africa. It’s caused by a blood parasite that’s spread by mosquito bites, so malaria infections can be prevented by sleeping under bednets treated with insecticide. The best life-saving health intervention that we know of is to distribute these nets to poor people living in malaria-affected regions. There are a number of charities working in this space, but by far the best is the Against Malaria Foundation, one of GiveWell’s most highly recommended charities. If your goal is to save human lives in the near term with the maximum possible cost-effectiveness, they’re the ones to donate to.
  • Deworming. Many neglected tropical diseases, most notably schistosomiasis, are caused by parasitic worms. Among diseases caused by parasites, the death toll of schistosomiasis is second only to that of malaria, and children who survive it often suffer from stunted development. Schistosomiasis and similar diseases can be prevented with inexpensive deworming pills. Two of GiveWell’s top charities are focused on deworming: the Schistosomiasis Control Initiative, which directly runs deworming programs in sub-Saharan Africa, and the Deworm the World Initiative, which works primarily in India and provides technical assistance and advocacy for government-run deworming programs there.
  • Micronutrient fortification. In rich countries, staple foods are routinely fortified with micronutrients; iodized salt is a good example. People in many poor countries don’t benefit from this, and end up suffering from micronutrient deficiency disorders, which are especially harmful for children’s development (for example, iodine deficiency is a leading cause of preventable intellectual disability in Africa and Southeast Asia). Micronutrient fortification is actually quite cheap if you do it at scale; as such, most of the best charities here work with national governments to set up fortification programs for entire populations. GiveWell notes the Iodine Global Network (IGN) and the Global Alliance for Improved Nutrition (GAIN) as standout organizations working specifically on salt iodization; Giving What We Can recommends Project Healthy Children, which works on a broader set of fortification programs.
  • There are a few less-studied interventions that show promise; GiveWell has a list of ones that they’ve looked at. Two other GiveWell standout charities are Development Media International, which produces radio and television broadcasts to promote good health practices in the developing world, and Living Goods, which supports a network of people in sub-Saharan Africa selling health products within their own communities. The Life You Can Save also has a list of recommended charities, most of which are focused on global health.

There’s also one well-studied global poverty intervention that isn’t specifically about health: cash transfers. How do you help poor people? By giving them money. The major player in the space of unconditional cash transfers is GiveDirectly, which is also recommended by GiveWell and which I’ll be writing more about in a later post.

Beyond the Warm Glow

This is the second of six posts I’m writing for Effective Altruism Week.

You often hear about the warm glow that comes from giving to those in need. It’s embedded in most of the rhetoric and conventional wisdom about charity. It’s been verified by scientific studies. There are even catchy show tunes about it. It’s probably safe to say that the warm glow is responsible for a great part of all the charitable activity that humans have ever engaged in.

As far as I can remember, I’ve never experienced it in my life.

I don’t really know why. It’s just the way I am.

For someone concerned about altruism, this might sound like a cause for alarm. But it needn’t be. Because the warm glow isn’t the be-all and end-all of charity. Remember, the goal isn’t to feel good—the goal is to do good.

The warm glow is powerful. But it’s also dangerous. The activities and interventions that feel good often aren’t the most effective ones for making the world a better place. Donors seeking the warm glow—and charities seeking those donors—may be lured towards programs that don’t work as well.

This is not a hypothetical concern. Most international charities have this problem to some extent or another. The message has to be optimized to make it sound as though something like a particular family getting a particular cow can be attributed directly to your donation. The truth tends to be more complicated—and after looking past this donor illusion, one might wonder whether a cow is really what that family needs most.

So if the warm glow doesn’t motivate me, what does? Well, that’s really two different questions.

First, why have I donated to charities serving the global poor, and pledged to continue doing so in the future? Quite simply, because I looked at the arguments and at the numbers, and concluded that it was the right thing to do.

This is not a very exciting answer. There’s no real emotion in it. But in my particular case, it happens to be true.

I can’t apply this dispassionate moral logic everywhere in my life. It’d be nice if I could, but I’m only human and there are plenty of times when I don’t do the right thing, even if in theory I know what the right thing is. In this particular domain, though, I’ve found that—again, for me personally—it works.

And in the end, as long as you do the right thing, it doesn’t much matter how you get there—so figure out what route gets you there, and then take it.

The second, and more interesting, question is this: What motivates me to advocate for effective altruism? Why am I interested in this idea?

That part is not a matter of dispassionate analysis for me. Quite the opposite—I care passionately about effective altruism. It might be the most exciting idea I’ve yet encountered.

Everyone wants to make the world a better place. And lots of people say they’re making the world a better place. But if you’re a person who’s inclined to be skeptical, the thousandth person asking you to support their pet cause—without demonstrating that it’s really better than the others you might be supporting—starts to wear thin.

But when someone comes along who’s actually waded through the chaos, not seeking to push a particular pet intervention but open to whatever might do the most good? And they’ve actually succeeded at finding opportunities that we could reasonably call the best there are? And they’ve shown their work, so you don’t have to take their word for it? And they’re continuing to actively seek opportunities that are even better than those?

That’s exciting. That’s what makes me believe that we have a shot at actually fixing this messed-up world.

That’s what makes me want to be an effective altruist.

Winning at Charity

This is the first of six posts I’m writing for Effective Altruism Week.

Effective altruism is a new idea that’s bringing about a revolution in charitable giving. This post is about why that’s a good idea—starting from the beginning.

The practice of charity is older than dirt; humans are social animals, and there have always been times when individual humans acted for the benefit of others. This behavior might have evolved so that we’d help our biological relatives, or so that we’d help those who might someday be able to help us in turn—but the human tendency towards altruism has long been broader than that. Even ancient religions codified the practice of alms-giving, showing that they valued helping the less fortunate for the sake of doing so.

This is how charity began—within the community. But over time, the world has become more interconnected. For an early human, the entire world was only about 150 people, but those hunter-gatherer bands gave way to larger villages, which gave way to cities. The spread of commerce brought distant corners of a civilization closer together. The printing press enabled the widespread dissemination of information, and the carrack opened the Age of Sail and closed the distance between the continents. Today’s transportation and communications technology has brought your community of 150 people up to seven billion and counting.

Seven billion—and all with their own needs and desires, coming from all different cultures and economic situations, and frustrated by myriad individual and social ills. In response to their needs, a dizzying array of charitable causes has arisen: from the classic assistance to the local poor, to aiding victims of war and famine and pestilence, to scientific research and the arts, and countless others besides.

That basic social instinct is still there; most of us want to do our part to help. But you can’t help them all. Your resources, your time and money, are limited. How do you choose where best to direct them?

The traditional answer was to help those in your community—because traditionally, that was all there was. In a more interconnected world like ours, you wouldn’t necessarily choose a cause that was geographically close to you (though you might well do that), but you’d likely choose one that you were somehow connected to. After all, what else would you do?

Perhaps, at some point in past centuries, as the world was growing more interconnected, some bright forward-looking soul might have wondered if this was the best we could do. Might it be possible to go beyond simply giving time or money to some cause or another? Might it be possible to sift through all the causes and charities, compare them, and find the ones which offer us the opportunity to do the most good with our limited resources? Might it be possible, in short, to start playing to win in the game of charity?

After all, surely some charitable opportunities are better than others. And some philosophers, such as the founders of utilitarianism, had started to think in this direction—that one should aim to do the greatest good for the greatest number, as that philosophy is commonly summarized. But for most of human history, the answer to these questions was no.

Because good intentions aren’t good enough, if you want to be truly effective, and not just able to say that you did your part. You need an understanding of those individual and social ills that prevent human flourishing, an understanding that can only come from science—both natural science, and social sciences like economic and political theory (to account for the roles of markets and governments in people’s lives). You need sophisticated statistical methods, in order to measure and quantify the impact that charities have, and compare them side by side. And you need that transportation and communications technology, to actually reach the people you’re trying to help, and to coordinate enough people from all over the world to start making a difference.

For most of human history, we didn’t have that stuff. But now isn’t most of human history.

In the early 21st century, some bright forward-looking souls—a mixture of financiers, philosophers, technologists, and social theorists—asked themselves if it might be possible to start playing to win in the game of charity. And they realized that in this century, the answer might be yes.

So they rolled up their sleeves and started doing research. And they got results. And they realized that these results would only be useful if others knew about them and could follow them, so they started a movement.

That’s effective altruism—the idea that we should, with the power of science and global coordination, find the areas where we can have the greatest positive impact, and then direct our limited resources towards those areas. The opportunity to save lives, or to prevent suffering, or to contribute meaningfully to a bright and secure future for humanity—once reserved for the likes of Carnegie and Rockefeller—is now accessible to ordinary people like you and me.

Let’s start playing to win.

Further information: