## Saturday, January 29, 2011

### risk-aversion in charitable giving

My friend Jeff wrote that you shouldn't try to avoid risk in charitable giving, I wrote with some thoughts about why it's maybe conceivable that you should diversify charitable giving, and he replied. His reply outlines some reasons that you might think you should diversify, and addresses all of them.

Jeff's reasons that you might think you should diversify probably make more sense than what I had in mind, but I want to write a bit about what I did have in mind, since it's a little different. (I don't think I explained my thinking very well in that note!)

First, I want to suppose that there is decreasing marginal utility of charitable contributions. For any reasonably large organization, this won't be true at all on the scales at which we normal people could give. (But I'll address that concern later.) I think this assumption makes some sense-- suppose your organization is trying to feed hungry people. Then (I assume) it will start with the hungry people who are cheapest to make a difference for. Additional hungry people will be more expensive to feed, because you started with the cheapest ones. There could be other factors contributing to concavity, like the 'room for more funding' argument that Jeff mentions (though he seems right that this isn't very relevant right now).

On the other hand, it could be that this concavity assumption is completely wrong. Lots of things benefit from economies of scale. It might be that all of the relevant causes we would want to contribute to are more like this. (I will not address this concern later.)

If the concavity assumption came into play at our scale, that might be a reason for diversifying giving. But it doesn't.

Now let's also suppose individuals are uncertain about the effectiveness of the various organizations they might contribute to. To simplify, I'll pretend that there are only four organizations to contribute to (A, B, C, and D), each with identical concave (but not very concave at small scales) 'utility' curves-- except that really only one of them is at all effective, so for all but one, the utility curve is actually flat at zero.

Pretend you have information (but not perfect information) about which one is effective. Maybe a little birdie told you "B! B! B is the only effective one!", but you know there's a 50% chance that the little birdie got confused and switched from the effective one to an ineffective one. So listening to the birdie gets you a 50% chance of choosing right, which is much better than the 25% you'd get by picking at random.

In the absence of concavity, the utility-maximizing move is to donate only to charity B. At a society-wide scale where the errors are uncorrelated (the birdie's mistakes for each person are independent), the collectively optimal decision is for each person to donate to the charity that the birdie tells her to.

But what if the errors are correlated? Say the birdie's behavior (whether to make a mistake, what the mistake is) is the same for everyone. In that case, it's definitely suboptimal for everyone to listen to the bird and donate only to the organization the bird says is effective.

In reality, the errors aren't perfectly correlated, but they aren't uncorrelated either. And of course it's not the case that every charity is either effective with some fixed utility curve or completely ineffective-- it's just meant to be an illustrative example. But given the example, it seems at least conceivable that this kind of reasoning could apply in real life, at least for some people.
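The correlated-versus-uncorrelated contrast can be made concrete with a toy calculation (my own numbers and utility function, nothing from the posts): take a concave U, many donors, and a birdie that's right half the time.

```python
import math

# Toy model (assumed for illustration): N donors each give d dollars, the one
# effective charity yields U(x) = sqrt(x) for x dollars received, and the
# birdie names the right charity with probability 1/2.
U = math.sqrt
N, d = 100, 1_000

# Perfectly correlated errors: everyone hears the same (possibly wrong) name,
# so the entire pool lands on a single charity that is effective with
# probability 1/2.
eu_correlated = 0.5 * U(N * d)

# Independent errors: about half the donors hear the right name, so the
# effective charity receives roughly half the pool almost surely. (Treating
# the binomial count as exactly N/2 is an approximation, but a mild one.)
eu_independent = U(0.5 * N * d)

print(eu_correlated, eu_independent)  # ~158.1 vs ~223.6
```

By concavity, U(Nd/2) > 0.5·U(Nd) whenever U(0) = 0, so independent errors beat perfectly correlated ones here; with a linear U the two would come out equal, which matches the point that the concavity is doing the work.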

That phrase 'for some people' is important-- what this argument means for you (if it means anything for anyone! I admit that's not obvious) depends on what you can tell about the correlations between your information and everyone else's. If you find yourself donating to extremely unpopular charities, then this doesn't apply to you at all (though maybe a reverse sort of argument does-- perhaps there's information in the choices others are making; maybe they're onto something?). If you're donating to the most popular charities, maybe it does apply to you. And if the top 1000 charities are all of about equal size and carrying out very different activities, then surely this argument doesn't apply to anyone.

Also, this depends not only on the correlations in the errors, but also on what you think of the 'size' of your own error. If you've thought hard about charitable giving and feel confident in your choice (or if you have such a friend, and are confident in his choice!), that's different from if you've never put much thought into it and are basing your decisions on what your neighbors or whoever are doing, when they haven't put much thought into it either.

I'm not sure if this makes sense-- but maybe? If anyone manages to read through the whole post (or even part of it...), I'd love to hear what you think!

1. "In that case, it's definitely suboptimal for everyone to listen to the bird and donate only to the organization the bird says is effective."

In your hypothetical, are there other ways to know which charity is best? If not, and all you have to go on is a birdie broadcasting "B" that's only 50% likely to be right, you have no better evaluation criterion or strategy, so I think you still give to B.

2. Sorry, I guess I'm still being confusing by not being concrete enough.

Say that the utility generated by each of charities A, B, C, and D when donating $x is U($x) if the charity is effective, and 0 otherwise (in this hypothetical, only one of them is effective).

As assumed above, say U is concave, for example:

U($50k) = 4
U($100k) = 6
U($200k) = 9, etc.

Suppose there are four of us, each planning to give $50k, and the birdie tells us (collectively) that "B" has a 50% chance of being the effective one.

Plan 1: We all donate to B. [Expected utility: 4.5.]

Plan 2: We each donate to a different one, "covering all our bases". [Expected utility: 4.]

Plan 3: Two of us donate to B, one to C, and one to D, leaving A with nothing. [Expected utility: 5-- this assumes that when the birdie is wrong, the effective charity is equally likely to be C or D.]
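For what it's worth, the three plans can be checked mechanically. To reproduce the stated numbers I have to assume that when the birdie is wrong, the effective charity is C or D with equal probability (never A); a quick sketch:

```python
# Expected-utility check of the three plans. Assumption (mine): given that the
# birdie erred, the effective charity is C or D with equal probability, so the
# marginal chances are A: 0, B: 1/2, C: 1/4, D: 1/4.
U = {0: 0, 50_000: 4, 100_000: 6, 200_000: 9}  # utility if that charity is effective
p_effective = {"A": 0.0, "B": 0.5, "C": 0.25, "D": 0.25}

def expected_utility(donations):
    """donations maps charity -> total dollars it receives."""
    return sum(p * U[donations.get(c, 0)] for c, p in p_effective.items())

plan1 = {"B": 200_000}                                        # all four give to B
plan2 = {"A": 50_000, "B": 50_000, "C": 50_000, "D": 50_000}  # cover every base
plan3 = {"B": 100_000, "C": 50_000, "D": 50_000}              # two to B, one each to C and D

print(expected_utility(plan1), expected_utility(plan2), expected_utility(plan3))
# 4.5 4.0 5.0
```

Plans 1 and 2 come out the same under any distribution over which charity is effective (given the birdie's 50% hit rate), but Plan 3's 5 depends on the error never landing on A.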

I guess it's more complicated if collaboration/communication isn't allowed, but I think you can achieve something more like Plan 3 by including some randomness in the individual allocation choices.
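One way to sketch the "randomness" idea: each donor independently gives their $50k to B with probability 1/2 and to C or D with probability 1/4 each, matching Plan 3's proportions in expectation. The comment doesn't give U($150k), so I assume 7.5 below purely for illustration (along with the same error distribution as before):

```python
from itertools import product

# Exhaustive enumeration of the 3^4 ways four donors can randomize, each
# choosing B with prob 1/2, C with prob 1/4, D with prob 1/4.
# U(150k) = 7.5 is my own interpolated guess; it isn't in the original example.
U = {0: 0, 50_000: 4, 100_000: 6, 150_000: 7.5, 200_000: 9}
p_choice = {"B": 0.5, "C": 0.25, "D": 0.25}      # each donor's mixed strategy
p_effective = {"B": 0.5, "C": 0.25, "D": 0.25}   # which charity is effective

eu = 0.0
for choices in product(p_choice, repeat=4):
    p = 1.0
    for c in choices:
        p *= p_choice[c]                          # probability of this outcome
    totals = {c: 50_000 * choices.count(c) for c in p_choice}
    eu += p * sum(q * U[totals[c]] for c, q in p_effective.items())

print(eu)  # 4.513671875
```

Under these particular numbers the uncoordinated mixed strategy only barely beats everyone piling onto B (4.5), and falls well short of the coordinated Plan 3 (5), so the randomization recovers some but not all of the benefit of splitting.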

Key assumptions for this "splitting" behavior to make sense: concavity in the 'utility' functions, highly correlated errors in choosing the most effective charity, and maybe some more.

I'm not saying that's how it is. But it seemed a little plausible, and then I wanted to be able to explain myself...

3. And yeah, that function is much more concave than is reasonable. And there's other unrealistic stuff about my example too.

But I guess I'm trying to say that maybe something similar could go on at a larger scale.

4. Hi Dave, I stumbled onto your blog through one of the occasional twists that happen in life, and I enjoy reading it-- the bits my non-mathematical brain understands, anyway.

As someone who does donate consistently, here are some quick thoughts. Often, the needs of the giver matter more than the features of the receiver. For example: what is important to the giver-- feeding the hungry here or overseas; if here, in my immediate community where I can physically, rather than digitally, see my money at work; for a specific one-off purpose, e.g. a natural disaster, or a long-term need, e.g. children in a very deprived situation in a local community-based project.

As a donor, do I need to have control over the money, e.g. by setting up a foundation, or will I trust an accredited organization to control it?

As a donor, what do I see as an urgent need-- food, clean water and sanitation facilities, housing, medical needs, educating someone to tertiary level, a programme with built-in exponential growth in outcomes rather than a one-off effect, redressing an inequality?

Am I more at peace with myself if I contribute to assisting? How much do I need to give and do to achieve that peace?

Once these personal questions are thought through, any donation has multiple effects. If maximum effectiveness per dollar is most important, any proven programme with results on the ground will be an efficient use of donations. If you have a percentage-for-admin limit in mind, just check the annual reports for how donated dollars are used, and choose the organisation closest to your personal ideas of what is needed and to your personal philosophy/religious/spiritual beliefs.

A good initial rule of thumb is 10% of income regardless of income level.