My friend Jeff wrote that you shouldn't try to avoid risk in charitable giving, I wrote with some thoughts about why it's maybe conceivable that you should diversify charitable giving, and he replied. His reply outlines some reasons that you might think you should diversify, and addresses all of them.
Jeff's reasons that you might think you should diversify probably make more sense than what I had in mind, but I want to write a bit about what I did have in mind, since it's a little different. (I don't think I explained my thinking very well in that note!)
First, I want to suppose that there is decreasing marginal utility of charitable contributions. For any reasonably large organization, this won't be true at all on the scales at which we normal people could give. (But I'll address that concern later.) I think this assumption makes some sense-- suppose your organization is trying to feed hungry people. Then (I assume), it will start with the hungry people who are cheapest to make a difference for. Additional hungry people will be more expensive to feed, because you started with the cheapest ones. There could be other factors contributing to concavity, like the 'room for more funding' argument that Jeff mentions (but he seems right that this isn't very relevant right now).
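Here's a toy version of that feeding story (my own illustration-- the cost model and the people_fed helper are made up for the sake of the example): if the k-th cheapest hungry person costs a bit more to feed than the one before, then the number of people fed is a concave function of dollars spent, and each additional $100 accomplishes less than the last.

```python
def people_fed(dollars, base=1.0, step=0.1):
    """People fed if the k-th cheapest person costs base + step*k dollars
    (k = 0, 1, 2, ...).  The organization feeds the cheapest people first."""
    k, spent = 0.0, 0.0
    fed = 0
    while spent + base + step * fed <= dollars:
        spent += base + step * fed
        fed += 1
    return fed

# Successive gains shrink (diminishing returns):
gains = [people_fed(d) - people_fed(d - 100) for d in (100, 200, 300)]
```

Under this cost model the first $100 feeds the most people, the second $100 fewer, the third fewer still-- which is all the concavity assumption requires.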
On the other hand, it could be that this concavity assumption is completely wrong. Lots of things benefit from economies of scale. It might be that all of the relevant causes we would want to contribute to are more like this. (I will not address this concern later.)
If the concavity assumption came into play at our scale, that might be a reason for diversifying giving. But it doesn't.
Now let's also suppose individuals are uncertain as to the effectiveness of the various organizations they might contribute to. To simplify, I'll pretend that there are only four organizations to contribute to (A, B, C, and D), each with identical concave (but not very concave at small scales) 'utility' curves. Except that really only one of them is at all effective, so for all but one, the utility curve is actually flat at zero.
Pretend you have information (but not perfect information) about which one is effective. Maybe a little birdie told you "B! B! B is the only effective one!", but you know there's a 50% chance that the little birdie got confused and switched from the effective one to an ineffective one. So listening to the birdie gets you a 50% chance of choosing right, which is much better than the 25% you'd get by guessing.
In the absence of concavity, the utility-maximizing move is to donate only to charity B. At a society-wide scale where the errors are uncorrelated (the birdie's mistakes for each person are independent), the collectively optimal decision is for each person to donate to the charity that the birdie tells her to.
But what if the errors are correlated? Say the birdie's behavior (whether to make a mistake, what the mistake is) is the same for everyone. In that case, it's definitely suboptimal for everyone to listen to the bird and donate only to the organization the bird says is effective.
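To check this intuition, here's a small Monte Carlo sketch (my own construction-- the square-root utility curve, the dollar amounts, and the simulate helper are all assumptions made just for the illustration). It compares expected total utility when the birdie's tips are independent across donors versus when everyone hears the same tip, and also tries a hedged strategy where each donor splits her donation instead of going all-in.

```python
import math
import random

def simulate(n_donors=400, trials=2000, correlated=True, split=None):
    """Average total utility over many trials.

    One of four charities is effective, with concave utility
    sqrt(dollars received); the other three are worthless.  Each donor
    has $1 and hears a 'birdie' tip that names the effective charity
    with probability 1/2 (otherwise a random wrong one).  If `split` is
    set, each donor puts that fraction on the tipped charity and spreads
    the rest evenly over the other three; otherwise donors go all-in."""
    total = 0.0
    for _ in range(trials):
        effective = random.randrange(4)

        def birdie():
            if random.random() < 0.5:
                return effective
            return random.choice([c for c in range(4) if c != effective])

        if correlated:
            tips = [birdie()] * n_donors                # everyone hears the same tip
        else:
            tips = [birdie() for _ in range(n_donors)]  # independent tips

        donations = [0.0] * 4
        for tip in tips:
            if split is None:
                donations[tip] += 1.0
            else:
                donations[tip] += split
                for c in range(4):
                    if c != tip:
                        donations[c] += (1 - split) / 3
        total += math.sqrt(donations[effective])
    return total / trials
```

Under these assumptions, independent tips with everyone going all-in come out best (about half the donors hit the effective charity); perfectly shared tips with everyone going all-in come out worst (half the time society-wide giving accomplishes nothing); and shared tips with each donor hedging-- roughly three quarters on the tip, the rest spread around-- land in between. That is, diversifying helps exactly when the errors are correlated.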
In reality, the errors aren't perfectly correlated, but they aren't uncorrelated either. And of course it's not the case that each charity is either effective with some fixed utility curve or completely ineffective. It's just meant to be an illustrative example. But given the example, it seems at least conceivable that this kind of reasoning could apply in real life, at least for some people.
That phrase 'for some people' is important-- what this argument means for you (if it means anything for anyone! I admit that's not obvious) depends on what you can tell about the correlations between your information and everyone else's. If you find yourself donating to extremely unpopular charities, then this doesn't apply to you at all (though maybe a reverse sort of argument does-- perhaps there's information in the choices others are making-- maybe they're onto something?). If you're donating to the most popular charities, maybe it does apply to you. And if the top 1000 charities are all of about equal size and carrying out very different activities, then surely this argument doesn't apply to anyone.
Also, this depends not only on the correlations in the errors, but also on what you think of the 'size' of your error. If you've thought hard about charitable giving and feel confident in your choice (or if you have such a friend, and are confident in his choice!), that's different from if you've never put much thought into it and are basing your decisions on what your neighbors or whoever are doing, when they haven't put much thought into it either.
I'm not sure if this makes sense-- but maybe? If anyone manages to read through the whole post (or even part...), I'd love to hear what you think!