Effective Altruism — A Critical Assessment
Framework note — evaluating EA’s analytical tools and structural blind spots, with implications for Wellspring’s theory of change
What It Is
Effective altruism (EA) is a philosophy and movement that uses evidence and reason to identify the most effective ways to help others. It was formalized at Oxford in the early 2010s by philosophers such as William MacAskill and Toby Ord, building on Peter Singer’s utilitarian ethics. The core proposition: some ways of doing good are orders of magnitude more effective than others, so we should figure out which ones and do those.
The movement evaluates causes through three lenses: importance (how many people affected, how deeply), tractability (how solvable with additional resources), and neglectedness (how few resources are currently devoted to it). The most promising causes score high on all three.
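To make the filter concrete, here is a minimal sketch of how the three lenses combine. The cause names and 1–10 scores are invented for illustration; real EA evaluators (80,000 Hours, for instance) work with rough order-of-magnitude estimates rather than clean integers, but the multiplicative logic is the same.

```python
# Minimal sketch of importance/tractability/neglectedness (ITN) scoring.
# All cause names and scores below are hypothetical, invented for illustration.

causes = {
    # cause: (importance, tractability, neglectedness), each scored 1-10
    "malaria prevention":  (8, 9, 6),
    "affordable housing":  (7, 5, 4),
    "asteroid deflection": (9, 3, 9),
}

def itn_score(importance: int, tractability: int, neglectedness: int) -> int:
    # Multiplying rather than averaging means a near-zero score on any
    # one lens sinks the whole cause: the "high on all three" requirement.
    return importance * tractability * neglectedness

for cause, scores in sorted(causes.items(), key=lambda kv: itn_score(*kv[1]), reverse=True):
    print(f"{cause}: {itn_score(*scores)}")
```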
At its best, this is hard to argue with. GiveWell’s research has directed over a billion dollars toward interventions estimated to save a life for roughly $5,500. That’s real. The rigor impulse is sound.
What’s Worth Borrowing
The importance/tractability/neglectedness framework is genuinely useful for cause prioritization. When Wellspring evaluates where to focus energy — site selection, financing structure, governance design, community programming — asking “is this important, tractable, and neglected?” is a good filter.
The “scout mindset” principle — seeking truth rather than defending existing beliefs — is just good epistemics. We’ve already committed to not cherry-picking sources that agree with us.
The insistence on measuring outcomes rather than intentions is healthy. Good intentions don’t house anyone.
The Structural Blind Spot
EA’s most significant weakness, and the one most relevant to Wellspring, is its inability to engage with systems. It evaluates interventions, not architectures. It can tell you bednets are cost-effective. It cannot tell you why 700 million people can’t afford bednets, or what to do about that.
Critics across the political spectrum have identified this gap. The Jacobin critique: EA focuses on individuals providing necessities but has nothing to say about the system that determines how necessities are produced and distributed. The philosopher Amia Srinivasan noted the movement’s lack of engagement with global inequality and oppression as structural phenomena, observing that it has consisted largely of middle-class white men fighting poverty through conventional means.
This isn’t a minor oversight — it’s a fundamental limitation of the framework. EA accepts the existing system and tries to optimize charity within it. It has no theory of structural change. That puts it philosophically at odds with what Wellspring is building, even though individual EAs might support affordable housing as a cause area.
The Charity Mutual Fund Problem
Taken to its logical conclusion, EA produces something like GiveWell — a charity mutual fund with analysts optimizing your donation portfolio. EA proponents would say “yes, and that’s good, because your feelings about giving are a terrible allocator of resources.”
But this solves for the wrong variable. It optimizes for impact-per-dollar while ignoring that the act of giving — the human relationship between giver and recipient, the participation in community, the dignity of mutual aid versus technocratic resource allocation — is itself part of what makes charity work. Not just emotionally, but structurally.
Mutual Aid says “we help each other because we’re in this together.” EA says “I calculated that your suffering scores higher on my utility function, so you get the bednet.” Both deliver the bednet. Only one builds the social fabric that prevents the next crisis.
This is the tension with Lift Where You Stand: EA asks “what should you do to maximize global utility?” Lift Where You Stand asks “what’s preventing you from contributing what you already have?” EA is allocative and top-down. Lift Where You Stand is emergent and trust-based. EA treats people as resource nodes in a global optimization problem. Lift Where You Stand treats people as capable adults with surplus to share.
The Robin Hood Problem
Texas’s “Robin Hood” school funding plan (Chapter 41, Texas Education Code) illustrates EA-style reasoning applied to policy. The logic was clean: property tax revenue varies wildly by district, so redistribute from “property-rich” to “property-poor” districts to equalize per-pupil funding.
But “property-rich” didn’t mean “wealthy community.” In rural Texas, it often meant oil and ranch land generating high tax revenue for districts with 200 students. The per-pupil funding looked enormous because the denominator was tiny, not because spending was extravagant. When the state recaptured that surplus, it gutted districts already running lean — districts where bus routes span fifty miles and there are no economies of scale.
Meanwhile, the receiving urban districts had problems money alone couldn’t solve, and the redistributed amounts barely moved the per-pupil needle in districts with tens of thousands of students.
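The denominator effect at work here is easy to see with invented numbers. Nothing below is actual Texas district data; the figures are hypothetical, chosen only to expose the mechanism.

```python
# Hypothetical illustration of the per-pupil denominator effect in a
# recapture-style funding formula. All numbers are invented.

def per_pupil(tax_revenue: float, students: int) -> float:
    return tax_revenue / students

rural_revenue, rural_students = 4_000_000, 200        # big oil/ranch tax base, tiny district
urban_revenue, urban_students = 300_000_000, 50_000   # large urban district

print(per_pupil(rural_revenue, rural_students))   # 20000.0 per pupil: looks "rich"
print(per_pupil(urban_revenue, urban_students))   # 6000.0 per pupil: looks "poor"

# Recapture everything above a 10,000-per-pupil cap and hand it to the
# urban district:
cap = 10_000
recaptured = (per_pupil(rural_revenue, rural_students) - cap) * rural_students
print(recaptured)                    # 2000000.0 taken from the rural district
print(recaptured / urban_students)   # 40.0 added per urban pupil
```

Recapture halves the rural district’s budget while moving the urban district’s per-pupil figure by a rounding error; the ratio looked decisive only because the rural denominator was 200.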
The framework evaluated the intervention (redistribute funding) against a metric (per-pupil equity) without asking the structural question — why does school funding depend on property taxes in the first place? It couldn’t account for context, because context doesn’t fit in a cost-effectiveness ratio. The community sitting on oil land had something and was using it where it stood. Robin Hood said “our global optimization says that dollar does more good elsewhere” — which is precisely what EA says about charitable giving.
The Longtermism Drift
A significant strand of EA argues that because future generations could number in the trillions, preventing existential risk (AI misalignment, pandemics, asteroid impacts) dwarfs everything else by the math. This is how you get from “buy malaria nets” to “fund AI safety labs and Mars colonization” — a trajectory that conveniently redirects resources toward the tech industry’s own priorities.
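The arithmetic driving this shift is plain expected value. Every number below is a hypothetical stand-in, not a figure from the longtermist literature; the point is the shape of the calculation, not the inputs.

```python
# Hypothetical expected-value arithmetic behind longtermist prioritization.
# All inputs are invented stand-ins for illustration.

future_people = 1e12     # posited future population ("trillions")
risk_reduction = 1e-6    # tiny assumed cut in extinction probability

ev_existential = future_people * risk_reduction
print(ev_existential)    # 1000000.0 lives saved in expectation

# The same money spent on a measurable intervention:
bednet_budget = 100_000_000   # hypothetical grant
cost_per_life = 5_500         # roughly GiveWell's cited cost per life saved
print(bednet_budget / cost_per_life)   # ~18181 lives: "dwarfed" by the math above
```

Because a large enough posited future multiplied by any nonzero probability shift wins this comparison, the conclusion is effectively baked into the inputs.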
The longtermist wing is where EA’s structural blind spot becomes most dangerous. It produces reasoning like: earning money through exploitative means is justified if you donate it effectively enough (the “earn to give” philosophy). This is the logic that made Sam Bankman-Fried the movement’s poster child — and the logic that couldn’t see the problem with a professional ethicist attaching his movement to an unregulated cryptocurrency exchange.
Where This Leaves Us
EA’s analytical toolkit has value. Its theory of change does not. Wellspring isn’t trying to optimize the charity-to-poor-people pipeline — it’s trying to restructure ownership so the pipeline isn’t needed.
The relevant comparison frameworks are Mutual Aid (reciprocity over resource allocation), Anarchism as Political Theory (structural critique over symptom management), and Capitalism vs Free Trade (the ownership question EA refuses to ask). EA can tell you which housing charity to donate to. It cannot tell you that the housing system itself is the problem, or that community land trusts are a structural alternative to the cycle of extraction that creates the need for charity.