The Ethics of Giving in the Very Long Term
Glenn weighs in on effective altruism and longtermism
Today we’ve got the second installment of our new feature in which the TGS team presents me with a story, idea, video, or topic and documents my spontaneous reaction to it. In this episode, Mark Sussman, the editor of this newsletter, brings to my attention a Twitter thread by Émile P. Torres about “effective altruism” and its controversial offshoot, “longtermism.” Mark presents a summary of both of these ideas, but you may have read about them in the news over the past week or so. (A good starting point is Gideon Lewis-Kraus’s profile of effective altruism’s most influential proponent, the philosopher William MacAskill.) That’s because one of the major figures in longtermist circles, the (perhaps former) cryptocurrency billionaire Sam Bankman-Fried, has been at the center of a scandal that resulted in the bankruptcy of his cryptocurrency exchange, FTX.
We recorded this episode weeks before the implosion of FTX, but Mark refers to Bankman-Fried’s attempt to parlay his wealth into political influence, and both he and I evince skepticism about the prospects of Bankman-Fried maintaining his purported ethical position while dealing in vast sums of money. That’s not all we talk about, though. Neither Mark nor Nikita nor I are experts on the topic, and you’ll hear us thinking through some of the possible implications of effective altruism and longtermism in real time. What, if anything, do we owe humans living in the distant future? Is effective altruism just a way for Silicon Valley elites to salve their guilty consciences? Can “presentism” get you canceled?
This post was previously available only to subscribers, but I’ve taken it out from behind the paywall. However, I encourage everyone to subscribe. Besides features like this one, subscriber benefits include early access to the ad-free version of the weekly podcast episode, the opportunity to ask questions for the exclusive monthly Q&A episode John McWhorter and I record, ticket pre-sales for live events, and full access to the archives and all new content.
The work we do here at the newsletter and podcast would be impossible without your generous support, so many thanks to those of you who currently subscribe. If you’re not yet a subscriber, please consider becoming one.
I am a professional forecaster and econometrician. One thing I know for sure is you cannot predict more than a few years out at best (think of how Covid changed everything). Sacrificing the present for the future is a fool's game. We don't know what the future holds.
Don't have time for podcasts, but the subject matter interests me. From one of the teachers of MacAskill, Arif Ahmed: "William MacAskill’s ineffective altruism" on UnHerd. https://unherd.com/2022/11/how-effective-is-william-macaskills-altruism/
From the article, this is a quote from MacAskill:
> “The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us… Should I care whether it’s a week, or a decade or a century from now? No. Harm is harm, whenever it occurs.”
I dunno what it is about smart people. It seems to cause them to have a pea-brain. From the author of the UnHerd article:
> "The second point is that it’s hardly obvious, *even from a long-term perspective,* that we should care more about our descendants in 25000AD — not at the expense of our contemporaries."
> “It’s worth spending five minutes to decide where to spend two hours at dinner; it’s worth spending months to choose a profession for the rest of one’s life. But civilization might last millions, billions, or even trillions of years. It would therefore be worth spending many centuries to ensure that we’ve really figured things out before we take irreversible actions like locking in values or spreading across the stars."
So, given these statements, MacAskill is an idiot. His approach *might* make some sense if we could judge what the future is going to be. Then we *might* be able to tell how our actions will impact the future, and what the best actions would be. Anybody who thinks they have a clue about the future a century from now is an idiot. I rest my case.