Extinctionists x Longtermists

Rings of Saturn, by Bruce Pennington

For the last few years I have followed the work of the American philosopher Émile P. Torres, and even had the privilege of having him follow me back on Twitter (now X) a while ago. In some of my previous essays, I have linked to or referenced multiple articles he has written for different online magazines—articles in which Torres rightly and courageously criticizes nefarious Silicon Valley billionaires' ideologies such as “longtermism” and “effective altruism”, which may sound nice but are extremely problematic.

In a nutshell, longtermism is a philosophical view that considers the extension of consciousness into the far future of the universe to be the ultimate good, be it human consciousness or some type of consciousness engineered by us that resembles ours in some important way, specifically a type of consciousness capable of pleasurable experiences. If current scientific estimates prove correct and the universe lasts for many trillions of years before reaching heat death, then humanity's ultimate goal, according to longtermists, should be to colonize the future of the universe with as much consciousness as possible.

Longtermism, a philosophy associated with the Swedish philosopher Nick Bostrom, is connected to the effective altruism movement, whose main figure is the British philosopher Will MacAskill. Effective altruism was born out of MacAskill's own interpretation of Peter Singer's utilitarian view that people ought to donate far more to the poor than they actually do. From this, MacAskill ended up working out a moral framework in which it was morally desirable for people to go into finance and make as much money as possible, because by doing so they would have more to donate later than they do now.

Growing out of this perspective, longtermists argue that certain serious problems facing entire populations aren't worth spending resources on, because they aren't truly existential risks to human consciousness surviving and expanding into the future.

It isn't hard to understand why effective altruism and longtermism have long been favorites among billionaires, especially tech industry billionaires and multimillionaires. They can hoard resources and feel good about it, believing they are at the forefront of extending human consciousness into the far future. In their minds, fixing structural societal problems, such as extreme income inequality, isn't worth the trouble, since they're doing “God's work” by investing in new technologies that will help spread humanity throughout the cosmos.

What are millions of human beings dying in poor dysfunctional nations, or even in wealthy nations, compared to the trillions upon trillions of future minds living on Mars, colonizing the Milky Way galaxy, and even reaching other galaxies, in the next few million years? That's how they think. And Émile Torres has been right to call them out on how dangerous they are.

However, I do not agree with Torres when it comes to something else.

He rightly points out that these longtermist types are very much in bed with supremacists who want to wipe out certain populations, essentially exterminating most of humanity, and that some of them embrace the notion that we should build a godlike Artificial General Intelligence, or AGI, which would replace us in the expansion to the stars. Others fall somewhere in the middle, believing that some of us, but not all of us, will be able to digitize our minds and become one with machines, essentially overcoming our puny organic states and achieving some sort of technological singularity, enabling them to live longer and have easier access to the stars.

Because of fanciful and potentially harmful beliefs such as these, Torres calls them all “extinctionists”. Here's where I cannot agree with him.

Recently, on X, he wrote that we should oppose pessimists, antinatalists, and misanthropes, as well as longtermists, transhumanists, and singularitarians. To his credit, he does separate the two groups, taking care not to lump them together—although I would also argue that Schopenhauerian philosophical pessimism, from which antinatalism can follow as a consequence, and which I agree with, shouldn't be automatically grouped with misanthropy. It's certainly not out of a lack of sympathy or compassion for my fellow humans, and other sentient creatures, that I and many other pessimists agree with Schopenhauer that it would have been better if the Earth were as sterile as the Moon.

The label of extinctionist fits much better with pessimists and antinatalists. In fact, responding to a post on X by Elon Musk, in which he stated that the real battle wasn't left versus right but extinctionists versus humanists, I gladly wrote: “I guess I'm an extinctionist?” On other occasions, Musk, who has publicly embraced longtermism, has stated that the real struggle is between extinctionists and expansionists. Sure, I agree. And I'm an extinctionist, since I agree with Schopenhauer, Cioran, Zapffe, Cabrera and Benatar that it would have been better if none of us had ever existed.

In essence, Torres' views aren't necessarily at odds with the idea that human consciousness, or sentience in general, could or even should extend into the far future. They are at odds with how this should come about, with who should be in the pilot's seat, so to speak. And I agree with him: if we MUST go on, then people like Elon Musk or Peter Thiel are among the worst we could have piloting the ship: people inspired by Bostrom, MacAskill and other thinkers not too far from them, but who are more openly horrible, such as Curtis Yarvin and Nick Land.

If we MUST go on, then let us instead fix the fixable issues of our world. Let us stop turning a blind eye to human suffering amplified by greed and hate, suffering regarded as necessary by soulless and often wrong market technocrats and realpolitik pushers. But even if and when we fix all of this, it will still be better never to have been, at least in my view. It's worth bringing Philip Mainländer into the discussion here. Mainländer was both a grim philosophical pessimist and a socialist. He not only saw no problem in espousing both views, but argued that achieving a more prosperous and egalitarian humanity was a necessary step toward complete extinction, which, according to him, was the ultimate goal of the entire physical universe.

However, one need not go deep into Mainländer's will-to-death metaphysics to defend the view that, yes, there are definitely better ways to steer humanity's collective boat, while also understanding that it is all for naught, and that we should freely abstain from creating new sufferers, peacefully ending this amazing yet sad and uncalled-for chapter in the history of the universe.


by Fernando Olszewski