Photo by Greg Rakozy

Even with its bigger-picture viewpoint, longtermism is short-sighted in a cosmological context

Considered one of the most dangerous views of our time, longtermism might sound ambitious, but only if you forget that humans are not the center of the world.

Lidia Zuin
7 min read · Sep 16, 2022


A while ago, the essay Understanding “longtermism”: Why this suddenly influential philosophy is so toxic brought up the concept of “longtermism”, a philosophy that invites people to think in the long term, with a bigger picture of humanity’s potential. The idea resonates not only in academia, in the work of authors such as Nick Bostrom and researchers at the Future of Humanity Institute (FHI), but also in industry, with people such as Elon Musk.

But what does longtermism actually mean? Years ago, when I met the writer Alexey Dodsworth, I learned about his philosophy thesis on the ethics and metaphysics of transhumanism. He discusses the fact that Thomas Hobbes considered death, or more specifically violent death, the greatest of evils (summum malum). This concept was later updated by Hans Jonas, who proposed that the supreme evil lies not in death itself, but in the extinction of a species.

When longtermism speaks of existential risks, it is not necessarily concerned with wars or even climate change. From its bigger-picture viewpoint, these are just small details that, in the long run, won’t make much difference. Even though world wars are terrible events that have already caused the deaths of millions of civilians, from a longtermist perspective they are not necessarily an existential risk to humanity since, after all, we are still here.

Émile Torres, author of the essay mentioned above, is very didactic in explaining why longtermism is problematic. In the past, Torres himself flirted with the concept, even publishing a book about long-term visions of the future, but he now considers longtermism dangerous and controversial because of its utilitarianism. From a utilitarian perspective, if we want to safeguard the human race, we should invest in protecting richer countries, as they are supposedly at a more “advanced stage” of development and can therefore guarantee that further innovation carries on. Like a Sophie’s choice, but on an existential scale.

More than unethical and racist, this conclusion is also quite hasty. Torres mentions the case of a brilliant economist who died in the 1930s, when he was only 26. His loss was felt not just on an individual level, but also because of the potential such a bright mind held. Hypothetically, had this man not died so young, he could have contributed much more to society. However, living longer doesn’t necessarily mean consistency or even improvement, as this man could have made other choices than pursuing his career in economics, for instance.

That is, when longtermism speaks of the need to safeguard humanity so we can truly fulfill our potential, this is also very hypothetical, or even wishful thinking. Still, billions of dollars are being invested in philanthropy, companies and governmental projects that share this mindset. A lot of people and money are riding on this bet, something I have previously discussed in an essay about billionaires going to space.

Additionally, longtermism has its connections with other ideas such as accelerationism or even the concept of Roko’s basilisk. That is because we are talking about a self-fulfilling prophecy: the belief that humanity will indeed reach a superior level of evolution if technology is used to amplify our physical, biological and cognitive functions. For this to come true, however, we need to invest in related projects immediately.

It is at this point that we find an intersection between longtermism and transhumanism. Though longtermists try to avoid the latter term, due to its associations with eugenics in the United States, both aim to safeguard and evolve humanity by augmenting the species through technology and science, making it the fittest in a Darwinian sense. But, as discussed before, we are really speaking of the survival of the richest, not the fittest.

In his thesis, Alexey suggests that transhumanism could actually be a means of dealing with Jonas’ concerns about the supreme evil. However, unlike the longtermists, Alexey proposes not a utilitarian approach but a cosmocentric view. That is, we are not speaking of space colonization enterprises where humans (or posthumans) impose their values for their own survival, but of a consideration and recognition of other kinds of existence, even inanimate ones.

Alexey quotes Martyn J. Fogg to explain the ethical perspective of cosmocentrism:

“The Cosmos has its own values, they claim, and its mere existence gives it not only the right to exist, but the right to be preserved from any human intent. Such a moral principle we might call the Principle of the Sanctity of Existence, with uniqueness as its basis of intrinsic value. Moral behavior under such a system would involve nonviolation of the extraterrestrial environment and the preservation of its existence state.”

This kind of reflection is taken up by different strands of transhumanism (a philosophical movement with several branches). However, according to Torres, it seems that those who have the money and influence are not necessarily interested in guaranteeing harmony among all species, but above all in human survival.

So utilitarian is this perspective that, for longtermists, some sacrifices are worth it if you think in the long run. It is literally the plot of any dystopia: though many might die, some will survive and enjoy a perfect society where there is no famine, death or pain. It doesn’t matter if you or I die in the process, since we are not part of the richest 1%, provided that this 1% survives and humanity, as a species, does not go extinct. It’s basically the final scene of the movie Don’t Look Up.

It’s interesting to note that in this Netflix movie the survivors land in a tropical forest, where they dwell naked among the greenery, alluding to the imagery of the creation myth, or what paradise would look like for those fit to enter it. On the other hand, when we speak of survivalists, or especially primitivists, there is a narrative about rejecting advanced technology so that we can return to a stage where we were supposedly “noble savages”: then again, another myth.

With that, I take a bigger leap by mentioning a talk that went viral on Twitter around the time of Brazil Game Show (BGS). The developer Mark Venturelli had his proposal for a talk on the future of games approved by the event’s curators but, in fact, used his time to discuss the problems and controversies of NFTs, blockchain and all the mumbo jumbo that can come with the word “future”. In the case of blockchain, Venturelli questions why we are trying to develop a technology that is so expensive (in processing power and energy consumption, and therefore in carbon footprint) just to offer a tool that automates authenticity procedures and trust among peers.

Society has, of course, developed on the basis of conventions, even though there are ruptures and disobedience along the way. The problem is that, for many people, we may have reached a point where it is impossible to trust each other. There is no commitment or sense of honor being entrusted or even ritualized anymore. So, since it’s impossible to trust others, maybe we should create a technology that will do that for us, right? What Venturelli suggests, though, is that even if this proposal sounds very innovative, it is actually a step backwards in social terms.

With that in mind, would we still think that longtermism is innovative, and that we are indeed looking at the “evolution” of humanity through its lens? The thing is, in the West, we love extremes: either we wish to become machines and live in a simulation, or we should go back to the caves (and here is where the transhumanist and anarcho-primitivist worlds collide). So the question I ask is whether it is really desirable to aim for survival, even in small numbers, as we see with endangered animals.

It is no news that grandiose narratives about “land”, “race” or “morality” may come with dangerous ideas. The problem is that these narratives are getting more and more sophisticated, to the point that it is no longer easy to identify such tendencies. Creating a human colony on Mars might seem amazing, a real demonstration of how advanced we are as a species, but what if these humans living on Mars are bound to live shorter and worse lives? What if they are the only humans still alive?

And what about the modification of ecosystems for human survival? Recent discussions about the “invasion” of foreign plants in Icelandic deserts raised the question of interference in the environment, something particularly sensitive for countries that went through colonization.

In the field of astrobiology, for example, there is a branch concerned with the ethics of space exploration and how we should deal with other species and existences found outside of Earth, so that we don’t repeat the mistakes we made among ourselves. All this to say that longtermism is dangerous not only because it disregards other life forms in the name of human survival, but because its grandiosity is still short-sighted when we consider that humans are just a very tiny part of a whole universe of life (animate and inanimate), all of it intertwined.

By stressing the importance of the survival of one particular species, we ignore the fact that life is not a singular concept, but an expansive one. We forget that a good chunk of our body is populated by bacteria, that the pollination of plants is partly done by birds and insects, that forests would be swallowed by dead trees if fungi didn’t exist to decompose that material, and so forth. Therefore, if we want to avoid the summum malum of extinction, as proposed by Jonas, we must not fall into the temptation of thinking that human life is more important than any other, and should thus adopt a cosmocentric approach rather than a longtermist one.

Written by Lidia Zuin

Brazilian journalist, MA in Semiotics and PhD in Visual Arts. Researcher and essayist. Technical and science fiction writer.
