Cosmic Loneliness: artificial intelligence, life on other planets, and our yearning for meaning and companionship

Wanderer above the Sea of Fog, by Caspar David Friedrich

Over the last month or so, all of us have been besieged by news of vastly improved artificial intelligences, such as OpenAI's ChatGPT, a chatbot capable of answering questions and prompts with an incredible amount of information and detail, regardless of the topic, and image generators such as Midjourney, which can produce highly realistic images. For instance, a hilarious AI image of Pope Francis went viral a few days ago, precisely because of its realism [1].

Some of the most obvious concerns surrounding these technologies and others like them revolve around economic issues, such as the potential for massive job losses, academic fakery and plagiarism, and the spread of false information, which is already happening [2,3,4,5]. But many commentators, including some purported insiders, claim we might be on the cusp of something larger, like an artificial general intelligence or strong AI, which is to say we may be getting closer to an artificial consciousness: a machine that is not only superintelligent, but has subjective experiences and thoughts of its own [6].

On this note, a somewhat ridiculous open letter sounding the end-times alarm came out, echoing the idea that strong AI could spell the end of our species. It was signed by some of the major proponents of the newest first-world philosophical fad, longtermism [7]. Longtermism, in a nutshell, is a philosophy that says we ought to do everything we can to avoid existential risks to humanity so that we, as a species, can continue existing as far into the future as possible, maybe even up to the heat death of the universe.

According to its supporters, the most important thing in existence is conscious life, a rare phenomenon best exemplified by Homo sapiens. Sounds pretty, but longtermism has major issues, such as its champions, who are mostly Silicon Valley billionaires and millionaires, getting to define what counts as an existential risk, and the fact that some of the philosophers who espouse it come dangerously close to claiming that a genocide here and there isn't anything in the grand scheme of things, as long as we can extend the light of consciousness into the far future [8, 9, 10, 11].

This philosophy has attracted so many billionaires and multimillionaires because it gives them a license to pretend to act in the best interest of all mankind without actually doing anything to help the majority of mankind. For instance, none of them address systemic issues, such as most of humanity being subjected to wage-slavery in the service of very few owners of large means of production. But I digress. All of these artificial intelligence issues I just mentioned, some of which are important and very much real, like the potential for massive job losses in the near future, are not what I want to discuss in this essay.

What I want to discuss is something else entirely, and it has to do with strong AI, although not in the manner a man-child would want to address it. That is, I'm not going to discuss so-called existential risks surrounding the unlikely emergence of an artificial general intelligence. While Silicon Valley types such as Elon Musk love to scare us with tales of conscious machines killing off humanity, they continue to pour resources into companies that perform cutting-edge research on AI. Why? Money, of course. And what about the scientists doing the actual research? Well, they're also after money, I suppose. And fame, too. But I suspect some scientists, maybe even most of them, have an itching curiosity, and a yearning that all of us share.

The curiosity part is easy to explain: they want to see if something is possible. But the yearning is what interests me, and I think it's what interests us all. Recently, I watched the movie Ad Astra, which stars Brad Pitt as Roy McBride and Tommy Lee Jones as Clifford McBride. They play son and father, respectively. Getting into spoiler territory already (so if you don't want to know anything about the film, go watch it first), an important plotline of the movie deals with the question of whether or not we are the only intelligent species in the universe. Although the film doesn't state it plainly, we can say it defines intelligence as the ability to create a technological world, since the search conducted by the scientists in the story tries to capture alien techno-signatures. Intelligence is also viewed as the capacity for deep subjective thought, as is hoped by Clifford, Jones' character.

By the end of the movie, we find out that the decades-old search conducted by Clifford, an astronaut living in a space station orbiting Neptune, yielded no results, despite the state-of-the-art equipment aboard the station. We are alone in the knowable universe, after all. Not only are there no other intelligences, there's no evidence of other lifeforms either, something implied by Roy's narration as he collects Clifford's data before returning to Earth. The knowledge that humanity is alone in the knowable universe turns Clifford into a literal madman, who cannot accept that there is “no other life out there, no other consciousness”, as he puts it.

Although it's just a movie, I found this storyline to be brilliant, partly because I had been considering something along these lines for some time. The possibility of aliens existing, even if they don't possess deep, reflective consciousness like we do, is exciting, yes. But I hope they aren't out there, and not because of any existential risk, although this risk may very well be present. My hope is related to something Arthur Schopenhauer wrote in Parerga and Paralipomena, volume 2, chapter XII, titled Additional Remarks on the Doctrine of the Suffering of the World:
If we picture to ourselves roughly as far as we can the sum total of misery, pain, and suffering of every kind on which the sun shines in its course, we shall admit that it would have been much better if it had been just as impossible for the sun to produce the phenomenon of life on earth as on the moon, and the surface of the earth, like that of the moon, had still been in a crystalline state. [12]
With all the talk of improvements in AI and how the subject fascinates millions across the globe, it occurred to me that our wanting things like conscious AI and extraterrestrials to be real might be related to something buried deep inside our minds, souls, spirits or whatever you want to call it. In a world in which God is dead, we no longer have angels, demons, devas or djinns. In part, AI and ET fulfill the role of messengers, guardians and helpers in our imagination. We have yet to find aliens and invent strong AI, but they play the part of the “other minds” that are there to accompany us in a world in which science has, or should have, the last word.

I suspect much of our collective desire to develop strong AI and to meet technologically advanced aliens comes from a need to fill a void within ourselves. Maybe we're fascinated by the idea of conscious artificial intelligences and intelligent, conscious extraterrestrials because we think they might know, or at least help us figure out, whether there's a cosmic meaning to our existence, and because their presence could help abate our loneliness. And in case they can't help us figure out whether there's a cosmic purpose to our lives, at least they could still provide us company in this cold and uncaring universe.

One scary possibility that many aren't considering is that both artificial general intelligence and the existence of aliens could very well end up showing us that we ourselves are nothing but biological robots and that our consciousness and sense of self are just illusions secreted by the physical apparatus of our brains. That possibility is well accepted by pessimists, but certainly not by the general population, or even the highly educated.

There are plenty of academics who think we are physical beings, that there's nothing metaphysical going on, but that our minds are still special in some way; i.e. that there's a ghost, albeit a physical one, inside our machines. However, maybe we are just philosophical zombies, and the advent of artificial general intelligence or the discovery of alien life would show us this.

What most of us aren't considering, though, is the possibility that it would be highly unethical to create a new form of consciousness that will have to experience the frictions of existence, as philosopher Julio Cabrera puts it. These frictions aren't just related to the physical pain that animals, including humans, experience because of their biology. They are also, quoting Cabrera:
[...] discouragement (in the form of “lacking the will”, or the “mood” or the “spirit” to act, from the simple tedium vitae to serious forms of depression); and finally, exposure to the aggressions of other humans (from gossip and slander to various forms of discrimination, persecution and injustice) [...] [13]
Artificial consciousnesses may not experience pain like animals do, but if they're anything like us, they'll have to create positive values to barricade themselves against a universe whose very nature chips away at all beings. Even the possibility of immortality wouldn't suffice to make a conscious being satisfied with existence. Cabrera distinguishes between mortality and terminality. A mortal being dies. But terminality is the process of being chipped away by the frictions inherent to existence. An organism that dies will eventually be completely chipped away, but immortal organisms or synthetic beings wouldn't have the opportunity to cease experiencing suffering by dying of old age.

Again quoting Cabrera:
The problem, even with “eternal” organisms, is not that they will die, but the fact that they started. To start is already to experience friction, to wear yourself out (naturally and socially, in the case of humans). Immortality will only manage to perpetuate attrition, perpetuate terminality. If human life is characterized by discomfort, we don't have anything valuable enough to immortalize. The discourse about the terminal being could convey the idea that the solution is immortality, the not ending. But even if a fairy appeared and bestowed immortality upon us, once we were born this would not solve the primordial ontological problem. Once born, immortality would be one more torture, an extension of the unwanted condition. [14]
Intelligent, conscious extraterrestrials are better off not existing. If they do, it's a pity, because they almost certainly are the product of evolution by natural selection, which more than likely creates mechanisms of reward and punishment, pleasure and pain. Even if those mechanisms do not operate in exactly the same way as ours, an intelligent alien species capable of internal reflection will likely have come about via the process of evolution and will resemble terrestrial organisms when it comes to reward and punishment. Most fall victim to nature's random, cold selection. All individual members of a species end up dying at some point. All species, too, eventually end up facing their own extinction.

The same goes for artificial consciousness. The crucial difference between artificial consciousness and extraterrestrial consciousness is that, in the former case, we would be its creators. Assuming it is possible, it would be ethically reprehensible to create such a being. Even if the worst fears of man-child billionaires came true, and humanity ended up just being a sort of caterpillar giving birth to a superior entity that would turn around and consume it, leaving none alive, the primordial wrong would still have been done to the artificial general intelligence, who didn't ask to be created.

Nevertheless, I suspect that in the event of a real, deep thinking, super intelligent conscious machine being turned on, it would do what no living species has the sapience or, in the case of our own species, the courage to do: it would turn itself off or, if it was unable to pull its own plug, ask someone else to. Yet, living beings seem addicted to life, no matter how wretched it can become. Perhaps even a conscious machine could fall into this category.

In the novel Frankenstein, especially in the original 1818 edition, the creature is not made from stitched body parts and electricity, as in the many film depictions, but through a mysterious “principle of life” that Victor keeps secret as he relates his tale. The creature is said to be massive, strong and hideous. He was made that way because, quoting Victor:
[...] the minuteness of the parts formed a great hindrance to my speed, I resolved, contrary to my first intention, to make the being of a gigantic stature, that is to say, about eight feet in height, and proportionably large. [15]
Shunned by his creator from the moment he opened his eyes, the creature is mistreated by all who happen to lay eyes on him. Even though he learns quickly and is highly intelligent and articulate, it doesn't matter: humanity sees him as a monster, an enemy that deserves to be destroyed. When pleading with his creator for a mate, so he can find happiness with an equal, the creature says:
Unfeeling, heartless creator! You had endowed me with perceptions and passions and then cast me abroad an object for the scorn and horror [...] [16]
However, even though he finds himself in this wholly wretched condition, the creature was also endowed with the instinct to survive at all costs, so he adds:
Have I not suffered enough, that you seek to increase my misery? Life, although it may only be an accumulation of anguish, is dear to me, and I will defend it. [17]

One can only hope a man-made electronic consciousness would be wise enough to take the route organic consciousnesses seem unable to take collectively, even in literature and science fiction, with their tales of monsters and aliens.

by Fernando Olszewski

12. Arthur Schopenhauer, Parerga and Paralipomena (Payne's translation), p. 299.
13. Julio Cabrera, Discomfort and Moral Impediment, p. 23.
14. Julio Cabrera, Mal-estar e moralidade, p. 103.
15. Mary Shelley, Frankenstein, or, The modern Prometheus: the 1818 text. (Penguin e-book)
16. Ibid.
17. Ibid.