
Building the new order

Written by Pablo González and Pedro Nonay, trying to understand what the new world will look like.

Entry 2

Artificial intelligence – Singularity 

May 8, 2023



My new context selection.

The recent news I have selected to reflect on contextual changes is:

*****

AI and Singularity.

In the previous entries I already raised the issue of the Singularity, which is the moment when total changes will occur in society due to artificial intelligence and other technological advances.

I argue below that this issue is not the one that will define the world of the next few years (which is my goal for this series of entries), but rather the world that will come after.

However, I want to address it first to make it clear that the world of the next few years is going to be a time of transition, and that we have to make long-term decisions with the Singularity in mind.

Going into the subject, we can reach this Singularity by several different paths. The first one I want to discuss is that of artificial intelligence (AI), which is so fashionable today, and which I think is the most likely path.

Artificial Intelligence. The concept.

Let’s imagine that we manage to program a computer (or rather a network of computers, which we will call AI) to have access to all the world’s knowledge, to everything that is written, and that it is capable of finding, in seconds, the knowledge relevant to whatever it has to analyze at any given moment. And to do so in any language. Let me clarify that this is almost a reality.

Let’s imagine that we give this AI some rules to understand this knowledge, and to apply it in the search for solutions to whatever is posed at any given time. We also give it rules to differentiate between fake or merely basic publications and professional, reliable ones. We even give it rules so that it can ask itself the right questions at any given moment, … and carry out theoretical research on its own.

Let us also imagine that we give it a kind of “Constitution”, i.e., some general desired objectives to be prioritized in deciding any matter of detail. For example, the Three Laws of Robotics proposed by Asimov long ago (basically: protect humans), although, unfortunately, there could be others.

If we achieve this (and we are very close), even the most intelligent person would not have as much information in their head as this AI, nor would they be able to quickly apply all these rules of action to the specific case. Much less could they build a very large tree of resulting alternatives from which to choose the best for the final objective. Nor could such a person be an expert in all subjects at the same time, and consider all the implications. As an illustrative example, it should be remembered that, on a smaller scale, it has long been clear that the best chess player in the world cannot beat the computer.

In that case, the sensible thing to do would be to let the AI make all human decisions, because it would do so better than we would. Of course, it would be sensible only if the AI’s Constitution tells it to protect us, because if it has other objectives, it may decide to annihilate humans.

If we give that AI the governance of robots, and the ability to design and manufacture them to do the mechanical jobs (from surgeries, to cultivating the land, to building houses, …), the result is that humans would not have to do the intellectual jobs, nor the manual ones. The AI would do everything, including deciding on the most efficient form of our government and our coexistence. 

The fact is that this, although it sounds crazy and scary, is about to happen. It will not happen this year, and it will be gradual (by branches of activity), but it will come. Maybe in a few decades.

When that time comes, the very concepts of the individual, of society, of politics, of private property, … will change completely. 

We will be simple living organisms forced to do what the AI has decided is best for us, and it will give us food, housing, … Of course, we will still have the ability to feel emotions, such as love, fun, …

In reality, it won’t be much different from the way things were in ancient times, when the role of AI was filled by kings and religious leaders. In those times, ordinary people had everything decided for them, including what to believe in and the right way to act. And, allowing for the differences, it is true that the kings and religious leaders of those times had access to all the knowledge of the era, and could hire the best “decision makers”.

As the previous paragraph suggests, AI is nothing new. What is new is that it is done by a computer rather than a person. The name is also new. Perhaps we will understand it better if, instead of AI, we call it Supreme Intelligence, or better: Supreme Power, the one who knows everything and decides correctly for us. That is, what religion has always been.

AI. There will be more than one.

Faced with the scenario I have just described, many may say that this is unacceptable, and that we must stop it, or buy time to control it. Something similar has been said by relevant people (Elon Musk included), who have signed a letter asking for a pause in development (news here).

But that is impossible. From the beginning, there won’t be just one AI, but as many as developers and funders want. And while there are not millions of people with the know-how to develop one, there are hundreds (and more to come).

Perhaps a government can delay the development of a public AI in its country. It might even prohibit private AIs from being developed in its territory (it remains to be seen how to prevent them from being developed clandestinely, which would be worse, because it would be like giving the design power to the mafias). What is not going to happen is that all the countries of the world coordinate in this prohibition effort (it would be the first time).

If these prohibitions are applied, the result will be that the AI of some marginal or enemy country, or that of the mafias, will advance first. And, as in almost everything, the one who arrives first has the advantage. Do we want to be late in the race?

I think the best thing to do is to encourage development as early and as well as possible. Even stealing the best brains in the business from our enemies. No different, after all, from how the development of the equally dangerous nuclear bomb was handled. Let’s remember that the USA put in maximum effort, and took Hitler’s technicians (who were more advanced).

When several AIs are developed, it will be necessary to see how the coexistence between them works. They may decide to leave a physical space for each one to control, or to look for a form of coexistence and cooperation among them, or to do something similar to a merger, … or, even, to go to war among them. I remember here the comparison with religions throughout history.

Perhaps because it can’t be stopped, and because it’s not convenient to fall behind, Elon Musk himself, after signing the letter I mentioned above, created a new company last April aimed at building his own AI (news here).

Humans in the face of AI.

We have to forget about stopping it. Rather, we have to speed it up before “the bad guys” do it.

We only have the possibility of drafting, as well as possible, the “Constitution” that will govern this AI, which will be a task for philosophers. That said, I don’t think the USA and China will agree on the same Constitution (to cite just two very relevant players). Which again points to the high likelihood of multiple competing/coexisting AIs.

In fact, one advantage we humans have, in drafting such a Constitution, is to put the various AIs in competition for the acceptance of more humans (by giving them more happiness). This brings me again to the comparison with religions.

It would be like the existence of several countries, each with its own AI. And allowing migrations. Thus, different humans could choose different types of life. Of course, with the Constitution, it would be necessary to prevent AIs from deciding to go to war with each other.

As humans, we can also try not to lose access to an emergency shutdown button, in case we don’t like what the AI is doing. It would be something like in the movie “2001: A Space Odyssey”, when it is decided to shut down HAL 9000, which was the closest thing to an AI at that time.

The other alternative we humans could have, provided the AI’s Constitution allows it, is to live outside the control of the AI, like the explorers of the Wild West, living without the protection (or oppression) of a Law. If, on the contrary, the Constitution does not allow it, it could go like it does with HAL 9000 in the movie: the AI would decide that those dissidents are “malfunctioning machines” and would choose to eliminate them. In that case, dissidents would have to live underground, like the resistance in WWII.

For most of today’s humans, who do not participate in the design of AI, almost our only option is to be as informed as possible, adapt as best we can to the gradual advances that will be made, and … when the final breakthrough comes, choose the AI we trust, or choose to join the resistance.

Having said that, and going back to my discussion in previous entries about facing a world divided into the blocs of China and the USA, I think that Chinese society is better prepared to follow the instructions of a new supreme leader (the AI). We are more individualistic.

To finish with the subject of AI, I want to recommend reading this link, and this other one.

I also think it important to consult this one, which shows in which subjects ChatGPT is already near-expert compared to humans, … and there are quite a few in which it scores above 80% of the experts.

The total connection of humans.

Another way to reach the Singularity would be to develop a kind of chip implanted in our brains that would allow the total connection of human brains. So much connection that thought would become collective, with no way to tell how much each person’s brain has contributed. Just as we do not now distinguish whether our thoughts come from the right or left hemisphere of our brain.

This would mean that we would be a common brain, with information distributed in different hyperconnected biological beings. The concept of the individual would disappear.  

As with AI, such a common brain would have access to all human knowledge. Its ability to make correct decisions could be similar to that of AI.

On the other hand, contrary to the case of AI, because the concept of the individual is lost, connected humans would not be able to make decisions against this common brain, because we would be part of it. Exceptions could arise only from failures in communication, or in the chips.

However, I see the possibility of reaching the Singularity through this path as much more distant in time than through AI, since those chips are far less developed (although there is research, such as that of Elon Musk with Neuralink -always him-). In addition, billions of people would need to accept the implantation of such a chip, which would be much more complicated than vaccinating everyone against Covid-19 (although some say those vaccines included chips, so maybe that work has already been done -for the record, I do not believe it-).

Descartes and his provisional morality.

With what I said about the Singularity, I wanted to make it clear that I think we know what is going to happen in a few decades, whether we like it or not, and that we should prepare ourselves as best we can. But that should not be our concern for the short and medium term.

With what has been said in previous entries, and what we all know, it is clear that the current structure of the world is malfunctioning in almost all aspects, and that changes must be implemented.

So, we know that we have to create a new social structure to be used during a transitory time of a few decades. That is, it is in our interest that this new structure be as easy and quick as possible to implement, because if it is not, we will not make it in time. We do not need it to be an absolute truth. It is enough that it allows us to live comfortably during this transition period.

This leads me back to Descartes, and the genius of his book “Discourse on the Method”. I am not referring to what we have all studied of the scientific method (whose importance I do not deny), but to his “discovery” of a “provisional morality”.

Descartes came to the conclusion that he was not going to believe anything until he had proved it scientifically. But he knew he would not finish that work in his lifetime. So he decided that, while he was at it, he needed to have some principles, even if he didn’t believe in them. He also realized that, since he did not believe in those principles, which he nevertheless needed for social coexistence and peace of mind, it was best to choose the ones that would take the least time away from his scientific work. And, the genius is that he decided that the ones costing him the least time were those best accepted by his contemporaries. That is to say, after not believing in anything, and seeking to change everything: he chose to live as always, at least in appearance.

I say this because acting in that way, perhaps with small and easy changes, may be how we proceed until we reach the Singularity. There is also the possibility that we will not manage it. We shall see, …

Readings that have interested me.

In the process of writing this entry I have come across many issues on other subjects that I would like to share. But, as this entry has run a bit long, I will leave them for the next ones, except for this:

This is as far as I go for today. 

As always, I welcome comments on my email: pgonzalez@ie3.org

If you have any feedback or comments on what I’ve written, feel free to send me an email at pgr@pablogonzalez.org.

You are allowed to use parts of these writings. There are no property rights. Please do so mentioning this website.

You can read other writings by Pablo here:
