Abstract
The concept of the technological singularity is frequently reified. Futurist forecasts inferred from this imprecise reification are then criticized, and the reified ideas are incorporated into the core concept. In this paper, I try to disentangle the facts related to the technological singularity from more speculative beliefs about the possibility of creating artificial general intelligence. I use the theory of metasystem transitions and the concept of universal evolution to analyze some misconceptions about the technological singularity. While it may be neither purely technological, nor truly singular, we can predict that the next transition will take place, and that the resulting metasystem will demonstrate exponential growth in complexity with a doubling time of less than half a year, exceeding the complexity of existing cybernetic systems within a few decades.
1. Introduction
Technological progress is visibly accelerating. There are many exponential trends in addition to the commonly known Moore’s law, including the growth in the number of computers connected to the Internet, or in the amount of data acquired by neuroimaging technologies. Moreover, the more the technologies develop, the steeper the exponential growth becomes. So, the overall progress seems not simply exponential, but hyper-exponential, asymptotically going to infinity within a finite time period, namely, within a few decades (see, e.g., [1]). A point at which a function is not defined is called a singularity in mathematics. By analogy, a hypothetical point at which technological progress becomes unbounded is called a technological singularity (Singularity).
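This analogy can be made precise with a minimal model (the equation below is my illustration, not a formula from the timeline studies): if the growth rate of some measure of technological capacity x increases super-linearly with x, the solution diverges at a finite time, unlike a pure exponential.

```latex
\frac{dx}{dt} = k\,x^{1+\varepsilon},\ \varepsilon>0
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{\bigl(1-\varepsilon k x_0^{\varepsilon}\,t\bigr)^{1/\varepsilon}},
\qquad
x(t)\to\infty \ \text{ as } \ t\to T=\frac{1}{\varepsilon k x_0^{\varepsilon}} .
```

For ε = 0 the same equation gives ordinary exponential growth, which never diverges; the singular behavior comes precisely from growth feeding back into its own rate.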
The idea of Singularity excites many people (including myself), who naturally try to speculate about its possible implications. Some speculations may seem to go too far (e.g., [2]) with strong but ungrounded statements and predictions. However, this should not devalue the idea itself, and criticism of such ungrounded claims should not be considered an argument against the Singularity correctly understood.
One of the most frequently encountered misconceptions consists in identifying the Singularity with the creation of artificial (super) intelligence. Even the Wikipedia article [3] on the technological singularity starts with the statement that the invention of artificial superintelligence will be the cause of Singularity (although it later indicates that Vernor Vinge, who popularized the notion of Singularity [4], wrote about other ways to Singularity).
While artificial superintelligence is one possible path to the Singularity, it is not the only one. Many critics identify the Singularity with this one possibility (e.g., [5]), which is itself controversial, at least if it is taken to imply that superintelligence must be achieved with modern computers. The Singularity will not necessarily come about through the creation of Strong AI on digital computers.
The responsibility for such misconceptions lies partly with the adepts of the Singularity themselves, since they not infrequently assert all their beliefs and desires simultaneously, assuming that these can reinforce each other. Their critics, in turn, seek out the weakest points, assuming that refuting them will render the whole concept invalid. The critics are also usually biased by their own desires, which can be summarized as “the Singularity is impossible because we do not want humans to disappear.”
Here, I will try to disentangle the grounded claims from the personal beliefs and desires, and underline what we can really say about Singularity. In particular, I will give a definition of Singularity in terms of metasystem transitions as an objective phenomenon without referring to any particular technology.
2. What Do We Know
2.1. Metasystem Transitions
Although the theory of metasystem transitions is rarely mentioned in connection with the concept of Singularity (e.g., [6] mentions it specifically in connection with the emergence of the Global Brain), it is an essential scientific foundation of this concept. This theory, proposed by Valentin Turchin in [7], was originally based on the study of the evolution of cybernetic (control) systems, as exemplified by the evolution of nervous systems. The evolution of these systems takes place through a sequence of metasystem transitions, each of which consists in the creation of a higher-level control system that chooses between states, or different instances, of an already existing lower-level control system.
Let us consider a cybernetic system that has an internal state, for example, a spatial location. Control of this location by effectors is defined as “motion,” which itself is initially uncontrolled. Control of motion is defined as “irritability,” which is enabled by the development of sensors. In time, this leads to the development of control of irritability as a simple reflex: a coordinated but rigid reaction of effectors to certain patterns of sensory input. Control of multiple simple reflexes leads to the development of a more complex, conditional reflex (an association). Control of associations is defined as “thought.” And control of thoughts (as defined in this cybernetic perspective) creates culture (see Table 1).
Table 1. Stages of evolution [7].

Control of position = motion
Control of motion = irritability
Control of irritability = simple reflex
Control of simple reflexes = complex reflex (association)
Control of associations = thought
Control of thoughts = culture
In addition, Valentin Turchin considered some sequences of metasystem transitions within culture (especially in mathematics), and we can identify similar metasystem transitions in many other systems as well. For example, different levels of gene control emerged during biological evolution, while the Internet can be considered a metasystem with respect to computers.
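To make the recursive structure of a metasystem transition concrete, here is a minimal sketch (the class names and toy policies are mine, not Turchin’s): each level is itself a system whose states are policies for the level below, so a still-higher controller can select among them in exactly the same way.

```python
class System:
    """Anything with a controllable state (at the lowest level: position)."""
    def __init__(self, states):
        self.states = states
        self.state = states[0]

class Controller(System):
    """A metasystem: its own 'states' are policies for driving a subsystem,
    so yet another Controller can be stacked on top of it unchanged."""
    def __init__(self, subsystem, policies):
        super().__init__(states=policies)   # current policy = self.state
        self.subsystem = subsystem

    def step(self, observation):
        self.subsystem.state = self.state(observation, self.subsystem.states)

# position -> motion -> irritability: each transition wraps the hierarchy again
body = System(states=["left", "right"])
motion = Controller(body, policies=[lambda obs, s: s[obs % len(s)]])
irritability = Controller(motion, policies=[lambda obs, ps: ps[0]])
irritability.step(observation=1)   # selects a motion policy
motion.step(observation=1)         # the selected policy drives the body
print(body.state)                  # -> "right"
```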
Some extensions of this theory exist (e.g., [8]), but we will not go into detail here.
2.2. Timeline
What is missing from the theory of metasystem transitions is a timeline. The concept of Singularity is usually justified by timelines that track key events in evolution, supplemented by some quantitative measure, for example, memory capacity. Although different authors choose different key events as indicators, the curves of growing complexity, or of decreasing time intervals between paradigm shifts as measured by key events, are consistent, as shown by Ray Kurzweil with 15 lists of key events [1].
These findings, the details of which I will not reproduce here, suggest that metasystem transitions representing global or universal evolution (e.g., [9]) follow two regular patterns:
- Systems with a certain level of control grow exponentially (at least, before the next metasystem transition) in their capacities or complexity.
- The time before the next metasystem transition decreases geometrically and the growth rate increases geometrically from transition to transition.
The Singularity is thus the point at which these patterns cease to exist.
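The second pattern already fixes a finite “singular” date, as a one-line calculation shows: if the interval between consecutive transitions shrinks geometrically, the total time needed for infinitely many transitions is a convergent geometric series.

```latex
\Delta_n = \Delta_0\, r^{\,n},\quad 0<r<1
\quad\Longrightarrow\quad
T \;=\; \sum_{n=0}^{\infty} \Delta_n \;=\; \frac{\Delta_0}{1-r} \;<\; \infty .
```

An unbounded number of transitions thus fits into a bounded span of time; the Singularity dates produced by timeline studies are, in effect, estimates of this accumulation point.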
These regularities are quite well grounded in the empirical data. It is difficult to deny that, for example, both the number of neurons in nervous systems and the number of transistors in computers have been growing exponentially, and the doubling time of the latter is much shorter. What is still uncertain is the significance of these observations and conclusions that can be drawn from them.
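As a back-of-the-envelope check on the transistor trend, the doubling time follows from any two points on an exponential curve (the two data points below are approximate and serve only to illustrate the calculation):

```python
import math

def doubling_time(t1, n1, t2, n2):
    """Doubling time of exponential growth passing through (t1, n1), (t2, n2)."""
    return (t2 - t1) * math.log(2) / math.log(n2 / n1)

# Approximate figures: Intel 4004 (1971, ~2.3e3 transistors) versus a
# recent flagship processor (2021, ~5e10 transistors).
print(doubling_time(1971, 2.3e3, 2021, 5e10))  # ~2 years, i.e., Moore's law
```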
3. Predictions
We will distinguish two types of predictions about Singularity, namely, the extrapolation of its timeline and qualitative depiction of its possible scenarios (see, e.g., [1,4]).
3.1. Timeline Extrapolation
Extrapolation is probabilistic induction from past trends. If we do not use additional information, then the simplest extrapolation is the most probable one. Here, the simplest extrapolation suggests an accelerating sequence of metasystem transitions, such that complexity will grow to infinity in a finite period of time. The True Singularity is this imaginary point, the date of which is somewhat uncertain, but most evidence suggests (see, e.g., [1]) that it is no more than a few decades away.
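A minimal sketch of how such a date can be read off a hyperbolic trend (the data below are synthetic, generated from the model itself, purely to show the procedure): under x(t) = C/(T − t), the reciprocal 1/x falls linearly in t and hits zero exactly at the singular date T.

```python
import numpy as np

# Synthetic "complexity" observations from x(t) = C / (T - t)
T_true, C = 2045.0, 100.0
years = np.array([1960.0, 1980.0, 2000.0, 2010.0, 2020.0])
x = C / (T_true - years)

# 1/x = (T - t)/C is linear in t; fit a line and find its zero crossing
slope, intercept = np.polyfit(years, 1.0 / x, 1)
print(-intercept / slope)  # -> 2045.0, the extrapolated singular date
```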
There are many studies that try to predict when artificial general intelligence will emerge (e.g., [10,11,12,13]). However, I will not rely on these predictions here, in order not to become entangled in the controversy regarding the very possibility of thinking machines.
More interestingly, the study [14] showed the synchronicity of different approaches to predicting long-term trends (including economic cycles and environmental and generational analysis, besides purely technological trends), suggesting that there will be a technological surge in the 2040s, which might correspond to the Singularity if certain technologies become available at that time.
According to the current scientific picture of the world, nothing can be truly infinite, so such a “True Singularity” is thought to be physically impossible. Of course, our model of the world may change in the future, but at this point the simple extrapolation of one curve does not provide sufficient grounds for changing it. Rather, it is more likely that this extrapolation will not hold indefinitely.
It is quite likely that the growth will decelerate at some point. This does not invalidate the concept of Singularity, because something that is not actually infinite can be close enough to infinite for any practical purpose. Thus, the real question is how high the complexity of cybernetic systems will grow.
The second simplest extrapolation is an S-shaped curve. This posits that growth is exponential for a period of time, but that it slows down as it approaches some limit. There is reason to believe that this S-shaped curve is the usual pattern in the growth of complexity in metasystem transitions (as indicated, e.g., in [14]), and that this pattern is repeated in a fractal-like way on different time scales.
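A standard form of such a curve is the logistic function, which is indistinguishable from an exponential while far below its ceiling:

```latex
x(t) = \frac{K}{1 + e^{-k(t - t_0)}},
\qquad
x(t) \approx \frac{K}{e^{k t_0}}\, e^{k t} \ \text{ for } \ x \ll K,
```

with the inflection point at t = t₀, where x = K/2 and growth is fastest; beyond it, growth continues but decelerates toward the limit K.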
On a human time scale, the shape of the long-term curve is not critical. For humans, it does not even matter whether the curve will saturate (or even fall) at some point or remain unbounded. What does matter is when and how fast it will decelerate. There is no reason to believe that such deceleration will be very rapid or abrupt.
The S-shaped curve is quite a conservative extrapolation, and we do not see signs that deceleration has already started. Thus, we still have not passed the inflection point, and after this point we will see slower but still rapid growth (excluding catastrophic scenarios). Hence, if the inflection point is a few decades away, this will be enough for an “Essential Singularity,” after which the cybernetic systems at the cutting edge of universal evolution will become far more complex than the currently existing systems.
3.2. Possible Scenarios
No specific scenario can be considered a justified prediction, and thus criticism of a scenario cannot be used to criticize the general concept of Singularity. Does this mean that this concept cannot be used to make testable predictions, that is, that it does not satisfy Popper’s criterion of falsifiability and thus is unscientific? Not precisely. We cannot say which specific metasystem transition will take place, but we can predict that some transition will most likely take place within a certain time range, and that the resulting metasystem will demonstrate exponential growth of its complexity with a doubling time of less than half a year, exceeding the complexity of existing cybernetic systems within a few decades.
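To see why a sub-half-year doubling time makes this a strong prediction, consider the implied growth factor over three decades:

```latex
2^{\,30\ \text{yr} \,/\, 0.5\ \text{yr}} \;=\; 2^{60} \;\approx\; 1.2\times 10^{18},
```

so even a metasystem that starts at the complexity of today’s largest cybernetic systems would, by this extrapolation, outgrow them by eighteen orders of magnitude within thirty years.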
Nevertheless, we can try to assess which scenarios are relatively more or less probable. All scenarios are based on so-called Singularity technologies, that is, technologies that accelerate their own development, usually through some form of superintelligence. For example, genetic engineering can help smarter humans to appear, who will then accelerate genetic research, resulting in the emergence of even smarter humans.
Broad classes of possible Singularity technologies include bio-, nano-, info-, and maybe some other technologies, and their combinations. For example, one can talk about nanorobots populating human brains and enhancing their capabilities, or about autonomous artificial general intelligence (AGI) optimizing itself and its own hardware. These technologies have different doubling times. For example, years are needed for genetically modified humans to be born and educated. Other forms of superintelligence can emerge much faster, rendering the genetic modification route obsolete or merely supplementary, especially when the social factors associated with genetic engineering are taken into account.
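The practical weight of these differences is easy to quantify, since the time to multiply capability by a factor F scales linearly with the doubling time (the doubling times below are hypothetical placeholders, not measurements):

```python
import math

def years_to_factor(factor, doubling_time_years):
    """Years needed to grow capability by `factor` at a given doubling time."""
    return doubling_time_years * math.log2(factor)

# Hypothetical doubling times for different Singularity technologies
for name, d in [("genetic enhancement (generation-bound)", 25.0),
                ("human-computer augmentation", 2.0),
                ("self-optimizing AGI", 0.5)]:
    print(f"{name}: {years_to_factor(1000, d):.0f} years to a 1000x gain")
```

On these illustrative numbers, a 1000-fold gain takes about 250 years via the generational route but about 5 years via self-optimization, which is why slower routes tend to become supplementary.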
Of course, such an analysis is far from certain since it cannot take unknown future technologies into account, and does not consider all interactions between different technologies. It also does not take social, geopolitical, and economic factors into account, which might be necessary for predicting the future (see, e.g., [14]). Nevertheless, it can give us an educated guess about which technologies have a smaller doubling time and are most likely to lead to the next metasystem transition. However, my point here is that such predictions should not be used to criticize the concept of Singularity as such.
An additional source of prediction is the theory of metasystem transitions itself. For example, one might argue that the next metasystem will be “Control of Cultures.” One can further argue that this is already happening, in the sense of humans interacting through the Internet with each other and with artificial agents, or that it will happen in the form of a “Global Brain” (e.g., [6,15]). Although this looks like a logical consequence of the theory of metasystem transitions, the theory is not detailed enough to describe and predict the “hardware” of metasystems. For example, it says nothing about how nervous systems emerged as the new hardware of cybernetic systems, supplementing DNA. Similarly, it does not tell us what hardware is suitable for the level of culture and its subsequent metalevels.
Formerly, culture was “executed” by human brains augmented with external artefacts such as books. These cultural networks were similar to gene networks, but not to networks of neurons. Will human brains still be the hardware for the cutting edge of universal evolution, perhaps by being directly connected to each other? Or will computers become a metalevel system controlling our culture through recommendations and so forth? Will humans still be a part of the next metasystem, or will this system leave humans behind as an inefficient implementation? The theory of metasystem transitions does not provide definite answers to these questions. It simply says that the next metasystem will most likely be based on human culture, but does not say how exactly this will be implemented.
4. Misconceptions
4.1. True Singularity
The concept of the “True Singularity” (taken not just as a simplified model, but as reality) has a religious aspect, since it implies the emergence of an infinitely powerful god-like entity. As we have seen, there is no supporting evidence for this, and it contradicts whole volumes of scientific data. Of course, hardly anyone believes in the “True Singularity” in its ultimate form, but there is a soft version, in which this entity remains finite but occupies the whole Universe. This assumes that the speed-of-light limit can be overcome thanks to the development of “ontotechnologies” that modify reality itself and its physical laws (why not, if we can modify our genomes?), and that the awakened Universe will start to communicate with other sentient universes within the Multiverse. Although such ideas have some grounds in physical theories (regarding the possible place of our Universe in the Multiverse [16,17]), they are just speculations.
Such ideas are fun, but they should not be considered real predictions. Conversely, their implausibility should not be considered a counterargument against the Singularity per se.
However, it should also be noted that although many definitions of the Singularity do not explicitly refer to asymptotic technological progress, and the formal asymptotic limit of truly infinite progress cannot be achieved, mere exponential growth is not enough to achieve the Singularity, as discussed in [18].
4.2. Humans
What is the “Essential Singularity”? We can say that it is the point “in the history of the race beyond which human affairs, as we know them, could not continue,” as formulated by Stan Ulam in reference to his conversation with John von Neumann more than 60 years ago. However, this definition is far from definitive. On the one hand, many human affairs are quite different now from what they were 200 years ago. On the other hand, some activities conducted by humans, such as science, could continue even without humans, at least as humans exist today. Will a genetically modified or augmented human still be a human? Is a human who uses a computer, or paper, still a pure human? These are rhetorical questions.
We cannot define the “Essential Singularity” relative to humans, who are permanently changing as a part of a larger metasystem. Maybe the Singularity has already taken place in accordance with the 60-year-old definition. Whether we put it in such a way or not will not affect reality. Of course, for humans, the fate of human life does matter, but this is difficult to predict. What we can say is that universal evolution will continue, and metasystem transitions will take place, leading to cybernetic systems of much greater complexity than that of a single human without tools.
4.3. We Have a Choice
Universal evolution has lasted for billions of years. Its laws of metasystem transitions are not rigid, but they are objective. Evolution happens independently of our desires. Of course, humans are far more sentient beings than DNA molecules or single neurons. It seems we can choose. But can we choose to stop universal evolution? Hardly. Different people have different opinions on how much we can shape evolution. Corporations and countries have their own interests. It is difficult to imagine that the development of all Singularity technologies (or, rather, of all technologies) will be prohibited in all countries and will not be pursued by anyone in the world.
Similarly, people favor some scenarios over others because they like them better. For example, transhumanists prefer to talk about brain uploading, considering AGI not too relevant or even a threat. I do not try to assess the relative likelihood of these two scenarios, but simply compare them, since they belong to the same paradigm (executing intelligence on computers). Modeling natural neurons on computers requires enormous computing resources. The computational resources needed for a human-level AI will be available much earlier than those required for executing an uploaded brain in real time. To reduce this overhead, we would need to understand precisely how to abstract away all the biochemical and physical details (e.g., 3D protein folding, an extremely difficult task, which is needed to model the gene expression necessary for memory consolidation). Thus, we would already need a detailed model of (human) intelligence to do this.
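An order-of-magnitude sketch of this argument (the figures below are rough estimates of the kind discussed in the whole-brain-emulation literature, used here only for illustration):

```python
import math

# Rough, illustrative orders of magnitude (estimates, not measurements):
spiking_emulation_flops = 1e18    # real-time emulation at the spiking-network level
molecular_emulation_flops = 1e25  # emulation tracking biochemical detail
abstract_agi_flops = 1e16         # hypothetical functional human-level AI

# With hardware capacity doubling every ~2 years, each factor of 10 in
# required compute costs about 2 * log2(10) ~ 6.6 years of waiting.
gap = math.log10(molecular_emulation_flops / abstract_agi_flops)
print(f"~{2 * gap * math.log2(10):.0f} years between the two thresholds")
```

On these assumptions, the hardware threshold for a functional AGI is crossed roughly 60 years before that for a biochemically detailed emulation, which is the quantitative core of the point above.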
AGI as a Singularity technology will also have a shorter doubling time, because knowledge of its design principles will enable easier self-optimization or extension with additional modules, sensory modalities, and so on. Thus, whether we want this or not, AGI will emerge earlier and evolve faster than brain uploading or whole-brain emulation (if either of these is ever possible).
Governments, corporations, scientific societies and others can influence the speed of development of different technologies through financial support or restrictions, but this does not affect the inherent objective properties of these technologies, which play a major role in the pathway followed by universal evolution. Predictions of the plausibility of different scenarios should be based on a detailed comparison of the properties of different technologies and their possible mutual influence. We need to assess which technology is expected to appear earlier and develop faster, how this will influence the development of other technologies, and so forth. This does not depend on our desires or preferences.
4.4. Artificial Superintelligence
As was mentioned, the Singularity is frequently associated with the creation of artificial superintelligence (and even justified by it; see, for example, [19,20]). But this is also the source of criticism of the concept of Singularity itself [5].
The textbook example of a computer-based AI that designs new, faster computers and runs on them to design faster computers faster, and so forth, is just an illustration of the concept of an “intelligence explosion.” However, any other Singularity technology or set of technologies can be substituted. For example, humans use computers to conduct genetic research and to improve computers, resulting in both smarter humans and faster computers, accelerating both directions of research with positive feedback.
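The explosive character does not require a single self-improving technology; any cross-coupling produces it. In the simplest linear caricature (my illustration), with h standing for human (bio-enhanced) capability and c for computing capability:

```latex
\frac{dh}{dt} = \alpha\, c, \qquad \frac{dc}{dt} = \beta\, h
\quad\Longrightarrow\quad
h(t),\, c(t) \;\sim\; e^{\sqrt{\alpha\beta}\,t},
```

so both quantities grow exponentially at a rate set jointly by the two couplings, even though neither accelerates itself directly; a super-linear coupling would again give finite-time divergence, as in the Introduction.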
In this connection, I would like to make two claims:
- the concept of Singularity understood as a sequence of accelerating metasystem transitions does not depend on the idea of superhuman strong AI, and can be defended independently;
- the idea that superhuman general AI can be created in a few decades is justified by evidence of the doubling times of different Singularity technologies.
One might think that the second claim says the same thing as the above-mentioned Wikipedia article [3]. But this is not really the case, because the causal relations are different. If one simply says that “the creation of AGI will lead to the Singularity,” then, if we call the possibility of the creation of AGI into question, we will doubt the coming Singularity even more.
On the contrary, we can substantiate the concept of Singularity independently of our assumptions about AGI, so even if we lean towards a negative answer to the quite controversial question about the possibility of AGI based on digital computers, this will not affect the plausibility of the concept of Singularity. Then, we can provide arguments that AGI (or, rather, non-human superintelligence) is most likely to emerge first. This is an independent (and weaker) claim, the possible fallacy of which does not affect the first claim.
Indeed, we have seen that the concept of Singularity can be introduced independently of the concept of artificial superintelligence. The necessity for superintelligence to be artificial is an additional, independent premise. Also, if superintelligence is not posited to require individual consciousness or strong integrity, then one can claim that such superintelligence has already been here for a long time and is constantly becoming smarter and smarter (i.e., many tasks that were impossible for a mind armed only with pen and paper have become doable with current technology), and there is no reason to believe that this process will suddenly terminate.
However, purely artificial superintelligence is also possible. There are no fundamental restrictions preventing this, especially if we consider not only existing computers but also possible future computers, which may be based on other (possibly now unknown) physical processes. Even opponents of Strong AI such as John Searle and Roger Penrose addressed their criticism only to digital computers and not to all possible computing devices in principle. One can also add (e.g., [19]) that artificial superintelligence need not necessarily be a Strong AI, but can be just a general AI, to which most criticism (based on subjective aspects of human intelligence such as consciousness, qualia, etc.) is not applicable.
One may regret that progress does not enhance all the components of the human mind to an equal degree. It is curious to note that the components least affected are those that may be the most difficult to reproduce with computers: emotional intelligence, sense of humor, and so forth. I will not try to dispel these doubts here, but simply note that there are different opinions on this topic, and that theories of artificial creativity, curiosity, and fun exist (e.g., [21]). My main point here is that this is not a reason to deny the likelihood of progress per se. One can complain about the one-sidedness of this progress. One can also argue that it should not be called progress. However, this does not negate the fact of (hyper)exponential technological growth and, consequently, the concept of Singularity. It can also be posited that, for further progress, a Strong AI (possessing all human qualities) might not really be necessary.
Thus, it is unscientific to claim that artificial general superintelligence in any form is strictly impossible and to disprove the concept of Singularity on this basis. However, we should also not claim that AGI, especially one based on digital computers, is an inevitable step towards the Singularity.
Although I personally do believe that AGI can be created on the basis of digital computers and that it is the most likely step towards the Singularity due to its shortest doubling time, this is really a belief that might be false, so I neither want to defend it here, nor do I want the controversy surrounding it to cast a shadow on the concept of Singularity.
5. Conclusions
One might choose to define a scenario involving the creation of autonomous artificial superintelligence as the Singularity, while others could define the Singularity as any scenario involving the creation of any kind of superintelligence. Such discrepancies can be a source of controversy. Further, we can understand in different ways what “artificial” or “superintelligence” means. We should not argue about definitions, but should be precise in what we claim.
Here, I have tried to disentangle two types of claims which can be defended independently, namely, the claims about the character of technological progress (or, rather, of universal evolution), and the claims about artificial intelligence.
I do not defend claims about AI here (although I have found it necessary to mention some of them), and mainly focus on what we can say about the Singularity, namely: some metasystem transition will most likely take place within a certain time range, and the resulting metasystem will demonstrate exponential growth of its complexity with a doubling time of less than half a year (implying that its hardware will not be limited to biological components), exceeding the complexity of existing cybernetic systems within a few decades. Most likely, the next metasystem will be based on exponential change in human culture (although this does not mean it cannot also involve an artificial superintelligence). One way or another, further metasystem transitions will take place, although their growth rate will start to decelerate at some point.
Will this future metasystem transition be a Singularity? It depends on definitions, and on which scenario takes place, which is difficult to predict. Thus, it is useless to argue about whether the Singularity as a specific event will occur and (if yes) when. Strictly speaking, the Singularity is a virtual time point at which the simplest extrapolation of the curve of growing complexity hits infinity, a point that will never really be reached. However, all models in science describe reality approximately, and they should not be criticized for this. Behind the concept of Singularity is the real phenomenon of accelerating universal evolution, which should not be discarded just because the Singularity is a very simple predictive model that does not exhaust the phenomenon. Any criticism should be addressed to the use of the model, independently of the specific scenario to which it is applied.