The structure of the global catastrophe


Chapter 19. Multifactorial scenarios




Above we have compiled, as far as possible, a full list of one-factor scenarios of global catastrophe. Other versions of this list exist, with insignificant differences, for example in N. Bostrom's article and in J. Leslie's book. Now we should ask whether scenarios are possible in which mankind perishes not from any single cause, but from some combination of factors - and if so, what their probability is and what these factors might be. Can it be, say, that one continent is exterminated by superviruses, another by nanorobots, and the third dies out from hunger?



Integration of various technologies as a source of risk situations

The very fact of the fast development of powerful technologies creates a special zone of risk. Technologies tend to promote one another's development: the development of computers helps to calculate the properties of new materials, and new materials allow the creation of even more productive processors for computers. In modern technology this is known as NBIC convergence, which stands for nano-bio-info-cogno and denotes the process of the merging of nanotechnology, biotechnology, computer technology and research on the human brain. This merging occurs through the exchange of methods and results, and also through projects uniting elements of these technologies - for example, when the coats of viruses are used as elements of nanorobots, or when genetic engineering produces mice with fluorescent markers in the neurons of the brain for the study of the processes of thinking. The convergence of technologies increases in the course of progress, and a quickly developing kernel of technologies (NBIC) emerges, each of which is capable of helping each of the others. At the same time these technologies can contribute to nuclear and space technologies but receive no return contribution from them; because of this no positive feedback loops arise, and those technologies fall behind the mainstream of technological progress. The basis of the NBIC technologies is miniaturisation. The convergence of the NBIC technologies leads toward a certain peak, which is probably strong artificial intelligence.

Similar integration repeatedly took place in the past in the creation of weapons. There, technologies did not so much help one another's development as create essentially new units. For example, the aeroplane with a machine gun, a camera and radio communication served as scout and fighter in the First World War. Or the intercontinental ballistic missile, which united achievements in nuclear weapons, rocketry and computers, each of which separately would have been a thousand times safer: a nuclear bomb without means of delivery, a rocket with a conventional warhead, or a rocket without a guidance system. (Here it is worth noting that the present reduction of nuclear arsenals is compensated by the growth in their accuracy, which raises their destructive force.)

Available forecasts of the future, and science fiction, often describe the future as the present plus one new feature. Forecasts of global risks sin in the same way: they describe the appearance in the world of some one dangerous technology and then consider the consequences of that event - for example, how the world will change if developed nanotechnology appears in it. It is obvious that this approach is untenable, since future technologies, owing to their joint development, will appear simultaneously and enter into complex interactions with one another.

Convergence, moreover, takes place both in parallel and in sequence. Parallel convergence takes place when several new technologies unite to create a qualitatively new product, for example an intercontinental missile with a nuclear warhead. Sequential convergence concerns a chain of events in which some factors trigger others, for example: act of terrorism - economic crisis - war - application of biological weapons.

Pair scenarios

Let us consider, to begin with, hypothetical pair scenarios of global catastrophe - in other words, different variants of the mutual amplification of the major factors taken in pairs. It is clear that in reality they will all operate together, but these pairs can become "bricks" (or, more likely, edges in a graph) for more complex forecasting. We give an outline description of such interactions, essentially as a brainstorm. No pair scenario here should be perceived as a definitive forecast - not because it is too fantastic, but because it does not consider the influence of several other factors.



AI and biotechnologies

Sequential convergence (a chain of events):

1. Genetically modified superhumans will possess superintelligence, which will allow them to create genuine computer AI.

2. The AI will create a supervirus as a weapon.

3. People will die out from the virus, and robots will have to be introduced to replace them.

Parallel convergence: the appearance of new products on the basis of both technologies:

4. Biological assembly of superdense chips will sharply accelerate the growth of AI.

5. Special viruses will install AI-created programs in people's brains.

6. AI will be assembled directly from biomaterials - neurons, DNA.

AI and a superdrug

Sequential scenarios:

1. For example, the AI will want to please people and will create such a drug. Or the AI itself will be such a drug (virtual reality, the Internet, controlled dreams).

2. As people are destroyed by the superdrug, it will become necessary to replace them with robots.

3. Or, on the contrary, it will be necessary to think up some super-television to calm the people left without work because of AI.

4. The superdrug will be a weapon of hostile AI against people.

Parallel convergence:

5. The AI will think up a complex combination of magnetic fields creating a precise narcotic effect in the brain.

6. Connection of AI to a human brain through a neuroshunt will essentially strengthen the capabilities of both: AI will get access to human intuition, and humans to unlimited memory and speed of thought.

Superdrug and biotechnologies

1. The manufacture of dangerous drugs will become as simple a business as growing a tea mushroom.

2. People's demand for drugs will lead to the blossoming of a black market in biotechnologies, which in passing will also make the manufacture of biological weapons of mass destruction accessible.

3. To wean people from the superdrug, a special bioweapon attacking the brain will be sprayed.

4. A certain infectious illness will have among its symptoms euphoria and the aspiration to spread it.

Superdrug and nanotechnology

A stronger effect would be produced by direct stimulation of areas of the brain by microrobots. Nanorobots will create systems that carry information out of the brain (neuroshunts), which will allow the creation of even more powerful tools of entertainment. (It is interesting that the nanotechnology development programme in Russia affirms that the market for such devices will reach billions of dollars by 2025.) On the whole, however, the same scenarios operate here as with biotechnologies.



AI and nanotechnology

1. Nanorobots will allow the details of the structure of the human brain to be read, which will accelerate the development of AI.

2. AI will help to develop and release superefficient nanorobots.

3. Progress in nanotechnology will support Moore's law long enough for computers to reach a productivity many times surpassing that of the human brain at the lowest price.

4. Nanorobots will themselves be physical carriers of AI - the result would be something between an intelligent ocean in the spirit of Lem's Solaris and the grey goo scenario.

5. Hostile AI uses nanorobots as a weapon to establish its power over the Earth.



AI and nuclear weapons

1. AI will think up how to make nuclear weapons more simply, faster and cheaper.

2. A scenario in the spirit of the film Terminator: AI uses nuclear weapons to get rid of people.

3. People use nuclear weapons to try to stop an AI that has got out of control.



Nano and biotechnologies

1. Living cells will assemble the details of nanorobots (synthesising them in special ribosomes).

2. "Animats" will appear - artificial life containing elements both of living matter and of nanorobots.

3. Only nanorobots will give definitive protection against biological weapons.



Nanotechnology and nuclear weapons

1. Nanotechnology will make it possible to simplify the separation of isotopes and the design of nuclear weapons.

2. Attempts to struggle with swarms of nanorobots by means of nuclear strikes will lead to additional destruction and contamination of the Earth.

Nuclear weapons and biotechnology

1. Nuclear weapons can be applied to destroy dangerous laboratories and to sterilise infected areas.

2. Biological developments can be applied to extract uranium from sea water and to enrich it, to separate plutonium from spent fuel, or to decontaminate territory.

3. Nuclear war occurs in a world heavily infected with biological agents. The war makes an adequate rate of production of vaccines and other defences impossible, and simultaneously leads to intensive migration of people. Resources which could go to protection against microbes are thrown into protection against radiation. Many people are weakened.



Nuclear weapons and supervolcanoes

By means of a hydrogen bomb it is possible to provoke the explosion of a supervolcano or a strong earthquake - or, on the contrary, to direct its energy along a bypass channel.



Nuclear weapons and asteroids

1. By means of nuclear weapons it is possible to deflect an asteroid away from the Earth, or, on the contrary, to direct it toward the Earth.

2. The fall of an asteroid can be perceived as a nuclear attack and lead to the accidental start of a nuclear war.

3. An asteroid can also destroy a nuclear power station and cause contamination.



AI and system crisis

1. The application of supercomputers will create a certain new type of instability - fast and poorly understood (in the military sphere, in the economy, in the field of forecasting and futurology).

2. War, or the threat of war, will result in an arms race in the course of which the most destructive and dangerous AI will be created.

3. The whole world turns out to be dependent on a global control system, which is then brought down by hackers, or is given a command to damage itself.



Nuclear weapons and system crisis

1. Even a single explosion of a nuclear bomb in a city could bring down the world financial markets.

2. Conversely, a collapse of the markets and a financial crisis can lead to the fragmentation of the world and strengthen the temptation of forceful solutions.

Nuclear weapons and climate

1. It is possible to cause a nuclear winter purposely by exploding a powerful nuclear charge in a coal seam, which is guaranteed to throw a large quantity of soot into the atmosphere. If the theory of "nuclear winter" resulting from attacks on cities is true, such an action would be tens or hundreds of times more effective in terms of soot yield.

2. It is presumably also possible to provoke irreversible global warming by means of correctly chosen places for nuclear strikes. For example, it is known that after a nuclear winter a nuclear summer is possible, when soot settles on glaciers, heating and melting them. Explosions of bombs in deposits of gas hydrates under the ocean floor could likewise trigger a chain reaction of their liberation.

3. Conversely, it is possible to regulate the climate by provoking volcanic emissions of sulphur and ash with nuclear charges (but this already belongs to chains of three elements).



Studying global catastrophes by means of models and analogies

A global catastrophe of a technological civilisation leading to human extinction is a unique phenomenon which has never happened in history, and this complicates its research. However, we can try to pick out a number of other events which are similar to a global catastrophe in some aspects, and thus to collect a number of models. Such a sample is rather subjective. I suggest taking as analogies large-scale, complex, thoroughly studied and well-known events. These are:

The extinction of the dinosaurs

The extinction of the Neanderthals

The fall of the Roman Empire

The disintegration of the USSR

The crisis on Easter Island

The collapse of the American Indian civilisations after the discovery of America by Columbus

The explosion at Chernobyl

The sinking of the Titanic

The explosion of a supernova

The appearance of mankind, from the point of view of the biosphere

The beginning of the First World War

Cancer as an illness

These events can be likened to a global catastrophe in different aspects. In some of them intelligent beings participate; in others whole species die out irreversibly; in others complexly organised systems crash; in others complex technologies are involved. On each of the named themes there is a great deal of literature, and it is rather contradictory. In each case there is a set of hypotheses explaining everything through some one cause - but since there are so many such hypotheses, no single cause is really unique. Rather the opposite: what is common to all the named variants is that there was no one cause - the more we penetrate into the details, the more distinguishable becomes the set of factors which led to the end and which interacted in complex ways. Books have been written about each of these catastrophes, and the range of opinions is considerable, so I will not try to retell all the possible views of their causes, and I refer the reader to the corresponding literature, from which one may single out the recent book "Collapse" by Diamond. On the extinction of the dinosaurs it is worth reading the corresponding chapter in K. Eskov's book "History of the Earth and life on it".

What is common to all these cases is that a complex set of causes, both external and internal in character, was present. The integrated nature of these causes creates problems when we try to answer questions in the spirit of "Why did the Roman Empire fall?" - and that is the most important lesson. If we face a catastrophe which will ruin human civilisation, it will most likely occur not from any one cause, but owing to a complex interaction of different causes at different levels. Hence, we should try to create models of the same level of complexity as those used to describe the large catastrophes that have already happened.

First, it is important to notice that the main role in extinctions and catastrophes was played by factors constituting the basic properties of the system. (For example, the dinosaurs died out not from an outwardly accidental cause - an asteroid - but from their most defining property: being huge and egg-laying, they were vulnerable to small predatory mammals. The asteroid was only the occasion that opened a window of vulnerability, and steadier kinds passed through it, for example the crocodiles. A human falls ill with cancer not because he had a wrong mutation, but because by his nature he consists of cells capable of division. If not for the specific character of American Indian culture - without the wheel, horses, or progress - it would not have been Columbus who came to them, but they who came to Columbus.)

The idea that the defining properties of a system set the type of catastrophe which can happen to it makes one wonder what the defining properties of the human species and of modern civilisation are. For example, an aeroplane by definition flies, and this sets its most typical catastrophe: falling. For a ship the most typical risk is to sink. But much more rarely do ships crash and planes sink.

So, recognising that none of these catastrophes was caused by any one simple external factor, but rather had its causes in the defining properties of the system itself (which were, accordingly, "smeared" over the whole volume of the system), we can draw an important conclusion: one-factor scenarios of global catastrophe are not as dangerous as the "defining properties of systems" and the system crises connected with them. A peculiarity of a system crisis is also that it automatically involves the whole population, and needs no universal "delivery systems".

On the other hand, we could say that all these factors are unimportant, since all empires fall sooner or later anyway, species die out, and beings perish. But that information is useless for us, since it says nothing about how to make it happen "late" instead of "early".

Secondly, though the internal contradictions of a system could ripen for a very long time, external and sufficiently random factors were needed to push it to destruction. For example, though the ecological niche of the dinosaurs was steadily shrinking by the very logic of that process, the fall of an asteroid and the eruption of volcanoes could push the process further. Or the glaciation that pushed the Neanderthals to extinction, simultaneously with pressure from the sapiens. Or the Chernobyl disaster, which undermined the USSR at the moment of its greatest vulnerability. If these external random factors had not occurred, the system might not have passed into another channel of its development.

Thirdly, in all cases where intelligent management was involved, it turned out, one way or another, to be not so intelligent: it made the decisive mistakes leading to catastrophe. Besides, catastrophe is often connected with the simultaneous "accidental" coincidence of a large number of diverse factors which separately did not lead to catastrophe. Finally, pathological self-organisation, in which the destructive process is amplified at each stage of its development, can be characteristic of a catastrophic process.

It is also interesting to study how successful mankind has been in creating systems which have never suffered catastrophes - that is, in whose design the trial-and-error method was not used. Alas, we are compelled to exclude many systems which were created as catastrophe-free but have as a result led to catastrophes: nuclear reactors, the Shuttle spacecraft, the supersonic Concordes. The record of maintaining the safety of nuclear weapons looks better, but here too there have been some incidents when the situation was, as they say, on the verge. Further study of analogues and models of global catastrophes on a set of examples seems productive.

The inevitability of reaching a steady state

It is possible to formulate the following plausible statement: most likely, mankind will soon pass into a condition in which the probability of global catastrophes is very small. This will occur in the following cases:

We will understand that none of the global catastrophes has a high probability under any conditions.

We will find a way to control all risks.

The catastrophe will nevertheless occur, and there will be nobody left to perish.

We will reconcile ourselves to the inevitability of global catastrophe as a part of the natural vital process (as, for example, Christians have waited for Doomsday for the last two thousand years, and even rejoiced at its nearness).

However, for now we observe the opposite phenomenon: the capability of people to create means of destruction, and hence the annual probability of global catastrophe, constantly grows - and grows faster than the population and the means of protection. If we plot this curve of growth, it too will have a certain peak. We may take for comparison the scale of victims of the First and Second World Wars. We see that over 25 years the number of victims of the maximum realised destruction grew approximately 3.6 times (if we take estimates of 15 and 55 million victims respectively). This outpaces population growth. With the development of nuclear weapons, however, this acceleration went even faster, and by the 1960s-70s it was really possible to destroy hundreds of millions of people (in a real war not the whole population of the Earth would have perished, since exterminating everybody was not the aim). So, if we take the rate of acceleration of destructive force as 3.6 per 25 years, we get an acceleration of about 167 times per hundred years. This means that by 2045 a war will be capable of destroying 9 billion people - comparable to the total population of the Earth expected at that moment. This figure is close to the expected date of the technological Singularity around 2030, though it is obtained in a completely different way and using data only from the first half of the twentieth century.
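A minimal sketch of this back-of-the-envelope extrapolation in Python, using only the casualty estimates and the 25-year step assumed above (the rounding of the growth factor to 3.6 follows the text):

    # Extrapolating the growth of maximum realised destructive capacity
    # from the World War casualty estimates assumed in the text.
    ww1_victims = 15e6                            # First World War estimate
    ww2_victims = 55e6                            # Second World War estimate
    growth_per_25y = ww2_victims / ww1_victims    # ~3.7; the text rounds to 3.6
    growth_per_century = 3.6 ** 4                 # four 25-year steps: ~168x
    capacity_2045 = ww2_victims * growth_per_century
    print(f"Destructive capacity a century after 1945: "
          f"{capacity_2045 / 1e9:.1f} billion people")   # ~9.2 billion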

Therefore we can reformulate our thesis: the growth of the probability of risk factors cannot proceed forever. It can also be formulated differently: the means of preserving stability must surpass the means of self-destruction. If the means of destruction prove more powerful, the system will fall to a level where the forces of ordering are sufficient, even if that level is a burnt-out desert. Taking the time factor into account, we can say that the means of maintaining stability must grow faster than the means of self-destruction. Only in this case will the annual probability of extinction fall, and its integral over time not tend to unity, which would mean the possibility of the infinite existence of mankind, that is, the realisation of the task of its indestructibility.
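This last condition can be written out explicitly. A minimal formalisation, under the simplifying assumption of independent annual extinction probabilities p_t:

    P(\text{indefinite survival}) \;=\; \prod_{t=1}^{\infty} (1 - p_t) \;>\; 0
    \quad\Longleftrightarrow\quad
    \sum_{t=1}^{\infty} p_t \;<\; \infty

A constant annual risk, however small, drives the product to zero, that is, extinction is certain in the long run; only a sufficiently fast decay of p_t (for example p_t ~ 1/t^2) leaves a nonzero probability of infinite existence.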



Recurrent risks

Any global risk listed in the first half of this text becomes much more dangerous if it arises repeatedly. There is a big difference between a single leak of a dangerous virus and thousands of leaks of different viruses occurring simultaneously. If one virus with a lethality of 50% leaks and spreads, we will lose up to half of the population of the Earth, but this will not interrupt the development of human civilisation. If during the life of one generation there are 30 such leaks, then most likely only one human will remain alive. If there are a thousand of them, nobody is guaranteed to survive, even if the lethality of each separate virus is only 10-20% (provided that all these viruses spread over the whole planet rather than settling in single areas). The same can be said about the fall of asteroids: bombardment by a long series of tens of medium-sized asteroids will be more lethal for mankind than the fall of one big one.
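A minimal sketch of the arithmetic behind these figures, under the simplifying assumption that each leak independently kills the same fraction of the population still alive (the 6.6 billion baseline is assumed for illustration):

    # Expected number of survivors after n independent pandemics,
    # each killing the fraction `lethality` of those still alive.
    def survivors(population: float, lethality: float, n_leaks: int) -> float:
        return population * (1.0 - lethality) ** n_leaks

    population = 6.6e9                       # rough world population

    print(survivors(population, 0.5, 1))     # one 50% virus: ~3.3e9, half the planet
    print(survivors(population, 0.5, 30))    # thirty such leaks: ~6 people
    print(survivors(population, 0.1, 1000))  # a thousand 10% leaks: ~1e-36, nobody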

Certainly, one must take into account the ability of mankind to adapt to any one threat. For example, it is possible to succeed in opposing absolutely all biological threats - if that is the only class of threats. However, the possibilities of creating universal protection against global risks are limited. After September 11th, the USA began to compile a list of vulnerable objects and quickly understood that it is impossible to protect all of them.

As the development of technologies proceeds jointly, we cannot count on any one key technology arising while all the others remain at the same level as now. (Though it is usually just such an image that is created by science fiction novels and films; it is an example of "bias of thinking caused by a good story".)



Global risks and the problem of the rate of their increase

Global risks are a game of outrunning: each new technological achievement creates new global risks and reduces old ones. The exploration of outer space has reduced the risk of an accidental collision with an asteroid, but has created the possibility of organising one purposely. The distribution of nanorobots will reduce the threats from genetically modified organisms, but will create an even more dangerous weapon. Artificial intelligence will solve the problem of control over other dangerous technologies, but will create a monitoring system any failure of which can be mortally dangerous. The development of biotechnologies will give us the chance to defeat all the illnesses that came before - and to create new ones.

Depending on which technologies arise earlier or later, different forks in the further development of a civilisation of the technological type are possible. It is important, besides, whether new technologies have time to solve the problems created at the previous stages of development - above all the problem of the exhaustion of the resources consumed by previous technologies, and also the elimination of the risks those technologies created.

Earlier, mankind passed through the whole set of situations possible at a given stage of its historical development - for example, the whole set of interactions of a big state with nomads. Now we find ourselves, apparently, in a situation of genuine historical alternative: if there is the one thing, there will be no other at all. Either a powerful AI supervising everything will be created, or everything will be eaten by grey goo. Either we become a space civilisation, or we return to the Stone Age.

A global risk arises owing to the speed of the process that creates it. With a slow process of distribution of something, it is possible to have time to cope: to prepare proper bombproof shelters, to grow a vaccine. Hence, a genuine global risk can be distinguished by the rate of its development (Solzhenitsyn: a revolution is defined by its rate). This rate will be stunning, because in the case of a global risk people will not have time to understand what is happening and to prepare correctly. However, for different classes of events, different speeds will be stunning: the more improbable the event, the smaller the speed that stuns. The USSR seemed something so eternal and firm that even a crisis stretched over many years, the crash of the Soviet system, seemed stunning. A system crisis in which the point of maximum catastrophe constantly moves (like a fire jumping from one object to another) possesses a much greater stunning potential.

At the same time it is necessary to reckon with the ability of the shockingly perceived events of a system crisis to create a wrong impression of themselves, perhaps in the form of a "future shock", and accordingly to cause a wrong reaction to them which strengthens them even further. Certainly, some will understand the essence of events at once, but stunnedness means the disintegration of a unified picture of events in society, especially among the authorities. Therefore there will be a blindness, and the voices of "Cassandras" will not be heard - or will be understood incorrectly. Faster processes will supersede slower ones, but attention will not always have time to switch to them.


