6. The mistaken belief that no one in the world is working on such an "unpromising" topic as AI
Several firms and individuals are known to be actively working on the creation of universal AI: Numenta, Novamente, SIAI, a2i2. For a more detailed review of AI-creation projects, see the chapter on the risks of AI.
7. The mistaken belief that AI means various specific applications, such as image-recognition techniques
Here a substitution of the thesis occurs. In this book, "AI" means universal Artificial Intelligence. The fact that some people market their products under the "AI" brand, even though these products are not really AI, does not mean that AI is impossible. In the English-language literature the term AGI (Artificial General Intelligence) is widely used precisely to eliminate this ambiguity; the term "artificial reason" has also been proposed.
8. Anthropomorphism
We unconsciously humanize AI in many different ways, and this shapes our expectations. See Yudkowsky's article in the appendix for more detail. In particular, we perceive AI as an object that is located somewhere, has clear boundaries, goals, and so on.
9. The mistaken belief that it is enough to cut off an AI's power supply in order to stop it
This claim rests on the assumption that the AI's programmers will know when the process has gone wrong, which is obviously incorrect. The second incorrect assumption is that the AI is local. The third is that the AI cannot protect its power supply, either by disguising itself or by escaping into the network. The fourth is that the programmers cannot be in collusion with the AI (and/or be deceived by it).
10. The mistaken belief that, even having spread across the Internet, an AI could not influence the external world in any way
Incorrect: on the Internet one can earn money and commission any actions in the external world. Contracts with people, blackmail, and direct control of mechanisms are also possible.
11. The mistaken belief that AI cannot have desires of its own and therefore will never harm humans
For an AI to start working, certain tasks will be set for it. In the course of carrying them out, it may form various subgoals. These subgoals can be very dangerous if correct restrictions on them have not been formulated. The best-known example: an AI is instructed to prove the Riemann hypothesis and, for the sake of this goal, converts all the matter of the Solar System into computers.
12. The mistaken belief that AI will go off to master space, leaving the Earth to humans
This is wishful thinking, and there is already the bitterness of capitulation in it. There are no grounds to think that an AI is obliged to do so, or that it actually will.
13. The mistaken belief that any AI is an intelligence, therefore it possesses goal X (substitute as needed), and this is a good thing
Intelligence is a tool that can be directed toward achieving any goal. People use the most powerful intelligence to achieve the primitive goals characteristic of the alpha male of a monkey troop: to drive off competitors, win the favor of females, obtain food; and for the sake of all this, poems are written, theorems are proved, conspiracies are woven. So the presence of intelligence does not imply any unambiguous goal. (And to think so means to pass from facts to obligations, which always contains a logical error.) Moreover, an abstract goal (to study the world, for example) cannot be a concrete good for all people, for everything depends on how this goal is realized.
14. The mistaken belief that modern computers are very limited in their capabilities, so AI will appear only in the distant future, in tens or hundreds of years
Since we do not know what AI is, we do not know what exactly must be invented to create it, and therefore we cannot make precise time forecasts. AI could arise even tomorrow: the company a2i2 plans to complete its project in 2008, and other projects aim for 2011. The existing pace of progress in building powerful computers is sufficient to create, in the coming years, computers close in performance to our brain, and there are no unavoidable reasons why the growth of computing power should slow down.
15. The mistaken belief that because progress in understanding the workings of the brain is very slow, AI itself will also work very slowly
The slowness of the preparatory processes does not imply slowness of the process itself. Yudkowsky, in the article you will find in the appendix, refutes this with the example of the difference between the time it took to develop nuclear weapons and the speed of the processes inside a bomb.
16. The mistaken belief that a human is capable of X (substitute as needed), which an AI can never do, and therefore AI poses no threat
In different interpretations, "X" may be: insight, intuition, rapid pattern recognition, the experiencing of feelings, awareness, love. However:
1. We do not know what an AI can or cannot do until we have built one.
2. An AI can be dangerous even if it cannot do X. For example, it can beat us at chess, on the stock market, or in any other game that is vital to us.
3. If there is some task that only a human can solve, an AI can hire or subjugate people to solve it. For example, a modern state hires scientists and gives each one a fragment of the task of developing, say, a nuclear bomb.
17. The mistaken belief that AI is impossible because it thinks algorithmically, while humans think non-algorithmically
The requirement of algorithmicity is not necessary for creating AI. Genetic algorithms, quantum computers, the implantation of neurons into chips, and randomized methods make the requirement of algorithmicity conditional. The question of how a human thinks is still open. Recently a computer was taught to play poker better than humans (Texas hold'em, and poker is considered precisely the game in which intuition is especially important) and to trade on the stock market better than humans (on models). This means that real people will lose money when facing computers on the stock exchange or in online tournaments. For them, probably, the question of whether the computer possesses consciousness or is merely a calculator matters less than how much money they have lost. If a computer learns to recognize images of the external world, it can just as effectively win arguments, pursue you through the woods, shoot at targets, and make drawings.
A human finds it pleasant to think that he is better (smarter, more perfect, etc.) than a computer because he has intuition. But precisely for this reason we should treat this idea with suspicion, since it may be driven by emotions. We cannot build a safety system on a statement merely because it pleases us. And what if we underestimate the power of algorithms? What if there exists an algorithm that works more powerfully than our intuitions?
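The randomized methods mentioned above can be illustrated with a minimal sketch: a toy genetic algorithm evolving a bitstring toward all ones (the classic "OneMax" exercise). All parameters here are illustrative, chosen only to show how random variation plus selection searches without a step-by-step recipe for the answer:

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # count of ones; the quantity being maximized

def mutate(bits, rate=0.05):
    # flip each bit independently with small probability
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)  # one-point crossover
    return a[:cut] + b[cut:]

# random initial population
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # truncation selection; best survive unchanged
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), "/", LENGTH)
```

No line of this program "knows" how to produce the answer; improvement emerges from random variation filtered by selection, which is exactly why the algorithmicity objection loses its force.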
18. The mistaken belief that AI will be about as smart as a human
There is a misconception that AI will possess approximately human abilities, and that in the future a society consisting of people and "robots" will form. However, the set of possible human minds is most likely only a small part of the set of all possible minds. It is therefore improbable that an AI, having reached the human level, will stop there. By increasing its speed of operation, connecting it with thousands of other AIs, and adding computer-grade infallibility and memory, we could strengthen a human-level AI a thousandfold without making any fundamental discoveries.
19. The mistaken belief that AI will be an employee equal in rights to a human, with the same capabilities and rights
Here AI is confused with an individual robot. If its capabilities infinitely surpass human ones, then this "equality" will be strongly to people's detriment, since in any equal competition it will beat people. Besides, it may have its own notions of equality.
20. The mistaken belief that there will be many AIs
When we say "a virus is spreading over the Internet," we mean one virus, though it has many copies. When we speak of the Internet, we mean one Internet. When we speak of the state (while being inside it), we also mean one state. Likewise, there will be one AI, though it may have many copies and manifestations. Even if there are several kinds of AI, only one of them will be the principal one. That is, most likely we will face not a multitude of separate intelligent machines but a single system of superhuman scale; examples of such systems are science, the state, and the Internet.
21. Differences in the understanding of what intelligence actually is
Probably, to give a correct definition of intelligence is already almost to create artificial intelligence. From the point of view of safety it is easier to give such a definition: AI is a machine capable of beating a human at any kind of activity (or even: at least at one kind of activity that is vital for humans, where by activity we mean the management of processes, that is, informational work). That is, we define AI through its ability to solve practically measurable problems. We set aside the questions of consciousness, free will, and creativity. This definition is in principle identical to Yudkowsky's definition of AI as a "powerful optimization process."
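The "optimization process" framing can be made concrete with a deliberately trivial sketch: a random hill-climber that steers a variable toward a target without anything resembling human-like reasoning. The objective function and all parameters are purely illustrative:

```python
import random

random.seed(1)

def f(x):
    # the quantity the process drives downward; its minimum sits at x = 3.7
    return (x - 3.7) ** 2

x = 0.0
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)  # random local perturbation
    if f(candidate) < f(x):                    # keep only improving steps
        x = candidate

print(round(x, 2))
```

Nothing here is conscious or creative, yet the loop reliably steers the world-state (here, a single number) toward its target, which is the sense in which "optimization process" is a safety-relevant definition of intelligence.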
22. The erroneous unambiguous identification of AI with a separate object
AI is defined by what it does (it effectively carries out an optimization process), but the notion that there must be an entity generating these actions can lead us into error. For example, the process of evolution in the Darwinian sense generates more and more effective solutions, yet this process has no center that sets its goals or that could be destroyed.
23. The mistaken belief that it is enough to hide an AI in a "black box" for it to become safe
If we have placed an AI in a "black box" (that is, completely isolated it) and then received the results of its work, a two-way information exchange has taken place, and the "black box" is no such thing. If we receive no information at all from the "black box," that is equivalent to not switching it on in the first place. The difficulty here is knowing that an AI has already arisen, so as to understand that it is time to place it in a "black box." Finally, an AI may crack the "black box" from within, for example by emitting radio signals or by reading current fluctuations in the power supply network.
24. The erroneous objection of the following sort: "In Japan there was already a project to create AI in the 1980s, and it failed, therefore AI is impossible"
In the 1880s there were several projects to build an airplane, and they failed. After that the opinion spread that the airplane was impossible. That is, several unsuccessful attempts with unsuitable means do not imply fundamental impossibility. Besides, the Japanese project did not collapse completely: other, simply less publicized AI projects grew out of it. However, this conspicuous failure has affected both public trust in such projects and researchers' inclination to promise improbable results.
25. The mistaken belief that it is enough to give an AI command X (substitute as needed), and everything will be fine
Command "X" can be: "love all people," "do no harm to people," "obey only me," and so on. But we cannot check how an AI will implement any command until we launch it. And once we have launched it, it may be too late.
26. The mistaken belief in the spirit of: "Once I achieve an effective implementation of AI, then I will think about its safety"
Incorrect. The effectiveness of an AI can be checked only by launching it on a task connected with the real world. If the AI gets out of control, it will be too late to think about safety. Some types of AI may be incompatible with standard safety measures, for example those based on genetic algorithms. Therefore safety measures must be built into the AI from the very beginning; they cannot be bolted on as an afterthought. In all other large projects, after all, safety is considered from the very beginning.
27. The mistaken belief in the spirit of: "It is improbable that our AI-creation project will get out of control"
There are many AI projects in the world and little knowledge of how to measure the probability of an AI's uncontrolled spread. It is enough to lose control over a single project. Besides, in the case where a programmer uses a strong AI for his own purposes, from his point of view this does not look like a revolt, but from the point of view of other people it is one.
28. The mistaken belief in the spirit of: "We need not worry about anything, because AI will solve all our problems"
Among advocates of powerful AI there is an opinion that certain future problems need not be solved, because when a powerful AI appears, it will find better and more precise solutions to them. However, before launching a powerful AI into the real world, we will have to set it a certain range of tasks, and in order to formulate correctly what we want and what we do not want, we must think this through well in advance.
29. The non-identity of abilities and intentions
See the cognitive distortion in the spirit of the "giant cheesecake fallacy" in Yudkowsky's article in this book. Its essence is that if an AI can do something, this does not mean that it will do it. If an AI can bake giant cheesecakes, it does not follow that the future world will be filled with giant cheesecakes. That is, we must not equate an AI's motives with its abilities.
Chapter 6. Specific errors arising in reasoning about the risks of nanotechnology
1. The mistaken belief that nanotechnology is impossible, since it is impossible to create mechanisms accurate to a single atom
This is not so: proteins exist, and they are mechanisms of the most varied kinds - valves, scissors, little motors - and in them the position of each atom is important and well defined.
2. The mistaken belief that nanofactories are safer than nanoassemblers
Nanofactories are macroscopic devices that manufacture nanoscale devices (for example, the photolithographic manufacture of microchips). Nanoassemblers are nanoscale devices capable of making copies of themselves. With one it is possible to make the other, and vice versa; that is, these devices are functionally isomorphic.
3. The mistaken belief that nanotechnology is so far away in time that there is no need to think about it
We are separated from the practical realization of nanotechnology only by missing knowledge. If we had it, we could assemble a DNA chain which, once launched in a bacterial cell, would make it possible to manufacture a remotely controlled nanoassembler.
4. Mistaken beliefs in the spirit of "Nanotechnology was invented only for money-laundering"
Since such an explanation can be applied to anything, it explains nothing. Even if someone launders money with the help of nanotechnology, this does not mean that nanorobots are impossible. The crash of the dot-coms does not mean that it is impossible to earn money on the Internet.
5. The mistaken belief that nanotechnology is connected only with materials science, finely dispersed materials, and nanotubes
Far from everyone thinks so, and development work on nanorobots is being conducted. An intermediate object between nanorobots and nanomaterials is chip lithography, which allows arbitrary mechanisms, including ones with moving parts, to be etched out of silicon - MEMS technology (for example, micro-pendulums for gyroscopes). The main progress of Moore's law comes from developing ever more precise nanotechnological printing of semiconductors.
6. The mistaken belief that nanorobots will be weaker than bacteria, because bacteria have had billions of years to adapt to the environment
This is no more true than the claim that "airplanes will be safer than birds, because birds have evolved over millions of years." Human artifacts usually surpass their biological counterparts in some one parameter (size, speed, and so on).
7. The mistaken belief that if nanorobots were possible, nature would already have created them
Nature did not create the wheel, yet the wheel is possible and effective. On the other hand, nature has created an analogue of nanorobots in the form of bacteria, which demonstrate the fundamental possibility of self-sufficient, self-reproducing microscopic devices.
8. The mistaken belief that nanorobots will not be able to reproduce in the natural environment
If bacteria can reproduce in nature, then nanorobots can too: after all, they can use every technique available to bacteria.
9. The mistaken belief that nanorobots in the environment will be easy to destroy with a bomb blast
For this one would need to know precisely where they are. If they have already penetrated a city, it will be impossible to bomb them. After all, infectious diseases are not fought with bombs.
10. The mistaken belief that nanorobots will consist of only a few atoms, which is impossible or of little functional use
The name "nanobot" is conventional and does not mean that a nanobot's length will equal a few nanometers. It can be 1 micrometer or more in length, capable of self-reproduction and of performing a multitude of functions, and still be invisible. In that case it will contain billions of atoms, or even more.
11. The mistaken belief that nanorobots will be stupid and inefficient, since no computer can fit inside them
Every cell of the human body contains DNA holding about 500 MB of data, on which up to a million operations per second are performed. This is enough to build a rather powerful computer. It shows us an achievable limit for the density of computation, though DNA computers will not necessarily be used in nanorobots. Moreover, nanorobots can join into local networks, multiplying their computing power many times over.
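The ~500 MB figure is a rough order-of-magnitude estimate, and it can be sanity-checked with simple arithmetic. The sketch below assumes a genome of about 3 billion base pairs at 2 bits of information per base (4 possible nucleotides); the result is somewhat larger than 500 MB but of the same order:

```python
# Back-of-envelope check of the "DNA holds ~500 MB" claim.
BASE_PAIRS = 3.0e9   # approximate human genome length (assumed round figure)
BITS_PER_BASE = 2    # 4 nucleotides (A, C, G, T) encode 2 bits each

total_bits = BASE_PAIRS * BITS_PER_BASE
total_megabytes = total_bits / 8 / 1e6   # bits -> bytes -> MB

print(f"~{total_megabytes:.0f} MB")      # prints ~750 MB
```

The point of the exercise is not the exact number but that a single cell's worth of DNA stores hundreds of megabytes in a volume far below a cubic micrometer, which is what bounds the achievable density of storage and computation.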
12. E. Drexler on possible objections to the feasibility of nanotechnology
Below I give an extensive quotation from E. Drexler, the originator of the idea of creating nanorobots, in which I highlight the titles of the sub-sections:
"Will the uncertainty principle of quantum physics make molecular machines unworkable? Among other things, this principle states that it is impossible to determine the exact location of a particle over any interval of time. It limits what molecular machines can do, just as it limits what anything else can do. Nevertheless, calculations show that the uncertainty principle places few significant restrictions on how well atoms can be held in their places, at least for the purposes that arise here. The uncertainty principle makes the locations of electrons quite fuzzy, and in fact this fuzziness determines the very size and structure of atoms. An atom as a whole, however, has a fairly definite location, corresponding to its comparatively massive nucleus. If atoms did not hold their positions fairly well, molecules would not exist. Quantum mechanics is not needed to prove these conclusions, since the molecular machines in the cell demonstrate that molecular machines work.
Will the thermal vibrations of molecules make molecular machines unworkable or too unreliable to use? Thermal vibrations cause greater problems than the uncertainty principle does. But in this case too, existing molecular machines directly demonstrate that molecular machines can work at ordinary temperatures. Despite thermal vibrations, the DNA-copying machinery in some cells makes fewer than one error in 100,000,000,000 operations. To achieve such accuracy, however, cells use machines (such as the enzyme DNA polymerase I) that proofread the copy and correct errors. Assemblers may well need similar error-checking and error-correcting abilities if they are to produce reliable results.
Will radiation destroy molecular machines or make them unfit for use? High-energy radiation can break chemical bonds and destroy molecular machines. Living cells once again show that solutions exist: they operate for years, repairing and replacing parts damaged by radiation. Yet because each individual machine is so tiny, it presents a small target for radiation, and radiation rarely hits it. Still, if a system of nanomachines is to be reliable, it must tolerate a certain amount of damage, and damaged parts must be repaired or replaced regularly. This approach to reliability is well familiar to designers of airplanes and spacecraft.
Evolution has failed to produce assemblers. Does this mean they are either impossible or useless? In answering the previous questions we referred in part to the already working molecular machines of cells. They constitute a simple and powerful proof that the laws of nature permit small groups of atoms to behave as controlled machines capable of building other nanomachines. Yet although they are fundamentally reminiscent of ribosomes, assemblers will differ from everything found in cells; although they consist of ordinary molecular motions and reactions, what they do will have new results. For example, no cell makes diamond fiber.
The case for the feasibility of assemblers and other nanomachines may seem sound, but why not simply wait and see whether they can really be developed? Sheer curiosity seems reason enough to explore the possibilities opened by nanotechnology, but there are stronger reasons. Nanotechnology will sweep the world within ten to fifty years, that is, within the span of our own lives or those of our family members. More importantly, the conclusions of the following chapter suggest that a 'wait and see' policy would be expensive: it would cost millions of lives, and perhaps life on Earth."
13. Our propensity to expect grandiose results only from grandiose causes
Drexler illustrates this error with the following counterexamples: "THE DULL FACT: some electric switches can switch each other on and off. These switches can be made very small and can consume very little electricity. THE GRANDIOSE CONSEQUENCE: if connected in the right way, these switches form computers, the machines of the information revolution... THE DULL FACT: mold and bacteria compete for food, so some molds have learned to secrete poisons that kill bacteria. THE GRANDIOSE CONSEQUENCE: penicillin, victory over many bacterial diseases, and the saving of millions of lives."
14. The mistaken belief that the parts of nanomachines will stick together because of quantum, van der Waals, and other forces
But the proteins in living cells do not stick together. The realization of nanotechnology proposed by Drexler - mechanical robots made of diamondoid, with gear teeth and wheels - is not the only one possible. Intermediate variants with a pseudo-biological design are also possible.
15. The mistaken belief that an active nanotechnological shield, similar to an immune system, will be an ideal protection against dangerous nanorobots
No immune system in reality, whether in living organisms or the antivirus software in computers, is absolutely reliable. Besides, autoimmune diseases exist. See the chapter on "active shields" for more detail.
16. The mistaken belief that Drexler is a fantasist, and that real nanotechnology consists of something else
I have encountered statements from experts in the field of nanotechnology to the effect that Drexler's nanorobots are fantasies, and that real nanotechnology consists in the detailed measurement of certain very fine parameters of small-scale structures. In fact, however, these lines of research lie at different levels. Drexler's research concerns the "design" level, in the same way that the idea of making a nuclear bomb once did; that is, it is at the level of the forest rather than the trees. And Eric Drexler is far from the only visionary of advanced nanotechnology connected with molecular manufacturing and nanorobots: one can also name R. Freitas and other staff of the Center for Responsible Nanotechnology.