Opening Packet – Negative – HSS 2017 Offcase Materials




2nc – nano impact

It’s coming fast and lab settings prove it’s feasible


Prado, 13 -- Mark Evan Prado, Retired Physicist in Advance Planning of the Space Program at the Pentagon, “Nanotechnology”, http://www.gainextinction.com/nanotechnology.html

In other words, if there is a chance it may help people look more beautiful, or live longer, or cure their diseases, or make a lot of money in this business, then these desires for personal benefits may make people less inclined to believe other people questioning the safety of nanotechnology, and more inclined to believe the public relations pitches of promoters of this multimillion and multibillion dollar business. You can always find experts to give you a supportive opinion. Often overlooked are concerns that some kinds of random nanotechnology particles in the environment could actually cause accelerated aging and diseases in a variety of ways including as irritants and entering cells and causing chromosomal damage. Things like these have already been experienced in the laboratory as well as in manufacturing environments where nano scale microscopic particles are in the air. As nanotechnology applications gain success in the biotechnology realm, and desires grow for nanotechnology to address diseases as well as provide beauty treatments and other biological enhancements, you can try to imagine where the nanotechnology economy will be in 2020-2025, and thereby where the human species will be going on the risk scale. This will be before we will have self-sufficient space colonies. A wise man once said something like: The greatest fortresses can still be defeated or undermined by one simple and ages old weapon: money. Now we can also say that it's likely the extinction of the human species will likewise be because of the desire to make money and all the power that money brings.

Multiple scenarios for nano-extinction---form of development determines the impact


Pamlin, 15 -- Dennis Pamlin, Executive Project Manager of the Global Risks Global Challenges Foundation, and Stuart Armstrong, James Martin Research Fellow at the Future of Humanity Institute of the Oxford Martin School at University of Oxford, Global Challenges Foundation, February, http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact.pdf

2. Continuing research – into the transformative aspects, not just standard materials science – is required for nanotechnology to become a viable option for manufacturing. 3. Military nanotechnology research increases the chance that nanotechnology will be used for effective weapons production, and may lead to an arms race. 4. Global coordination allows for regulatory responses, and may mitigate the effect of possible collapse of trade routes. 5. The general mitigation efforts of most relevance to nanotechnology are probably in surveillance and improved international relations. 6. Nanoterrorism is one way in which humanity could lose control of aggressive nanotechnology. 7. Nanotechnology-empowered warfare could spiral out of control, or could lead to the deployment of uncontrolled aggressive nanotechnology. The risk would be acute if small groups were capable of effective nanowarfare on their own. 8. Uncontrolled aggressive nanotechnology is a scenario in which humanity unleashes weapons that it cannot subsequently bring under control, which go on to have independent negative impacts on the world. 9. The direct casualties of an uncontrolled nanotechnology are hard to estimate, as they depend critically on the nature of the nanotechnology, the countermeasures used, and the general technological abilities of the human race after nanotechnology development. The casualties from nanowarfare are similarly hard to determine, as it is unclear what would be the most effective military use of nanoweapons, and whether this would involve high or low casualties (contrast mass nuclear weapons with targeted shutdown of information networks). 10. Disruption of the world political and economic system (exacerbated by the collapse of trade routes or nanowarfare) could lead to further casualties.

AI – 2nc

Strong risk reduction key to prevent AI-driven extinction---it’s uniquely likely, but success solves every impact


Pamlin, 15 -- Dennis Pamlin, Executive Project Manager of the Global Risks Global Challenges Foundation, and Stuart Armstrong, James Martin Research Fellow at the Future of Humanity Institute of the Oxford Martin School at University of Oxford, Global Challenges Foundation, February, http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact.pdf

Despite the uncertainty of when and how AI could be developed, there are reasons to suspect that an AI with human-comparable skills would be a major risk factor. AIs would immediately benefit from improvements to computer speed and any computer research. They could be trained in specific professions and copied at will, thus replacing most human capital in the world, causing potentially great economic disruption. Through their advantages in speed and performance, and through their better integration with standard computer software, they could quickly become extremely intelligent in one or more domains (research, planning, social skills...). If they became skilled at computer research, the recursive self-improvement could generate what is sometimes called a “singularity”, 482 but is perhaps better described as an “intelligence explosion”, 483 with the AI’s intelligence increasing very rapidly.484 Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime),485 and would probably act in a way to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.486 And if these motivations do not detail 487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk,488 in that extinction is more likely than lesser impacts. An AI would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction. On a more positive note, an intelligence of such power could easily combat most other risks in this report, making extremely intelligent AI into a tool of great positive potential as well.489 Whether such an intelligence is developed safely depends on how much effort is invested in AI safety (“Friendly AI”)490 as opposed to simply building an AI.49

AI outweighs---only risk of extinction, a categorically-distinct impact


Shulman 11 – Carl Shulman, Research Fellow at the Machine Intelligence Research Institute, “Arms Races and Intelligence Explosions (Extended Abstract),” April, http://singularityhypothesis.blogspot.com/2011/04/arms-races-and-intelligence-explosions.html

Not only is the arms race dynamic important for the evaluation of many aspects of the singularity hypothesis, it is also an area where existing empirical evidence and theory can be brought to bear from the study of nuclear weapons. This paper discusses some key parameters on which a race to intelligence explosion might differ from the historical race to nuclear explosion: the potential for small differences in research progress to produce massive military disparities in an intelligence explosion, the high risks of accidental catastrophe during research and development, and additional barriers to verification and enforcement of arms control treaties. Collectively, these factors suggest that states would have more to gain from AI control than nuclear control treaties, but would also face greater challenges in coordinating.

II. An AI arms race may be “winner-take-all”

The threat of an AI arms race does not appear to be primarily about the direct application of AI to warfare. While automated combat systems such as drone aircraft have taken on greatly increased roles in recent years (Singer, 2009; Arkin, 2009), they do not greatly disrupt the balance of power between leading militaries: slightly lagging states can use older weapons, including nuclear weapons, to deter or defend against an edge in drone warfare.

Instead, the military impact of an intelligence explosion would seem to lie primarily in the extreme acceleration in the development of new capabilities. A state might launch an AI Manhattan Project to gain a few months or years of sole access to advanced AI systems, and then initiate an intelligence explosion to greatly increase the rate of progress. Even if rivals remain only a few months behind chronologically, they may therefore be left many technological generations behind until their own intelligence explosions. It is much more probable that such a large gap would allow the leading power to safely disarm its nuclear-armed rivals than that any specific technological generation will provide a decisive advantage over the one immediately preceding it.

If states do take AI potential seriously, how likely is it that a government's “in-house” systems will reach the point of an intelligence explosion months or years before competitors? Historically, there were substantial delays between when the first five nuclear powers tested bombs in 1945, 1949, 1952, 1960, and 1964. The Soviet Union's 1949 test benefited from extensive espionage and infiltration of the Manhattan Project, and Britain's 1952 test reflected formal joint participation in the Manhattan Project.

If the speedup in progress delivered by an intelligence explosion were large, such gaps would allow the leading power to solidify a monopoly on the technology and military power, at much lower cost in resources and loss of life than would have been required for the United States to maintain its nuclear monopoly of 1945-1949. To the extent that states distrust their rivals with such complete power, or wish to exploit it themselves, there would be strong incentives to vigorously push forward AI research, and to ensure government control over systems capable of producing an intelligence explosion.

In this paper we will discuss factors affecting the feasibility of such a localized intelligence explosion, particularly the balance between internal rates of growth and the diffusion of or exchange of technology, and consider historical analogs including the effects of the Industrial Revolution on military power and nuclear weapons.

III. Accidental risks and negative externalities

A second critical difference between the nuclear and AI cases is in the expected danger of development, as opposed to deployment and use. Manhattan Project scientists did consider the possibility that a nuclear test would unleash a self-sustaining chain reaction in the atmosphere and destroy all human life, conducting informal calculations at the time suggesting that this was extremely improbable. A more formal process conducted after the tests confirmed the earlier analysis (Konopinski, Marvin, & Teller, 1946), although it would not have provided any protection had matters been otherwise. The historical record thus tells us relatively little about the willingness of military and civilian leaders to forsake or delay a decisive military advantage to avert larger risks of global catastrophe.

In contrast, numerous scholars have argued that advanced AI poses a nontrivial risk of catastrophic outcomes, including human extinction (Bostrom, 2002; Chalmers, 2010; Friedman, 2008; Hall, 2007; Kurzweil, 2005; Moravec, 1999; Posner, 2004; Rees, 2004; Yudkowsky, 2008). Setting aside anthropomorphic presumptions of rebelliousness, a more rigorous argument (Omohundro, 2007) relies on the instrumental value of such behavior for entities with a wide variety of goals that are easier to achieve with more resources and with adequate defense against attack. Many decision algorithms could thus appear benevolent when in weak positions during safety testing, only to cause great harm when in more powerful positions, e.g. after extensive self-improvement.

Given abundant time and centralized careful efforts to ensure safety, it seems very probable that these risks could be avoided: development paths that seemed to pose a high risk of catastrophe could be relinquished in favor of safer ones. However, the context of an arms race might not permit such caution. A risk of accidental AI disaster would threaten all of humanity, while the benefits of being first to develop AI would be concentrated, creating a collective action problem insofar as tradeoffs between speed and safety existed.

A first-pass analysis suggests a number of such tradeoffs. Providing more computing power would allow AIs to either operate at superhumanly fast timescales or to proliferate very numerous copies. Doing so would greatly accelerate progress, but also render it infeasible for humans to engage in detailed supervision of AI activities. To make decisions on such timescales AI systems would require decision algorithms with very general applicability, making it harder to predict and constrain their behavior. Even obviously risky systems might be embraced for competitive advantage, and the powers with the most optimistic estimates or cavalier attitudes regarding risk would be more likely to take the lead.

IV. Barriers to AI arms control

Could an AI arms race be regulated using international agreements similar to those governing nuclear technology? In some ways, there are much stronger reasons for agreement: the stability of nuclear deterrence, and the protection afforded by existing nuclear powers to their allies, mean that the increased threat of a new nuclear power is not overwhelming. No nuclear weapons have been detonated in anger since 1945. In contrast, simply developing AI capable of producing an intelligence explosion puts all states at risk from the effects of accidental catastrophe, or the military dominance engendered by a localized intelligence explosion.

However, AI is a dual-use technology, with incremental advances in the field offering enormous economic and humanitarian gains that far outweigh near-term drawbacks. Restricting these benefits to reduce the risks of a distant, novel, and unpredictable advance would be very politically challenging. Superhumanly intelligent AI promises even greater rewards: advances in technology that could vastly improve human health, wealth, and welfare while addressing other risks such as climate change. Efforts to outright ban or relinquish AI technology would seem to require strong evidence of very high near-term risks. However, agreements might prove highly beneficial if they could avert an arms race and allow for more controlled AI development with more rigorous safety measures, and sharing of the benefits among all powers.

Such an agreement would face increased problems of verification and enforcement. Where nuclear weapons require rare radioactive materials, large specialized equipment, and other easily identifiable inputs, AI research can proceed with only skilled researchers and computing hardware. Verification of an agreement would require incredibly intrusive monitoring of scientific personnel and computers throughout the territory of participating states. Further, while violations of nuclear arms control agreements can be punished after the fact, a covert intelligence explosion could allow a treaty violator to withstand later sanctions.

These additional challenges might be addressed in light of the increased benefits of agreement, but might also become tractable thanks to early AI systems. If those systems do not themselves cause catastrophe but do provide a decisive advantage to some powers, they might be used to enforce safety regulations thereafter, providing a chance to “go slow” on subsequent steps.

V. Game-theoretic model of an AI arms race

In the full paper, we present a simple game-theoretic model of a risky AI arms race. In this model, the risk of accidental catastrophe depends on the number of competitors, the magnitude of random noise in development times, the exchange rate between risk and development speed, and the strength of preferences for developing safe AI first.
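To make the shape of such a model concrete, here is a minimal Monte Carlo sketch in Python of a race in which competitors trade safety for speed. It is not the model from the full paper: the parameter names (n_competitors, noise, risk_speedup) and the functional forms are assumptions chosen only to illustrate how the number of competitors, timing noise, and the risk-speed tradeoff could interact.

import random

def simulate_race(n_competitors=3, noise=1.0, risk_speedup=2.0, trials=10000):
    """Toy Monte Carlo version of a risky development race.

    Each competitor picks a 'recklessness' level in [0, 1]; cutting safety
    corners shortens its development time (by up to risk_speedup) but raises
    the chance that the winning project causes an accidental catastrophe.
    Development times also include Gaussian noise. All names and functional
    forms here are illustrative assumptions, not the paper's actual model.
    """
    catastrophes = 0
    for _ in range(trials):
        recklessness = [random.random() for _ in range(n_competitors)]
        finish_times = [10.0 - risk_speedup * r + random.gauss(0.0, noise)
                        for r in recklessness]
        winner = finish_times.index(min(finish_times))
        # The winner's accident probability grows with how much safety it skipped.
        if random.random() < recklessness[winner]:
            catastrophes += 1
    return catastrophes / trials

# More competitors or noisier timelines tend to reward recklessness,
# pushing the simulated catastrophe rate up.
print(simulate_race(n_competitors=2), simulate_race(n_competitors=6))

On typical runs the simulated accident rate rises as the number of competitors grows, since the winner tends to be whichever project cut the most safety corners – the collective action problem described in section III.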

VI. Ethical implications and responses

The above analysis highlights two important possible consequences of advanced AI: a disruptive change in international power relations and a risk of inadvertent disaster.

From an ethical point of view, the accidental risk deserves special attention since it threatens human extinction, not only killing current people but also denying future generations existence (Matheny, 2007; Bostrom, 2003). While AI systems would outlive humanity, they might lack key features contributing to moral value, such as individual identities, play, love, and happiness (Bostrom, 2005; Yudkowsky, 2008). Extinction risk is a distinctive feature of AI risks: even a catastrophic nuclear war or engineered pandemic that killed billions would still likely allow survivors to eventually rebuild human civilization, while AIs killing billions would likely not leave survivors (Sandberg & Bostrom, 2008).

2nc – AI timeframe

AI’s coming within a decade and causes extinction---safety planning’s key


Pamlin, 15 -- Dennis Pamlin, Executive Project Manager of the Global Risks Global Challenges Foundation, and Stuart Armstrong, James Martin Research Fellow at the Future of Humanity Institute of the Oxford Martin School at University of Oxford, Global Challenges Foundation, February, http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact.pdf

The authors then turned to analysing the AI safety proposals, dividing them into proposals for societal action, external constraints, and internal constraints. They found that many proposals seemed to suffer from serious problems, or to be of limited effectiveness. They concluded by reviewing the proposals they thought most worthy of further study, including AI confinement, Oracle AI, and motivational weaknesses. For the long term, they thought the most promising approaches were value learning (with human-like architecture as a less reliable but possibly easier alternative). Formal verification was valued, whenever it could be implemented. 01-Oct-13: Publication of “Our Final Invention: Artificial Intelligence and the End of the Human Era” by James Barrat, warning of the dangers of AI 519 – Research, In this book, James Barrat argues for the possibility of human-level AI being developed within a decade, based on the current progress in computer intelligence and the large sums invested by governments and corporations into AI research. Once this is achieved, the AI would soon surpass human intelligence, and would develop survival drives similar to humans (a point also made in Omohundro’s “AI drives” thesis).520 The book then imagines the competition between humanity and a cunning, powerful rival, in the form of the AI – a rival, moreover, that may not be “evil” but simply harmful to humanity as a side effect of its goals, or simply through monopolising scarce resources. Along with many interviews of researchers working in the forefront of current AI development, the book further claims that without extraordinarily careful planning,521 powerful “thinking” machines present potentially catastrophic consequences for the human race.



2nc – Regs key

EPA regs that are proactive and flexible prevent dangerous nano but capture its upsides


Reese 13 – Michelle Reese, J.D., 2013, Case Western Reserve University School of Law, “Nanotechnology: Using Co-regulation to Bring Regulation of Modern Technologies into the 21st Century”, Health Matrix: Journal of Law-Medicine, 23 Health Matrix 537, Fall, Lexis

Nevertheless, nanotechnology may also present new risks. Scientists are not sure whether nanotechnology poses any serious health hazards to humans or the environment. Considering our wide exposure to nanotechnology, it is critical that we identify potential risks and impose regulations that strike a balance between accessing the benefits of nanotechnology and limiting the foreseeable harm to the environment and public health.

Nanotechnology is the manipulation of matter on an atomic scale to create tiny, functional structures. n3 These structures are incredibly small: one nanometer is precisely one-billionth of a meter. n4 Nanotechnology is defined as the production of materials that are between one and one-hundred nanometers in size. n5 Although they cannot be seen with the naked eye, these microscopic structures called "nanoparticles" have been proven to benefit humans in a variety of ways. For example, they can lead to new medical treatments. n6 They also can be used to develop [*539] building materials with a very high strength-to-weight ratio. n7 Sunscreen and cosmetics that make use of nanoparticles apply more smoothly and evenly to human skin. n8 Other examples of products that utilize nanoparticles include stain-resistant clothing, lightweight golf clubs, bicycles, car bumpers, antimicrobial wound dressings, and synthetic bones. n9

While there are many benefits presented by nanotechnology, there are also potential risks. Studies have indicated that nanoparticles called carbon nanotubes act like asbestos within the human body. n10 Cells that are exposed to nanostructures called "buckyballs" n11 have been shown to undergo slowed or even halted cell division. n12 In general, the small size and high surface-area-to-volume ratio of nanoparticles indicates a higher potential for toxicity. n13



The application of nanotechnology to drug development has aided the treatment of common life-threatening diseases while concurrently posing toxic side effects. n14 For example, carbon nanotubes n15 may be used to enhance cancer treatments, but there is also an indication that the nanotubes themselves might ironically have a carcinogenic effect on the human body. n16 Certain nanoparticles can be used to enhance water filtration systems, but there are concerns that the production of nanoscale products may lead to new types of water pollution. n17 Common [*540] to these examples is the difficulty in determining whether the benefits of nanotechnology will outweigh the risks.

One place to turn for answers is the regulatory agency tasked with investigating the risks posed by nanotechnology. The Environmental Protection Agency (EPA) has the regulatory authority to assess the environmental and public health risks associated with nanotechnology, and to prescribe regulations as needed to prevent or reduce those risks. n18 Unfortunately, authority to assess those risks does not mean the EPA has adequate tools to do so. n19 Nanotechnology is becoming ubiquitous as the industry continues to expand, and new products are being created every day. n20 The need for thorough risk assessment, followed by appropriate risk management, is becoming more important as potential environmental and public exposure to nanoparticles is becoming more common. n21

Nanotechnology is not categorically dangerous. n22 The current danger is that it is unknown whether nanoparticles present any risks to the environment and public health. As more common household products are created or enhanced with nanoparticles, public exposure to nanotechnology is increasing rapidly. n23 This increasing public exposure indicates an urgent need for risk assessment. And as exposure increases, it becomes more important that the EPA be able to determine what risks will accompany that exposure, if any, so that it can properly balance the risks against the benefits and promulgate the most effective rules.

Generally speaking, the EPA is familiar with assessing risks and regulating new products. The EPA has authority through the Toxic Substances Control Act (TSCA) to regulate chemical manufacturing. n24 TSCA requires manufacturers to inform the EPA of the potential risks associated with a new product, or new uses for an existing product, before production begins. n25 This gives the EPA an opportunity to prohibit or limit the manufacturing of that substance. n26 While this seems [*541] to suggest that the EPA is well-equipped to manage the potential risks of products containing nanoparticles, some say that TSCA is outdated and that it will be difficult to use this older statute to regulate modern technology. n27

AT: Trump turn

Knefel not offense – says it’s unregulated now and doesn’t say Trump regs would be harmful




Nano-AI regulation doesn’t require a NEW agency – EPA, OSHA necessary frontline for emerging tech




Agencies will regulate – bureaucracy


Pazzanese 11-23 [Christina, two decades of experience as a print and digital journalist for both consumer and trade press, 11/23/16, “Trump and the law,” http://news.harvard.edu/gazette/story/2016/11/trump-and-the-law/]

Also, will Trump make good on campaign promises to take a wholesale look at existing regulations and demand agencies toss those guidelines that don’t make sense to him, something the Obama administration was quite aggressive in doing? While tempting politically, there could be difficulties making that strategy work with civil servants who have substantial technical expertise, past experience with such clean-ups, and their own ideas about what should be done, and how.



