A Force for the Future
By Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin

RISKY BUSINESS
Given the stakes, the defense establishment is right to worry about Washington’s torpid pace of defense innovation. But outside the government, many analysts have the opposite fear: if the military moves too quickly as it develops AI weaponry, the world could experience deadly—and perhaps even catastrophic—accidents.
It doesn’t take an expert to see the risks of AI: killer robots have been a staple of pop culture for decades. But science fiction isn’t the best indicator of the actual dangers. Fully autonomous, Terminator-style weapons systems would require high-level machine intelligence, which even optimistic forecasts suggest is more than half a century away. One group of analysts made a movie about “Slaughterbots,” swarms of autonomous systems that could kill on a mass scale. But any government or nonstate actor looking to wreak that level of havoc could accomplish the same task more reliably, and more cheaply, with traditional weapons.
Instead, the danger of AI stems from deploying algorithmic systems, both on and off the battlefield, in ways that can lead to accidents, malfunctions, or even unintended escalation. Algorithms are designed to be fast and decisive, which can produce mistakes in situations that call for careful (if quick) deliberation. In 2003, for example, the automated system of a U.S. MIM-104 Patriot surface-to-air missile battery misidentified a friendly aircraft as an adversary, and its human operators failed to correct the error, resulting in the friendly-fire death of a U.S. Navy F/A-18 pilot. Research shows that the more cognitively demanding and stressful a situation is, the more likely people are to defer to AI judgments. In a battlefield environment where many military systems are automated, such accidents could multiply.
Humans, of course, make fatal errors as well, and trusting AI may not seem like a mistake in itself. But people tend to be overconfident about the accuracy of machines, and in reality, even very good AI algorithms could be more accident-prone than humans. People can weigh nuance and context when making decisions, whereas AI algorithms are trained to render clear verdicts and to operate under specific sets of circumstances. If entrusted to launch missiles or to run air defense systems outside their normal operating parameters, AI systems might malfunction destructively and launch unintended strikes. The attacking country could then find it difficult to convince its opponent that the strikes were a mistake, and depending on the size and scale of the error, the ultimate outcome could be a ballooning conflict.
This has frightening implications. AI-enabled machines are unlikely ever to be given the power to actually launch nuclear attacks, but algorithms could eventually make recommendations to policymakers about whether to launch a weapon in response to an alert from an early warning system. If the AI gave the green light, the soldiers supervising and double-checking these machines might not be able to adequately examine its output or catch errors in the input data, especially if the situation were moving extremely quickly. The result could be the inverse of an infamous 1983 incident in which a Soviet officer, Lieutenant Colonel Stanislav Petrov, arguably saved the world when, correctly suspecting a false alarm, he declined to treat an alert from an automated early warning system as a real attack. The system had mistaken sunlight reflecting off clouds for inbound ballistic missiles.
