MICHAEL C. HOROWITZ is Richard Perry Professor and Director of Perry World House at the University of Pennsylvania and Senior Fellow for Defense Technology and Innovation at the Council on Foreign Relations.
LAUREN KAHN is a Research Fellow focusing on defense innovation and emerging technologies at the Council on Foreign Relations.
LAURA RESNICK SAMOTIN is a Postdoctoral Research Scholar in National Security and Intelligence Studies at the Arnold A. Saltzman Institute of War and Peace Studies at Columbia University and a Nonresident Senior Fellow with the Atlantic Council’s New American Engagement Initiative.
Gunpowder. The combustion engine. The airplane. These are just some of the technologies that have forever changed the face of warfare. Now, the world is experiencing another transformation that could redefine military strength: the development of artificial intelligence (AI).
Merging AI with warfare may sound like science fiction, but AI is at the center of nearly all advances in defense technology today. It will shape how militaries recruit and train soldiers, how they deploy forces, and how they fight. China, Germany, Israel, and the United States have all used AI to create real-time visualizations of active battlefields. Russia has deployed AI to make deepfake videos and spread disinformation about its invasion of Ukraine. As the Russian-Ukrainian war continues, both parties could use algorithms to analyze large swaths of open-source data coming from social media and the battlefield, allowing them to better calibrate their attacks.
The United States is the world’s preeminent technological powerhouse, and in theory, the rise of AI presents the U.S. military with huge opportunities. So far, however, it has posed risks. Leading militaries often grow overconfident in their ability to win future wars, and there are signs that the U.S. Department of Defense could be falling victim to complacency. Although senior U.S. defense leaders have spent decades talking up the importance of emerging technologies, including AI and autonomous systems, action on the ground has been painfully slow. Beginning in 2003, for example, the U.S. Air Force and the U.S. Navy joined forces to create the X-45 and X-47A prototypes: semiautonomous, stealthy uncrewed aircraft capable of conducting surveillance and military strikes. But many military leaders viewed them as threats to the F-35 fighter jet, and the air force dropped out of the program. The navy then funded an even more impressive prototype, the X-47B, able to fly as precisely as human-piloted craft. But the navy, too, saw the prototype as a threat to crewed planes and eventually backed away, moving forward instead with an unarmed, uncrewed aircraft with far more limited capabilities.
The United States’ slow pace stands in stark contrast to the behavior of China, Washington’s most powerful geopolitical rival. Over the last few years, China has invested roughly as much as the United States in AI research and development, but it is integrating the technology into its military strategy, planning, and systems far more aggressively—potentially to defeat the United States in a future war. Unlike Washington, which dropped the X-45, the X-47A, and the X-47B, Beijing has developed an advanced, semiautonomous weaponized drone and is integrating it into its forces. Russia is also developing AI-enabled military technology that could threaten opposing forces and critical infrastructure, although such systems have so far been absent from its campaign against Ukraine. Unless Washington does more to integrate AI into its military, it may find itself outgunned.
But although falling behind on AI could jeopardize U.S. power, speeding ahead is not without risks. Some analysts and developers fear that AI advances could lead to serious accidents, including algorithmic malfunctions that cause civilian casualties on the battlefield. Some experts have even suggested that incorporating machine intelligence into nuclear command and control could make nuclear accidents more likely. That outcome is unlikely—most nuclear powers seem to recognize the danger of mixing AI with launch systems—and right now, Washington’s biggest concern should be that it is moving too slowly. But some of the world’s leading researchers believe that the Defense Department is ignoring the safety and reliability issues associated with AI, and the Pentagon must take their concerns seriously. Successfully capitalizing on AI requires the U.S. military to innovate at a pace that is both safe and fast, a task far easier said than done.
The Biden administration is taking positive steps toward this goal. It created the National Artificial Intelligence Research Resource Task Force, which is charged with spreading access to research tools that will help promote AI innovation for both the military and the overall economy. It has also created the position of chief digital and artificial intelligence officer in the Department of Defense; that officer will be tasked with ensuring that the Pentagon scales up and expedites its AI efforts.
But if the White House wants to move with responsible speed, it must take further measures. Washington will need to focus on making sure researchers have access to better—and more—Department of Defense data, which will fuel effective algorithms. The Pentagon must reorganize itself so that its agencies can easily collaborate and share their findings. It should also create incentives to attract more STEM talent, and it must make sure its personnel know they won’t be penalized if their experiments fail. At the same time, the Department of Defense should run successful projects through a gauntlet of rigorous safety testing before it implements them. That way, the United States can rapidly develop a panoply of new AI tools without worrying that they will create needless danger.