The Real Value of Artificial Intelligence in Nuclear Command and Control

Philip Reiner and Alexa Wehsener

America needs to step it up a notch. We live in a world of constant technological evolution. The Bay Area, specifically, is at the forefront of the nation’s technology creation and development. A five-hour flight to Washington or a quick tune-in to Congressional hearings, however, may leave you pondering whether the two exist in parallel universes. For many years, the government drove innovation; this is simply no longer the case. The Department of Defense knows it must catch up. Why, then, is there not a more reasoned and informed public debate beyond Terminator GIFs and exasperated tweets when it comes to artificial intelligence (AI) and nuclear weapons?

“America Needs a Dead Hand,” a recent War on the Rocks article, got people’s attention. The authors, Adam Lowther and Curtis McGiffin, argued that, in order to modernize and keep its Cold War-era nuclear command, control, and communications (NC3) system credible against increasingly sophisticated adversaries, America should develop and deploy an “automated strategic response system based on artificial intelligence.” They explained that such a move was necessary since “Russian and Chinese nuclear modernization is rapidly compressing the time U.S. leaders will have to detect a nuclear launch, decide on a course of action, and direct a response.”

Whatever one thinks of their policy recommendations, Lowther and McGiffin’s argument highlights a number of issues that demand greater critical analysis and consideration. First, it signals that there is likely an intense internal debate going on within the U.S. government — one strong enough to be spilling into the public domain. Second, artificial intelligence will be integrated into America’s NC3, if for no other reason than that it already has been, to varying degrees. The question is not whether AI will be integrated into NC3, but rather where, to what extent, and at what risk. Third, as legacy NC3 systems age out, the national security community is unfortunately not equipped to handle the design and execution of the “NextGen” architecture alone. These issues aren’t going away, and they deserve to be taken seriously by the brightest minds in this country.

Experts outside the Beltway, many of whom are concentrated in (but by no means limited to) Silicon Valley, have a critical role to play in the discussion. Public-private partnerships are arguably more important here than in the numerous other debates swirling around tech and security policy these days. The number of experts putting their minds to this set of challenges needs to increase. U.S. adversaries are likely already moving to integrate new types of AI into their own NC3 systems. That does not demand a “dead hand” solution as a response — an automated system that processes indications and warning and is authorized to make launch decisions with humans outside the loop — but it does mean that a response is necessary.

An Overdue Public Conversation

A public conversation about nuclear weapons and artificial intelligence is overdue. To be precise, a debate needs to be held regarding NC3 and cutting-edge machine learning techniques. Lowther and McGiffin’s article is not the only data point indicating that these issues are likely already the subject of a heated internal debate within U.S. national security departments and agencies. There is broader evidence that confirms this, including public comments regarding the integration of AI into the NextGen NC3 architecture made by former USSTRATCOM Commander General Hyten. When asked about AI’s effect on NC3 and nuclear modernization, Hyten said, “I think AI can play an important part.” Hyten’s designated lead for these matters, the director of the recently established USSTRATCOM NC3 Enterprise Center, Elizabeth Durham-Ruiz, has explicitly and publicly stated the need to recruit and retain AI experts for the NC3 modernization effort:

We need to be innovative in our approaches while accessing the talent needed to enhance our current workforce and go fast while we partner with academia and industry to establish the pipelines to build a talent workforce for the long term. We need to grow more systems engineers, computer engineers, data scientists, cognitive scientists, and artificial-intelligence and machine-learning scientists and engineers, to name a few.

In addition, the effort by the Defense Science Board Task Force on National Leadership Command Capability to investigate a variety of areas where novel AI will have an impact on national security demonstrates that the question of integrating artificial intelligence into NC3 is not mere conjecture, but rather an established and ongoing set of deliberations. This debate is happening and it is important; waving it off as a Skynet Hollywood delusion misses the most critical point.

AI, NC3, and Closing the Gap

Various types of AI have long been integrated into American and Russian NC3. During the Cold War, both countries incorporated what were then considered cutting-edge AI tools to automate their detection and early warning systems of systems. In addition, both embarked on research that aimed to integrate greater levels of sophisticated AI-enabled automation into those systems.

In Russia, the result was the Perimetr project, “an automatic system of signal rockets used to beam radio messages to launch nuclear missiles if other means of communication were knocked out.” The dead hand system was not as ominous as it sounds: humans were to remain fully involved, no matter what. The system, most likely employed within the Perimetr project, simply used a machine to aggregate changes in Soviet command and control and present those changes to the human who was ultimately in charge of the nuclear button. Washington took a similar approach. Its Emergency Rocket Communications System was similar to Perimetr, but “it was never combined into a system analogous to the dead hand out of fear of accidents.”

Concerns about gaps in America’s NC3 capabilities need to be taken seriously. Our recent research and collaborative efforts have asked how novel artificial intelligence techniques — namely deep learning — may be integrated into the vast NC3 enterprise and potentially address some of these risks. Some may consider AI-enabled automation a kind of dead hand — the inference being that humans turn over critical decision-making tasks to machines that then simply flip yes or no based on learned indications and warning. In fact, the preponderance of novel artificial intelligence techniques fall under the machine-learning umbrella, and are more often than not based on deep learning. These statistical methods, which are growing in popularity because of their broad applications, do not appear dead at all: rather than executing fixed rules, they continually derive their behavior from data passed through the deep layers of neural networks.
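
To make that distinction concrete, consider the difference between a hard-coded trigger and a learned model. The sketch below is a minimal, purely illustrative Python example contrasting a fixed rule of the sort associated with “dead hand” automation against a small neural network trained on synthetic data; every feature, threshold, and label in it is our own invention, and it describes no fielded system.

```python
# Illustrative only: contrasting rule-based automation with a learned classifier.
# All features, thresholds, and data here are synthetic inventions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rule_based_alert(reading: float, threshold: float = 0.9) -> bool:
    """A fixed if/then rule: its behavior never changes after deployment."""
    return reading > threshold

# A small feed-forward network -- deep learning in miniature -- whose behavior
# is induced from training data rather than written down by hand.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))                      # three synthetic sensor features
y_train = (X_train.sum(axis=1) > 1.8).astype(int)   # synthetic alert labels

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# The rule emits a fixed yes/no; the learned model emits a probability
# shaped entirely by the data it was shown.
print(rule_based_alert(0.95))                   # True
print(model.predict_proba([[0.9, 0.8, 0.7]]))   # e.g., [[0.02, 0.98]]
```

The point of the contrast is that a learned component’s behavior is a statistical artifact of its training data, which is precisely why the verification and validation questions raised below matter so much.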

Most importantly, a critical analytical question we have asked is where exactly within the NC3 architecture these novel deep learning techniques would provide the positive changes that many are looking to achieve. Just how many elements of the vast NC3 enterprise would actually be improved by the integration of these new AI tools? Upon closer investigation, those systems — if they are to take advantage of a deep learning approach — would reside well short of the creation of a Skynet environment in which humans no longer make nuclear launch decisions. It must be asked, however: can such advances be integrated, and the necessary levels of trust built in, through rigorous verification and validation? Where would they introduce unacceptable levels of risk? Asking these questions — while keeping front of mind that the integration of deep learning techniques would inevitably also create novel opportunities for attack by our adversaries — is of the utmost importance. This is about the command of global nuclear weapons; do-overs are not acceptable. “Fail fast, fail often” is not applicable here.
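
As one narrow illustration of what quantitative validation of such a component might look like, the sketch below evaluates a hypothetical warning classifier against held-out test data and reports its false alarm and miss rates. The model, the data, and the metrics are all invented assumptions of ours; real NC3 verification and validation would demand vastly more than test-set statistics.

```python
# Illustrative only: a toy validation harness for a hypothetical warning classifier.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((2000, 3))               # synthetic sensor features
y = (X.sum(axis=1) > 1.8).astype(int)   # synthetic ground truth

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                      random_state=1).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
false_alarm_rate = fp / (fp + tn)   # false positives: spurious warnings
miss_rate = fn / (fn + tp)          # false negatives: missed events
print(f"false alarms: {false_alarm_rate:.3f}, misses: {miss_rate:.3f}")

# Caveat: these statistics bound behavior only on data resembling the test
# set; they say nothing about novel inputs or adversarial manipulation.
```

Even in this toy setting, the final caveat is the crux: a clean test-set score is necessary for trust, but nowhere near sufficient.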

We are convinced that time is most usefully spent debating the technical positives and negatives of such integration, in a manner that neither dismisses perspectives with “that’s crazy” or “just don’t,” nor settles for statements as vague as the call for an “automated strategic response system based on artificial intelligence.” Anyone knowledgeable about AI and these matters should weigh in with their insights on AI safety and security to shape the world that is coming. The experts on the inside could use the assist.

Despite the opening of discussions between policy and AI experts, some convened by Tech4GS — a Bay Area think tank and accelerator that both of us represent — considerably more work needs to be done to bring thought leaders from these communities into a deeper analytical discussion. Tech experts in San Francisco, Silicon Valley, and elsewhere bring unique insights to such a debate. While it is not an entirely technical discussion (the establishment of norms is paramount), public-private partnerships in this space must be prioritized and expanded. General Hyten, for one, has made clear the need to prioritize these approaches by issuing broad calls for collaboration with the private sector. Unlike in other technology domains, it also remains unclear whether the private sector in the United States will end up as the global industry leader when it comes to AI. While the United States currently leads in AI innovation, rivals like China are making rapid progress.

America’s NC3 “System of Systems” and Machine Learning

We believe it would be prudent to explore the viability of integrating various deep learning techniques into the 21st-century “system of systems” NC3 architecture. U.S. NC3 will be modernized in the coming decade; there is no question about it. As the ways in which adversaries engage in warfare shift, there is technology available that could potentially bring game-changing advances to a wide range of NC3-related areas — including obvious ones like data analytics and decision-support systems. We are also concerned, however, with what vulnerabilities may be inadvertently introduced, and whether these techniques are truly ready for use.

America’s NC3 system is complex. Various analyses offer widely differing estimates of the number of sub-systems that comprise NC3, ranging from 107 to 240. That makes our analytical task challenging — as do justifiable classification issues. We have been able to systematically identify about 99 of those 200-plus systems via unclassified, open-source resources. Of those 99 systems, through interviews with both AI/machine learning and NC3 experts, we determined that about 39 percent could be plausible candidates for deep learning integration during NextGen NC3 modernization efforts. Again, the domains that could see advantages potentially include systems responsible for frequency modulation, signal processing, and voice and data communications. These include NC3 systems such as the Modified Miniature Receiver Terminal (MMRT), systems within the Common Submarine Radio Room (CSRR), and the Fixed Satellite Service (FSS). To be clear: none of the designers of next-generation architecture that we have spoken with have told us they intend to integrate deep learning into these systems. However, extrapolating that 39 percent rate to the top-line assessments of how many systems there are in the U.S. NC3 enterprise, we could be talking about 93-plus NC3 systems that could see the integration of deep learning into their software and hardware. Those 93-plus systems do not even include space systems; those conducting the analyses we are privy to in the unclassified space have noted that space systems are not usually counted. Many of those space systems are part of the NC3 enterprise, however, and are also prime candidates for the integration of deep learning techniques. While in sum this does not amount to a “strategic automated response system,” it does point to the potential inclusion of a tremendous amount of deep-learning-based AI.
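
The arithmetic behind these top-line figures is simple enough to show explicitly. The short sketch below applies the 39 percent rate observed among the 99 publicly identifiable systems across the published range of NC3 size estimates; the linear extrapolation is our assumption, made only to show where a figure like 93-plus comes from.

```python
# Illustrative arithmetic: extrapolating the 39 percent "plausible candidate"
# rate from the 99 publicly identifiable systems to published NC3 estimates.
candidate_fraction = 0.39   # share judged plausible deep-learning candidates

for total_estimate in (107, 240):
    candidates = total_estimate * candidate_fraction
    print(f"if NC3 comprises {total_estimate} systems: "
          f"~{candidates:.0f} plausible candidates")

# Output:
# if NC3 comprises 107 systems: ~42 plausible candidates
# if NC3 comprises 240 systems: ~94 plausible candidates
```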

Should these 93-plus systems have deep learning integrated into their hardware and software, and, if so, which ones? What would be the consequences, and what new types of vulnerabilities would be introduced? What would be the rewards? What is the potential for accrued risk within a large stack of subsystems (i.e., what compounded error rate, and attendant risk, is created when an increasing number of subsystems rely on vulnerable deep learning techniques to provide information up the stack)? What does this mean for “dual phenomenology”? Is it possible to test for those risks, or will the larger, complex systems within NC3 simply have emergent properties that will need to be accounted for as they arise? Can vulnerabilities be addressed through rigorous verification and validation practices, or will officials be prevented from doing so by the inexplicable ways in which deep learning systems arrive at their conclusions?
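
The compounding question, at least, can be framed quantitatively. As a deliberately oversimplified sketch: if each subsystem in a reporting chain errs independently with some small probability, the chance that at least one error enters the stack grows rapidly with its depth. The error rates and the independence assumption below are ours alone, chosen purely to illustrate the shape of the problem.

```python
# Illustrative only: compounding of per-subsystem error rates up a stack.
# Real failures are unlikely to be independent or identically distributed.
def stack_error_probability(per_system_error: float, n_systems: int) -> float:
    """P(at least one subsystem errs) = 1 - (1 - p)^n, assuming independence."""
    return 1 - (1 - per_system_error) ** n_systems

for p in (0.001, 0.01):
    for n in (10, 39, 93):
        print(f"p={p:<5}  n={n:>2}: {stack_error_probability(p, n):.3f}")

# With a 1 percent per-subsystem error rate and 93 subsystems, the chance of
# at least one erroneous input somewhere in the stack is already about 0.61.
```

Under the same toy assumptions, the appeal of “dual phenomenology” is also visible: requiring two independent chains to agree means both must err for a false indication to survive, on the order of p squared rather than p.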

While our research into AI and NC3 is ongoing, we are actively searching for answers to these questions by cultivating both Silicon Valley and Beltway insight. If technology can be integrated, it likely will be — that is how the defense contracting world keeps its bread buttered. One should expect the same in these critical domains. As a result, our goal is to diversify both the participants in the discussion of U.S. NC3 and their knowledge of deep learning techniques.

Conclusion

The various novel artificial intelligence techniques likely under consideration to improve America’s NC3 systems, while certainly entailing risk, also hold the possibility of generous rewards. These include, but are not limited to: identification of false positives and negatives, various data-processing and analysis efficiencies and thus increased productivity, drawing previously inconceivable connections, enhancing decision aids, and identifying anomalous activity.

However, great care should be taken in considering the degree to which deep learning is integrated into future NC3 systems, including where exactly within the broad enterprise novel AI techniques induce the least amount of critical — or unacceptable — risk. Deep-learning experts should take the opportunity to weigh in on what appears to be an intense internal debate over how novel techniques may be integrated into 21st-century nuclear command, control, and communications. As the discussion of how to shape the world that is coming moves forward, it must also shift toward accepting that, in some ways, we are already there.

Philip Reiner is the Chief Executive Officer of Bay Area-based Technology for Global Security. He is a former Pentagon civil servant who worked for almost a decade in the Office of the Under Secretary of Defense for Policy, concluding his federal service with four years detailed to the National Security Council staff. His last role in federal service was as President Obama’s Senior Director for South Asia. He is a proud member of the SAIS mafia and a citizen of Oakland, CA, where he lives with his wife and two little girls.

Alexa Wehsener is a Research Analyst at Technology for Global Security. She currently focuses most of her work on U.S. NC3. Alexa has worked at the National Consortium for the Study of Terrorism and Responses to Terrorism (START) and has contributed to the work of UC Berkeley’s Nuclear Policy Working Group.

Image: U.S. Navy (Photo by Mass Communication Specialist 2nd Class Nathan K. Serpico)