26 Feb 2016

The Inevitable Militarization of Artificial Intelligence



2015 proved a watershed year for artificial intelligence (AI) systems. Such advanced computing innovations can power autonomous weapons capable of identifying and striking hostile targets. AI researchers have expressed serious concerns about the catastrophic consequences of such military applications. Department of Defense (DoD) policy forbids the use of autonomous weapons for targeting human beings. At the same time, advances in remotely operated weapons like drones have geographically separated decision-makers from their weapons at distances measured in thousands of miles. This paper explores how advances in remotely piloted aircraft (RPA) and evolving cyber threats converge to create a considerable incentive to field autonomous weapons. To retain human executive control, military operators rely on communications links with semi-autonomous systems like RPA. As adversaries develop anti-access/area denial operational approaches, they will field new electronic and cyber capabilities to undermine the US military’s technological superiority. The data link between RPA and human beings is vulnerable to disruption. Cyber threats against RPA systems will entice militaries to develop autonomous weapon systems that can accomplish their missions without human supervision.
Introduction to autonomous weapons

In January 2015, Bill Gates observed robotics and artificial intelligence (AI) are entering a period of rapid advances.[1] AI technologies will fundamentally change how humans move and communicate.[2] These innovations enable autonomous systems to perform tasks or functions on their own. For example, Google, Apple, and Microsoft are competing to transform vehicle transport with self-driving vehicles.[3] In manufacturing, autonomous production enables companies to adapt products to diverse consumer markets.[4] AI helps city governments manage critical infrastructure and essential services.[5] In 2015, an AI system leveraged “deep learning” to teach itself chess and achieve master-level proficiency in 72 hours.[6] AI “chatbots” power conversations between humans and machines.[7] Companies like Google and Facebook are designing chatbots that make decisions for users about commercial activities like shopping and travel arrangements.[8] Microsoft AI researcher Eric Horvitz expects humanity “to be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”[9] Such innovations indelibly impact military affairs.

In its future operating concept, the US Army predicts autonomous or semiautonomous systems will “increase lethality, improve protection, and extend Soldiers’ and units’ reach.”[10] Moreover, the Army expects autonomous systems to render obsolete “the need for constant Soldier input required in current systems.”[11] The Army expects AI to augment decision-making on the battlefield. In the not-too-distant future, autonomous weapons will fundamentally change the ways humans fight wars.[12] Over thirty advanced militaries already employ human-supervised autonomous weapons, such as missile defense, counterbattery, and active protection systems.[13] For intelligence, surveillance, and reconnaissance (ISR), the US Air Force is developing autonomous systems that collect, process, and analyze information and even generate intelligence.[14]

Today, US policy forbids the military from using autonomous weapons to target human beings.[15] DoD policy also mandates humans retain executive control over weapon systems.[16] Nevertheless, AI innovations will soon enable autonomous weapons that “once activated, can select and engage targets without further intervention by a human operator.”[17] In July 2015, a group of AI scientists stated autonomous weapons are “feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”[18] Consequently, today’s constraints on autonomous weapons may prove too restrictive as America’s adversaries race forward in the next era of military affairs.
Alarm over autonomous weapons

In 1956, an Army Signal Corps officer wrote, “Since modern man has changed but little over the centuries, physiologically speaking, the improvement and development of the battle team has centered about new weapons, improved materiel, and better communications – more effective ways to shoot, move, and communicate.”[19] Autonomous systems have already begun changing the ways people move and communicate. Accordingly, some of the world’s top innovators worry AI will dangerously alter the way militaries shoot. In a December 2014 interview, Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.”[20] Notwithstanding arguments against militarizing AI, autonomous weapons will prove too enticing for the world’s militaries.

In December 2015, Elon Musk of Tesla, PayPal, and SpaceX fame announced a new non-profit called OpenAI. With a billion-dollar endowment, the organization plans to pioneer developments in AI and deep learning.[21] Idealistically, OpenAI hopes “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”[22] In this light, they “believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”[23] The San Francisco non-profit seeks to keep AI from harmful applications.

The announcement of Elon Musk’s OpenAI seemed all the more remarkable given recent alarm over military applications of AI research. In October 2014, the Massachusetts Institute of Technology (MIT) hosted a symposium about future innovations.[24] During a featured session, Elon Musk warned, “If I were to guess like what our biggest existential threat is, it’s probably [artificial intelligence].”[25] On 28 July 2015, thousands of AI scientists signed an open letter warning about the catastrophic risks from autonomous AI weapons.[26] The scientists expressed:


Unlike nuclear weapons, [AI weapons] require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.[27]

The scientists directly compare AI weapons to nuclear ones. In this way, 2010-2020 is analogous to 1940-1950, the dawn of the atomic age and the nuclear arms race. Admittedly, such analogical reasoning can prove inadequate.[28] Yet, the Manhattan Project and current AI research generate similar ethical debates.

In May 1944, Manhattan Project atomic scientist Niels Bohr wrote a letter to Prime Minister Winston Churchill. The scientist anticipated atomic weapons would fundamentally change human history with “devastating power far beyond any previous possibilities and imagination.”[29] In July 1944, Bohr sent a memorandum to President Franklin Roosevelt expressing concern that atomic weapons would become “a perpetual menace to human security.”[30] On 9 June 1950, Bohr presented a letter to the United Nations recounting the Manhattan Project: “Everyone associated with the atomic energy project was, of course, conscious of the serious problems which would confront humanity once the enterprise was accomplished.”[31] Ultimately, atomic researchers deemed Allied victory paramount.

Like Bohr, AI scientists have implored the international community to preempt the proliferation of autonomous weapons. In their open letter, the signatories call for “a ban on offensive autonomous weapons beyond meaningful human control.”[32] Importantly, the scientists do not call for the elimination of defensive systems.[33] Yet, the scientists fear an AI arms race will extend the battlefield beyond the control of human beings and generate catastrophe.

The Department of Defense (DoD) seemed to share these concerns in a 2012 policy directive. To minimize unintended consequences, then-Deputy Secretary of Defense Ashton Carter ordered “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[34] Moreover, autonomous systems are barred from targeting humans.[35] Under this constraint, defense researchers are developing ever more sophisticated autonomous systems.
The need for speed in cybersecurity

The 2014 Quadrennial Homeland Security Review (QHSR) warned, “Cyber threats are growing and pose ever-greater concern to our critical infrastructure systems as they become increasingly interdependent.”[36] The 2014 QHSR expects innovations in cyber capabilities to enable the Department of Homeland Security (DHS) to collect, analyze, and share information “at machine speed to block threats in milliseconds instead of the hours or days required today.”[37] The DoD identifies similar cybersecurity objectives.

In December 2015, the Defense Advanced Research Projects Agency (DARPA) asked innovators to develop “technologies for detecting and responding to cyber-attacks on U.S. critical infrastructure, especially those parts essential to DoD mission effectiveness.”[38] DARPA seeks technologies that provide “early warning of impending attacks, situation awareness, network isolation and threat characterization in response to a widespread and persistent cyber-attack on the power grid and its dependent systems.”[39] DARPA wants AI systems to reduce the country’s recovery time from catastrophic cyber attacks.[40]

DoD’s 2015 Cyber Strategy states, “If and when DoD detects indications of hostile activity within its networks, DoD has quick-response capabilities to close or mitigate vulnerabilities and secure its networks and systems. Network defense operations on DoD networks constitute the vast majority of DoD’s operations in cyberspace.”[41] Yet, Army officers Rock Stevens and Michael Weigand assess, “The Army does not have a single entity that tracks discovered issues from initial report through the remediation process to ensure vulnerability resolution in a timely manner.”[42] Simply put, cybersecurity remains an evolving enterprise.

In general, cyber defense follows a conceptual process of detect-react-respond.[43] Microsoft promotes a four-phase cybersecurity model: protect, detect, respond, and recover.[44] Intel uses a protect-detect-correct model.[45] In its cybersecurity framework, the National Institute of Standards and Technology (NIST) proposes a five-phase loop: Identify, Protect, Detect, Respond, and Recover.[46] This emphasis on continuous, accurate, timely response resembles other military decision-making areas like artillery fires, counterterrorism, and counterinsurgency.[47]
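To show the shared shape of these models, the sketch below renders the loop as a minimal event handler in Python. The phase names follow the NIST framework cited above; the event types, severity threshold, and response actions are hypothetical placeholders rather than any agency’s or vendor’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

# The five phases mirror the NIST cybersecurity framework cited above.
# Everything else (event types, threshold, responses) is a hypothetical
# placeholder, not a real defensive system.
class Phase(Enum):
    IDENTIFY = auto()
    PROTECT = auto()
    DETECT = auto()
    RESPOND = auto()
    RECOVER = auto()

@dataclass
class Event:
    source: str       # originating sensor or log feed (illustrative)
    severity: float   # 0.0 (benign) to 1.0 (critical)

def run_defense_cycle(events, severity_threshold=0.7):
    """One pass of the detect-react-respond loop over queued events.

    Only the Detect -> Respond -> Recover portion of the NIST loop is
    exercised here; Identify and Protect happen before monitoring starts.
    """
    phase = Phase.DETECT
    for event in events:
        if event.severity >= severity_threshold:
            phase = Phase.RESPOND   # react: isolate the affected system
            print(f"RESPOND: isolating {event.source} (severity {event.severity})")
            phase = Phase.RECOVER   # restore services, then resume monitoring
            print(f"RECOVER: restoring services touched by {event.source}")
            phase = Phase.DETECT
        else:
            print(f"DETECT: {event.source} below threshold, keep monitoring")
    return phase

# Example run: two low-severity events and one that triggers a response.
run_defense_cycle([Event("web-proxy", 0.2), Event("dns-log", 0.4),
                   Event("scada-gateway", 0.9)])
```

The military interest in “machine speed” response amounts to compressing the time spent in each iteration of this loop.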

Militaries must detect, react, and respond to cyber threats faster than an adversary can adapt. Cyber capabilities enable the US military and its adversaries to influence operations across the land, air, maritime, and space domains. DHS and DARPA cybersecurity priorities demonstrate the seriousness of threat adaptation. Remotely piloted aircraft (RPA), or military drones, offer one manifestation of this technological struggle.
Drones, cybersecurity, and the future of warfare

At the Smithsonian National Air and Space Museum, tourists can visit the first drone to launch a Hellfire air-to-surface missile in combat.[48] After flying reconnaissance missions in the Balkans, the General Atomics-manufactured MQ-1L Predator #3034 received military upgrades to launch missiles.[49] Just after 9/11, the RPA began striking Al-Qaeda targets in Afghanistan.[50] Since then, RPA have become a potent symbol of twenty-first century warfare.

Author Richard Whittle explains, “The Predator opened the door to what is now a drone revolution because it changed the way people thought about unmanned aircraft…This is a new era in aviation, and we as a society need to figure out how we’re going to cope with it.”[51] Stanford University’s Amy Zegart writes, “Drones are going to revolutionize how nations and non-state actors threaten the use of violence. First, they will make low-cost, high-credibility threats possible.”[52] She further explained, “Artificial intelligence and autonomous aerial refueling could remove human limitations even more, enabling drones to keep other drones flying and keep the pressure on for as long as victory takes.”[53] Thus, RPA are a critical system for twenty-first century warfare.

In a 2013-2038 operating concept, the Air Force states the next generation of RPA “must be multi-mission capable, adverse weather capable, net-centric, interoperable and must employ appropriate levels of autonomy.”[54] For these missions, cybersecurity is critical for aerial vehicles. DARPA and Boeing have fielded a new computer language for the unmanned AH-6 “Little Bird” helicopter.[55] Researchers claim the proprietary coding language protects the aircraft against cyberthreats. Similarly, in 2015, Raytheon demonstrated a new cybersecurity system to protect drones from hackers.[56] The Navy is also funding research in “Cyber resiliency for real-time operating systems and the aviation warfare environment.”[57] The future of drones is intertwined with cybersecurity.

Retired Navy captain Mike Walls explains, “A ship-launched cruise missile relies on the ship…[to provide] critical, digital information from its own systems to the cruise missile before launch in order for the missile to hit its target. If either or both of the systems fail, the ship or the cruise missile, then the target is not destroyed.”[58] A reliable communications network, like the ship-to-missile link, also ensures decision-makers retain a cognitive interface with their weapon.

In December 2011, Iranian military forces claimed to have electronically “ambushed” an RQ-170 Sentinel by hijacking the RPA’s guidance system.[59] They asserted the military “spoofed” the GPS signal and tricked the aircraft into landing inside Iran.[60] Although American officials acknowledged the aircraft’s loss, US government sources told journalists the Sentinel drone malfunctioned over Iranian territory.[61] A paper presented at a NATO cybersecurity conference argues either explanation for the incident demonstrates RPA “must be capable of autonomously choosing the right strategy in case of a severe fault to uphold the systems security.”[62] Notably, the authors highlight vulnerabilities in RPA communication links and ground control systems (GCS).[63]
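Whatever the truth of the Iranian claim, GPS spoofing illustrates why autonomous fault handling matters: a receiver will track whichever signal looks authentic. One common mitigation, sketched below as a minimal Python illustration, is to cross-check each GPS fix against an inertial dead-reckoning estimate and reject fixes that diverge too far. The coordinates, the 500-meter threshold, and the flat-earth distance approximation are hypothetical teaching devices, not the Sentinel’s actual avionics logic.

```python
import math

# Hypothetical cross-check between a GPS fix and an inertial (dead-reckoning)
# position estimate. A spoofed GPS signal that slowly "walks" the aircraft off
# course will diverge from the inertial solution; large divergence is flagged.
# All thresholds and positions are illustrative, not real avionics parameters.

def distance_m(p1, p2):
    """Rough planar distance in meters between two (lat, lon) points."""
    dlat = (p2[0] - p1[0]) * 111_000          # ~meters per degree of latitude
    dlon = (p2[1] - p1[1]) * 111_000 * math.cos(math.radians(p1[0]))
    return math.hypot(dlat, dlon)

def gps_plausible(gps_fix, inertial_estimate, max_divergence_m=500.0):
    """Reject the GPS fix if it strays too far from the inertial estimate."""
    return distance_m(gps_fix, inertial_estimate) <= max_divergence_m

# Example: the inertial solution places the aircraft near (32.0, 54.0); a
# spoofed fix several kilometers away is rejected, and the aircraft would
# fall back to inertial navigation rather than follow the hijacked signal.
inertial = (32.000, 54.000)
spoofed_fix = (32.050, 54.000)   # ~5.5 km north of the inertial estimate
print(gps_plausible(spoofed_fix, inertial))  # False
```

Even this simple cross-check requires the aircraft to decide, on its own, which navigation source to trust, which is precisely the autonomous fault handling the NATO paper calls for.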

In a 2012 paper from the American Institute of Aeronautics and Astronautics (AIAA), engineers categorize cybersecurity for unmanned aerial vehicles (UAVs) as Control System Security and Application Logic Security.[64] The authors detail three pathways attackers can use to exploit UAV vulnerabilities: Hardware Attack, Wireless Attack, and Sensor Spoofing.[65] In each of these pathways, adversaries use electronic warfare and cyber capabilities to disrupt RPA operations.
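The AIAA categorization lends itself to a simple machine-readable threat model. The Python sketch below encodes the paper’s two security categories and three attack pathways as plain data; the example vectors attached to each pathway are illustrative guesses of mine, not cases enumerated by the authors.

```python
from dataclasses import dataclass

# The category and pathway names come from the AIAA paper cited above.
# The example vectors are hypothetical illustrations only.
SECURITY_CATEGORIES = ("Control System Security", "Application Logic Security")

@dataclass(frozen=True)
class AttackPathway:
    name: str
    examples: tuple  # hypothetical illustrations of each pathway

PATHWAYS = (
    AttackPathway("Hardware Attack", ("tampered supply chain", "maintenance port access")),
    AttackPathway("Wireless Attack", ("datalink jamming", "command injection over RF")),
    AttackPathway("Sensor Spoofing", ("GPS spoofing", "false radar returns")),
)

for p in PATHWAYS:
    print(f"{p.name}: {', '.join(p.examples)}")
```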

In a 2014 RPA vulnerability analysis, German Army officer André Haider explores the potential threats to RPA. Haider emphasizes, “Current [RPA] systems are not yet fully automated or even autonomous and their control is contingent on uninterrupted communications.”[66] The author assesses, “Possible Electronic Warfare (EW) targets for the adversary include the GCS, RPA, satellites and satellite ground segments.”[67] Haider states NATO networks remain well protected; adversaries face a difficult challenge when attempting to gain entry to RPA systems.[68] Yet, he argues adaptive threats have proven capable of infecting the GCS. The major believes the future cyberthreat against certain aspects of RPA systems remains high.[69]

After reviewing a broad range of RPA threats, Haider concludes, “Achieving higher levels of automation is a prerequisite in enabling many of the recommendations made in this study; however, what is technically possible is not necessarily desirable.”[70] In the spirit of Elon Musk and Stephen Hawking, Haider argues against automating targeted strikes.[71] On the other hand, Haider believes “automated weapon release should be approved for any target that is actively engaging the RPA.”[72] In this way, Haider ascribes to military drones the same self-defense mandate given to manned aircraft.

Haider’s cautious recommendation for autonomous RPA reveals an emerging imperative for twenty-first century warfare. Adversaries are developing anti-access/area denial capabilities to defeat America’s technological superiority. Cyber capabilities are an integral part of any anti-access/area denial operational approach.[73] In its 2012 Joint Operational Access Concept (JOAC), the Joint Staff writes, “[M]any future enemies will seek to contest space control and cyberspace superiority as means to denying operational access to U.S. joint forces.”[74] As RPA prove lethal on the battlefield, adversaries will innovate to defeat them.

Under current policy, an armed drone requires robust human control to launch a strike. Thus, the communications link between RPA and the GCS, the link between weapon and decision-maker, is critical for the US military. A cyber or electronic attack that undermines this connection is a superb enemy tactic. Given such a threat, militaries will build redundancy. Here AI comes to the forefront: militaries will develop autonomous RPA that can complete their missions even if communication links are disrupted.
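To make that incentive concrete, the sketch below contrasts today’s typical lost-link contingency, in which the aircraft autonomously flies a preprogrammed recovery route home, with the mission-continuation behavior this paper anticipates. It is a minimal Python illustration: the mode names, the 120-second grace period, and the transition logic are hypothetical, not any fielded RPA’s actual behavior.

```python
from enum import Enum, auto

# Hypothetical lost-link logic for an RPA. Current practice keeps a human in
# the loop: on link loss the aircraft flies a preprogrammed recovery route.
# The commented-out branch shows the autonomous alternative this paper argues
# cyber threats will incentivize. Timeout and states are illustrative only.
class Mode(Enum):
    LINKED = auto()          # normal remote piloting via the ground station
    LOST_LINK = auto()       # datalink dropped, waiting to reacquire
    RETURN_TO_BASE = auto()  # preprogrammed recovery route

LOST_LINK_TIMEOUT_S = 120    # hypothetical grace period before contingency

def next_mode(mode, link_up, seconds_since_link):
    if link_up:
        return Mode.LINKED
    if mode is Mode.LINKED:
        return Mode.LOST_LINK
    if mode is Mode.LOST_LINK and seconds_since_link >= LOST_LINK_TIMEOUT_S:
        # Current practice: abort the mission and fly home autonomously.
        return Mode.RETURN_TO_BASE
        # Autonomous alternative this paper anticipates (not current policy):
        # return Mode.CONTINUE_MISSION
    return mode

# Example: the link drops, the grace period expires, the aircraft heads home.
mode = Mode.LINKED
mode = next_mode(mode, link_up=False, seconds_since_link=0)    # LOST_LINK
mode = next_mode(mode, link_up=False, seconds_since_link=150)  # RETURN_TO_BASE
print(mode)
```

The policy question is confined to a single branch: whether the contingency after a lost link remains “fly home” or becomes “finish the mission.”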
Conclusion: Is AI redefining the cognitive dimension?

The US military sees the information environment in physical, informational, and cognitive dimensions.[75] Security experts Peter Singer and Allan Friedman explain cyberspace “is defined as much by the cognitive realm as by the physical or digital.”[76] The cognitive dimension includes the sacred responsibility of a military leader to direct lethal force. Brigadier General Jeff Smith argues the cognitive dimension supersedes the information environment and its subordinate cyberspace domain.[77] BG Smith holds the military leader, the human decision-maker, as the centerpiece for military operations. He contends, “[T]he network is the offspring of the leader, provoked by his requirement to exercise influence over operations.”[78] In other words, information communications technologies cannot supplant a leader’s moral obligations.

BG Smith argues the US military must place “cognitive operation at the core of the operational environment: to make the wisdom, judgment, acumen, imagination, instincts, and mental courage…common across all levels of war.”[79] Human dominion over decision-making underwrites BG Smith’s thesis. Yet, AI systems enable autonomous weapons that are isolated from BG Smith’s cognitive operation. Instead of enabling command and control,[80] AI systems can supplant it. Powered by AI, man-made networks, “the offspring of the leader,” will no longer need the supervision of their human creators. This cognitive separation alarms many researchers because it will revolutionize warfare in uncertain ways. Despite current hopes to prevent an AI arms race, electronic and cyber warfare capabilities among adversaries appear poised to drive autonomous weapons development. Like Niels Bohr’s 1944 letters to Churchill and FDR, the 2015 open letter of AI scientists may prove ineffectual in the twenty-first century security environment.

Source: cyberdefensereview