Small Wars Journal

The Importance of History in Technology and Security Policy Analysis

Tue, 12/01/2015 - 12:17pm

Adam Elkus

While many people in military, law enforcement, security, and intelligence professions have always thought of their superiors as overly rote and mechanical, they may very well live to see their boss actually become a machine. In various domains of security and conflict, analysts increasingly predict that machines will take on progressively larger components of human reasoning and decision-making. [1] It is tempting to argue that such technologies have little real precedent and that they will fundamentally revolutionize security and conflict. [2] Certainly disruptive shifts in the basic character of security and conflict have occurred. In particular, immense sociotechnical changes that occurred from the late 1700s to 1945 were perhaps some of the most profound and wide-ranging in human history. [3] That said, what predictions about both technology and security have in common is that they are often embarrassingly wrong. [4]

Nonetheless, futurism tends to be one of the more dominant approaches to technology and security policy analysis. While looking into a crystal ball certainly has its uses, analysts should also consult their history books. History can be useful for policymakers dealing with emerging technologies and security problems; even the policy implications of highly advanced computer technology can be illuminated by historical study. History is not destiny, and policymakers who do not carefully appraise its potential utility can certainly misuse it. [5] But very few complex problems are completely new. Examining the ways in which even highly science fiction-like technologies and their attendant security problems arise from precedents set decades back can help analysts better grasp their problems and potentials today.

This article begins by describing the mechanization of decision-making and some of the concerns that it raises. Afterwards, the article provides an extended case study in past debates over limited rationality in both humans and machines. The article explains how concerns over rationality and decision-making helped fuel a wave of cognitive science and artificial intelligence research in strategy game play, and how conceptions of rationality and intelligence in chess and go differ. Lessons for policy analysis of computers and decision problems are briefly summarized in the concluding paragraphs.

Mechanizing Decision-Making: Trust and Risk

It is often argued that today’s problems with technology and security are fundamentally revolutionary in nature. [6] Closer analysis often reveals problems with sweeping calls to rewrite everything we know about security due to the emergence of particular technologies and allegedly novel threats to security. [7] Additionally, technological futurists often assume that the technologies and concerns they write about are completely new. Historians of technology, however, have traced the origins of many of today’s most advanced technologies (and the concerns that arise from them) to processes that began decades or even centuries ago. [8] Nonetheless, as Arthur C. Clarke once said, technology appears to many as a form of magic rather than a tool engineered by humans. [9] If technology is thought of as magical in general, computers are commonly perceived to be a form of black magic. It does not take long, for example, for understandable concerns about cybersecurity and cyberconflict to morph into hilariously apocalyptic warnings of cyber-doom.  [10] The converse also holds true. Decision-makers often have highly unrealistic expectations about what computers can do in security situations. [11]

Many of these analytical problems stem from historical ignorance. As naval historian B.J. Armstrong has argued, analysts neglect to consult the long-run history of unmanned systems when penning their latest thinkpiece. [12] In general, many futurists are almost comically ignorant of the ways in which computing, artificial intelligence, and robotics have historically been highly intertwined with security affairs. For example, how many big data boosters know that the Metropolis-Hastings algorithm, one of the most prominent statistical techniques in a data scientist’s toolbox, owes its origin to work on the atomic bomb? [13] Do many analysts writing about the supposed security novelties of artificial intelligence grasp how much squabbles over Pentagon money have shaped the evolution of cognitive science and artificial intelligence as fields of study over many decades? [14] Even relatively recent events, such as the role of automation in the USS Vincennes-Iran Air Flight 655 disaster, have been forgotten. [15] That is a shame, as incidents like the Flight 655 disaster have much to teach us about the risks of automating core decision functions. [16]

The accurate observation that futurist concerns and policy analyses about emerging technologies and security problems are often historically illiterate, however, does not negate the seriousness of the problems they seek to solve. In numerous domains, policymakers face questions of how much human decision-making should be augmented or replaced by machines. One need only look at debates over the future economic impact of emerging computing technology to get a sense of how much might change. [17] Much of future human economic activity could very well consist of programs doing business with each other. [18] This “second economy” would involve humans as the principals, rather than the agents, of economic activity. [19] Fleets of coordinated robotic vehicles may replace human traffic as multi-robot systems become more advanced and robust. [20]

Of course, technology brings progress but also problems; excitement over change tends to yield to grim acknowledgement of the complications and risks that inevitably follow. [21] Security policymakers are often more aware of this because the same technology that helps a cop catch a criminal often makes it that much easier for the criminal to escape justice. Nonetheless, government and private sector entities with security responsibilities have often been early adopters of the most advanced emerging technologies and play a key role in bringing them about. [22] Law enforcement, military, cyber, intelligence, and private sector security communities in particular will likely substantially augment or replace human decision-making functions with computers in the near future. [23] Many of these communities already use (and have always used) computing to help them cope with adversarial situations and security management. But as advances in computing continue, more and more components of human decision-making will likely be mechanized.

While much of this trend towards computerizing decision-making may sound like science fiction, chances are you have already experienced its effects. And if you are a Los Angeles area commuter looking to avoid paying a public transit fare, you might have even experienced it firsthand. Take, for example, the partnership between the University of Southern California’s Teamcore Group and Los Angeles law enforcement agencies. [24] Many law enforcement and homeland security problems involve the scheduling of operations and the allocation of scarce resources. Police patrolling is a key example of this problem. [25] Transit security is difficult because stopping even simple crimes such as fare evasion requires finding the most efficient way to use scarce police resources to patrol sprawling metropolitan transit systems. [26] Frequent Small Wars Journal contributor John P. Sullivan and others were part of a project to use an artificial intelligence system to augment existing law enforcement transportation security. [27] The system scheduled randomized police patrols using game theory. [28] Critically, the system was also robust against the everyday interruptions of planned activities that are commonplace in law enforcement. [29]

The Los Angeles transit example is a useful case study of mechanizing decision-making for the following reasons:

  1. The system was designed to perform a component of what used to be a core police function: operational planning and decision-making. As Sullivan and Sid Heal have written, planning and decision-making are some of the hardest challenges in law enforcement and security. Sullivan has focused on the complexities of information, intelligence, and operations in emergency scenarios. [30] Heal has written broadly about the complexities of planning large-scale emergency operations. [31] Hence, the system has mechanized a key part of law enforcement planning and decision-making.
  2. The system is good at something humans are not always good at: randomization. In adversarial situations, randomizing your behavior is important to avoid being exploited by the opponent. However, humans are poor at behaving randomly when they make decisions. [32] A computer faces no such restrictions, and randomized algorithms are a major topic of study in computer science. [33] The system Sullivan and others developed and tested schedules randomized patrols. [34] A minimal illustrative sketch of the underlying idea appears below this list.
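
To make the idea concrete, the following sketch computes a defender's randomized strategy for a toy zero-sum "patrol game." It is purely illustrative and is not the deployed TRUSTS system, which uses richer Stackelberg game models; the station names and payoff values are hypothetical, and the example assumes the standard NumPy and SciPy libraries.

    # A minimal, illustrative sketch: computing a defender's optimal randomized
    # strategy for a toy zero-sum "patrol game." Station names and payoffs are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    # Rows: which station the single patrol covers this shift.
    # Columns: which station a fare evader targets.
    # Entries: the defender's payoff (the value of catching evasion at that station).
    payoffs = np.array([
        [0.9, 0.0, 0.0],   # patrol Station A
        [0.0, 0.6, 0.0],   # patrol Station B
        [0.0, 0.0, 0.3],   # patrol Station C
    ])
    n_rows, n_cols = payoffs.shape

    # Maximin linear program: choose patrol probabilities x and a game value v,
    # maximizing v subject to x . payoffs[:, j] >= v for every attacker choice j.
    # linprog minimizes, so minimize -v over the variables [x_1 ... x_n, v].
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-payoffs.T, np.ones((n_cols, 1))])        # v - x . payoffs[:, j] <= 0
    b_ub = np.zeros(n_cols)
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])  # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_rows + [(None, None)]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    patrol_mix, value = result.x[:-1], result.x[-1]
    print("Randomized patrol strategy:", patrol_mix.round(3))
    print("Guaranteed expected payoff:", round(value, 3))

    # Each shift is then drawn at random from the computed mix, so an observer who
    # knows the process still cannot predict which station will be covered today.
    probs = np.clip(patrol_mix, 0, None)
    rng = np.random.default_rng()
    print("Today's assignment: Station", rng.choice(["A", "B", "C"], p=probs / probs.sum()))

The point of the linear program is simply that the resulting patrol probabilities form a maximin strategy: an observer who learns the scheduling process, but not the daily random draw, cannot gain by concentrating on any particular station.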

In contrast to sci-fi visions, use cases such as public transit security are mundane and practical realizations of emerging intelligent systems technologies. Most people, had they not been directly told that the Los Angeles County law enforcement establishment was using an AI to optimize transit security, would never have known about it in the first place. And as demands for pervasive surveillance rise after the Paris attacks, systems like these will become more commonplace. The only way that policymakers can reconcile public demands for security with the limited pool of analysts, officers, and operators available is either to increase the amount of resources allocated to domestic security or to use technology to make what they already have work better. Given the reality of budget shortfalls in many industrialized Western states, the latter is much more likely than the former.

What To Think About Machines That Think

What’s the downside of mechanizing decision-making? Plenty, as it turns out. As with any complex technical system involved in security matters, the increasingly mechanical nature of security decision-making poses risks as well as opportunities. Scholars of technology and automation are well aware of the unintended consequences of automation for effective decision-making and choice. [35] A bigger problem lies in questions of trust and understanding. As historian Donald MacKenzie argues, the mechanization of reasoning and decision-making inherently raises questions of trust. How do we know that the computers we depend on are dependable? [36] An added layer of complexity is that these computers perform human economic functions by simulating – with varying degrees of fidelity – the reasoning processes of the humans they are meant to replace.

All of this raises the unavoidable question of how we can think about machine reasoning despite it being different from our own in almost every way. The answer – and the problem – can partly be found in something called the intentional stance. Most of the time we think about complex computing systems as if they were other people. We use language such as “the computer won’t work,” implying that our malfunctioning MacBook is a recalcitrant laborer that has gone on strike. In short, we attribute life-like intentions to lifeless tools. One reason we do this is that we often prefer humanizing what are simply layers of mechanical hardware, software, and networks rather than deal forthrightly with the occasionally alienating nature of information technology. [37]

However, it also may just be human nature to attribute mind, purpose, and intent to everything we interact with. We have the ability to ascribe mentalistic attributes such as intentions to others, and this capability emerges at an early age. [38] But what does it mean to do this? Some philosophers and cognitive scientists believe that when we try to predict the behavior of other people, we rely on a folk-psychological construct built on beliefs, desires, and intentions. Beliefs denote what someone else knows about the world, desires what they want to do, and intentions the desires that they are committed to realizing through action. [39]
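
For readers who prefer to see the vocabulary operationalized, the toy sketch below shows how the belief-desire-intention triple maps onto an agent program. It is an illustration only, not any production BDI framework; the class, fields, and method names are invented for this example.

    # A toy illustration of the belief-desire-intention vocabulary in an agent program.
    from dataclasses import dataclass, field

    @dataclass
    class BDIAgent:
        beliefs: dict = field(default_factory=dict)     # what the agent takes to be true
        desires: list = field(default_factory=list)     # outcomes it would like to bring about
        intentions: list = field(default_factory=list)  # desires it has committed to pursuing

        def perceive(self, observation: dict) -> None:
            """Update beliefs with new information about the world."""
            self.beliefs.update(observation)

        def deliberate(self) -> None:
            """Commit to the desires that look achievable given current beliefs."""
            self.intentions = [d for d in self.desires
                               if self.beliefs.get(f"{d}_possible", False)]

        def act(self) -> list:
            """Return the actions implied by the current commitments."""
            return [f"work_towards:{intention}" for intention in self.intentions]

    agent = BDIAgent(desires=["inspect_station_A", "inspect_station_B"])
    agent.perceive({"inspect_station_A_possible": True})
    agent.deliberate()
    print(agent.act())   # ['work_towards:inspect_station_A']

The point is not the code but the vocabulary: an outside observer can predict the agent by tracking what it believes, wants, and has committed to, without ever opening the underlying machinery.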

Some might even argue that we internally simulate others’ minds when we use folk psychology to think about what they are thinking and what they might do. [40] In short, we treat other people as rational agents, where rationality is loosely defined as the ability to take actions that are in one’s own best interests. [41] Because computer programs differ widely from humans in the way they make decisions, this conception of rationality is often used as a way of modeling and predicting the behavior of complex artificial intelligence and robotics systems. We treat artificial agents as rational for lack of a better way to understand them. [42]

Rationality need not entail omniscience. An agent’s rationality can be limited for a variety of reasons while the agent remains rational in a meaningful sense. [43] Finally, while intelligence and rationality are distinct concepts, they are nonetheless closely related. [44] Hence we often automatically assume that the technical system we are dealing with is a rational agent and ascribe human characteristics such as beliefs, desires, and intentions to it. [45] Only its designer will likely fully understand how it works, and even then the designer will likely seek to predict its behavior based on such crude and seemingly unscientific psychological attributions. [46]

One of the biggest problems in understanding the policy implications of mechanizing decision-making is what kind of rational agent we expect our mechanical proxies to be. There is little consensus as to what rationality means. Rationality could refer to the appropriateness of a thought or choice, but it also might refer to the appropriateness of the process that produced a decision. Even more confusingly, rationality might refer to whether the choice yielded an outcome consistent with someone’s preferences or improved their evolutionary fitness. [47]

To most audiences, “rationality” colloquially refers to the appropriateness of an action. The problem is that rationality’s meaning is much more complex. To see how problematic this might be for the regulation of artificial intelligence programs and policy analysis concerning them, consider that all intelligent systems – biological or artificial – are constrained in how they search for good solutions. It is physically impossible for them to consider every possible option or outcome, and they are biased in various ways toward certain regions of the search space they explore when they ponder what to do next. [48] Hence, a definition of rationality that ignores the process of selecting actions and how this process might reflect the environment the system was designed to be successful in is likely not going to be useful for policymakers. [49]

In general, assumptions about rationality and action selection in synthetic intelligence represent a philosophical, scientific, technical, and, above all, legal minefield that policy analysts will nonetheless be forced to traverse as intelligent systems become more prominent in security concerns. [50] However, history can also help us understand the terrain that must be traversed, even if it is unlikely to reveal all of the explosive traps lurking underneath. Evidence for this proposition can be seen in Cold War efforts to think about limitations on human and machine decision-making, which will be explored in the following section.

When we examine the historical record, we can see a confluence between problems of mechanizing decision-making and debates over intelligence and rationality. As historians such as Margaret Boden have observed, the trend towards the mechanization of decision-making, and the view of decision-making as mechanical, has in some ways been in progress for centuries. [51] In order to create a descriptive language that allows us to understand how mechanical creations do things we do, we must first have regarded ourselves as mechanical in some shape or form. Alan Turing, for example, famously analogized a program to a human working out math problems on a scratchpad of paper. [52]

While many of these questions lie at the core of Western science and philosophy, the harsh and unforgiving nature of Cold War demands on political and military decision-making accelerated attempts both to create mechanical minds and to understand rationality and decision behavior via mechanical metaphors for human mind and behavior. [53] We can learn much from both the origins and the long-term effects of these demands. History does not necessarily yield any easy answers. However, it may provide useful questions. The benefit of looking at the history of science’s quest for mechanical minds (and to explain human minds as mechanical) is that we can see how rationality and intelligence have been formalized completely differently depending on what kind of problem, task, and environment is being used to think about them. [54]

Cold War Rationality and Its Discontents

In the Cold War, a new understanding of strategy emerged. It focused less on the appropriateness of the choices made by the President or general and more on the process by which the decision-maker made choices. [55] This image of strategy arose from a problem rooted in the military, intelligence, and security problems of the day: how limited information processing power of both humans and machines could be reconciled with the need for both to perform well under stress. Cold War computers were dramatically limited in computing resources compared to today’s ultra-powerful and fast systems, hence their users had to carefully economize on how they used computing to solve problems. [56] Similarly, applied research in air defense problems and other similar endeavors inspired a new view of human information processing limits in decision-making. [57]

Popular conceptions of Cold War approaches to rationality often envision a group of Dr. Strangelove-like nuclear scientists blathering on about mineshaft gaps while the world awaits destruction. [58] This picture of cold, broadly robotic conceptions of rationality and behavior is not without some foundation. [59] However, it is also a caricature. Contrary to popular stereotypes, there was no one monolithic Cold War understanding of rationality. As Lawrence Freedman observed in a recent survey of Cold War intellectual history, no single conception of rational behavior dominated during that era. Instead, scientists, policymakers, and others often disagreed about what rationality was and which aspects of rationality were important to study. [60]

Why did researchers disagree so much about how to study rationality? The answer partly lies in a larger trend – the movement towards the mechanization of mind in scientific research. Researchers in cognitive science, artificial intelligence, and other similar sciences came to regard machines as capable of replicating human behavior and human behavior as a kind of mechanical process. [61] As many historians of the period observe, much of this research was built on a particular set of metaphors for describing both individual and group behavior. These imaginative metaphors often conceptualized mind and behavior in terms of information, computation, feedback, control, communication, and other broadly machine-like conceptions of mankind. [62]

Because rationality could serve as an explanation of how both biological and machine intelligence made decisions in support of larger goals and purposes, it served as a bridge between often highly complex and diffuse research concerns. [63] A key difference between competing beliefs about rationality, however, may simply have been different choices about how to study it. The economist and strategist Thomas Schelling, for example, constructed his beliefs about strategy and rationality from a combination of history, psychology, and abstract mathematical thought experiments. [64] In contrast, other researchers utilized laboratory experiments and contrived decision problems to generate knowledge about strategic decision-making. [65] Because of this, researchers unsurprisingly arrived at different images of rational behavior.

The Cold War research program, however, was far broader than just rationality. Cold War politics fueled research efforts devoted broadly to understanding human cognition and intelligence. [66] Fears about economic competition with Japan as well as a desire to solve pressing late Cold War military problems also motivated substantial investment in a failed project to solve – once and for all – the problem of building true artificial intelligence. [67] While the effort was unsuccessful, it reflected a larger, longstanding military interest in mechanizing decision-making and building cooperative human-machine systems for security and defense. [68]

In general, the problem that really interested policymakers – crisis decision-making at varying levels of political and military organization in an emergency scenario featuring the Soviet Union and other Eastern bloc states – could not be studied directly. The past provided some clues, but as Bernard Brodie observed in 1949, for every nugget of wisdom there were ten fallacious clichés about the allegedly “timeless” relevance of some military shibboleth. [69] Contemporary confrontations and crisis situations provided natural experiments, but as we know today, much of what researchers believed about these episodes at the time was faulty at best and completely wrong at worst. [70] Hence researchers often turned to games as environments for studying rationality and other related mental and behavioral properties.

Strategy Games, Rationality, and Intelligence

The emergence of both mechanical images of rationality and the mechanization of strategy and decision-making in conflict can be traced to a shared primordial soup: the war game. [71] However, as evidenced by the fact that the Prusso-German kriegsspiel is a variant of the older strategy game chess, the conception of modeling strategy via tabletop game did not start with the military. Chess and its variants and precursors have been played for centuries, and have arguably played a key role in creating a basic understanding of strategy as the art of a general issuing commands to proxies on a top-down map. [72]

Research in artificial intelligence, cognitive science, and the playing of strategy games is an interesting case of how perceptions of intelligence and rationality may diverge even within the same type of environment used to study them experimentally. Games have often been used to develop AI agents by providing a specific task environment in which to develop their behavior. More broadly, games have been useful to cognitive scientists for studying human expertise and intelligence within a limited domain that is nonetheless complex and potentially representative of higher-order intelligence. [73]

Strategy games, long regarded as a pinnacle of intelligent human reasoning and behavior, were used as a laboratory for studying basic cognitive mechanisms in perception, memory, and problem-solving. They were also used to study search, learning, and other fundamental artificial intelligence mechanisms in computational agents. [74] The methods utilized in both disciplines ranged from simulations of human cognitive processes in play to the programming of complete game-playing programs. Research in chess is the most famous example of this broad research program, but many others certainly exist. [75]

Researchers did not just study chess and other games to unlock the mysteries of human intelligence. They also did so as part of the larger research effort to unlock the mysteries of human and machine rationality. Researchers such as Herbert Simon held a strong conviction that humans and computers were alike in that only procedural rationality – the study of the process by which agents make decisions – could explain their problem-solving approaches. Researchers involved in these efforts claimed that results could generalize to the study of bureaucratic and even national decision-making. [76] Given that chess-playing systems and game theory models in social science shared prominent similarities (such as the minimax algorithm for zero-sum game playing), this was not an unreasonable assumption at the time. [77]

However, chess was not the only strategy game domain in which researchers pondered the nature of intelligence and rationality. The chief problem of chess is the enormous number of possible moves that must be considered. Hence procedurally rational systems that play chess represent theories of how to simulate, or at a minimum functionally replicate, the manner in which human experts efficiently store knowledge about the game and search for good moves. Because experts acquire heuristic knowledge (rules of thumb and shortcuts), they only need to consider the most useful options for decisions, not the entire set of options. But this has much to do with the specific characteristics of chess as a strategy game. Other games are different. [78]
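
A minimal sketch can make the procedural-rationality point concrete. The depth-limited minimax search below (using tic-tac-toe as a stand-in for a combinatorially vast game like chess) ranks moves with a cheap heuristic and examines only a handful of candidates at each node, much as an expert considers only a few plausible moves. The heuristic, the breadth and depth limits, and the choice of game are arbitrary decisions made purely for illustration.

    # Illustrative sketch: depth-limited minimax with heuristic move ordering.
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    TOP_K, MAX_DEPTH = 4, 5          # selective search: breadth and depth limits

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def heuristic(move):
        """Cheap rule of thumb: prefer the center, then corners, then edges."""
        return {4: 2, 0: 1, 2: 1, 6: 1, 8: 1}.get(move, 0)

    def minimax(board, player, depth):
        win = winner(board)
        if win:                                  # someone already has three in a row
            return 1 if win == "X" else -1
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves or depth == 0:              # draw, or the search horizon is reached
            return 0
        # Procedural shortcut: keep only the TOP_K heuristically best moves.
        moves = sorted(moves, key=heuristic, reverse=True)[:TOP_K]
        scores = []
        for m in moves:
            child = board[:m] + [player] + board[m + 1:]
            scores.append(minimax(child, "O" if player == "X" else "X", depth - 1))
        return max(scores) if player == "X" else min(scores)

    # The truncated, selective search returns only an approximate judgment
    # of the opening position for "X".
    print(minimax([None] * 9, "X", MAX_DEPTH))

Tightening TOP_K and MAX_DEPTH makes the search cheaper but its judgments less reliable, which is precisely the tradeoff a procedurally rational, resource-limited player must manage.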

The study of computer go stands as an interesting contrast to this conception of procedural rationality. Chess is often cast as a stereotypically Western approach to strategy, while go is allegedly more representative of Eastern ways of strategic behavior. More importantly, go is fundamentally different from chess. [79] Algorithms that have proven successful in go reflect a new, converging understanding of decision-making in artificial intelligence and the behavioral sciences rooted in balancing exploitation and exploration in search. To understand what these two terms mean, consider the commonly used metaphor of someone trying to climb a mountain. [80]

Imagine that you are trying to find a path that leads to the highest point on a mountain range. You have a choice of either sampling new paths that might potentially lead to the goal (without knowing where they will lead) or following paths that you already know take you up the mountain range. Computer go algorithms (usually referred to as Monte Carlo tree search algorithms) have to solve this problem in some shape or form. [81] Just as the impossibly large number of choices to consider in chess was seen as representative of organizational decision-making problems, the exploration/exploitation tradeoff has commonly been used to describe organizational decision problems. [82]
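
The tradeoff can be illustrated with a minimal sketch of the UCB1 selection rule, the same rule at the heart of the UCT family of Monte Carlo tree search algorithms. The "paths" below and their hidden success rates are hypothetical, and only the Python standard library is assumed.

    # A minimal sketch of the exploration/exploitation tradeoff using the UCB1 rule.
    import math
    import random

    paths = {"ridge": 0.3, "gully": 0.5, "switchback": 0.7}   # true (hidden) success rates
    pulls = {name: 0 for name in paths}
    wins = {name: 0.0 for name in paths}

    def ucb1(name, total, c=math.sqrt(2)):
        """Average payoff so far (exploitation) plus an uncertainty bonus (exploration)."""
        if pulls[name] == 0:
            return float("inf")                  # ensure every path is tried at least once
        return wins[name] / pulls[name] + c * math.sqrt(math.log(total) / pulls[name])

    random.seed(0)
    for t in range(1, 1001):
        choice = max(paths, key=lambda name: ucb1(name, t))    # pick the most promising path
        reward = 1.0 if random.random() < paths[choice] else 0.0
        pulls[choice] += 1
        wins[choice] += reward

    # Trials should concentrate on the best path without the others being written off.
    print(pulls)

Each choice balances the average payoff observed so far (exploitation) against an uncertainty bonus that shrinks as a path is tried more often (exploration), so promising-looking paths are favored without the others ever being entirely ignored.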

Finally, one should observe that general game playing research in artificial intelligence – whether the games are traditional board games, computer games, or arcade games – has been a testing ground for AI that can play any arbitrary game given to it. Researchers believe that these environments can help shed light on more general, less narrowly specialized intelligence; after all, Deep Blue could beat Garry Kasparov at chess but would be completely unable to play poker or backgammon. [83] Recent advances in machine learning, for example, have exploited a general game playing environment called the Arcade Learning Environment. [84]

While Cold War problems of rational decision-making under constraint played a prominent role in motivating game-playing research, these security motivations shared room with different beliefs about the justifications for scientists programming computers to move knights and pawns around. Chess was seen as an environment with simple rules yet tremendous complexity for studying higher-order properties of intelligence. [85] To some extent, this raises a larger question of whether games are a valid substitute for the real world at all. Success even in a highly difficult game does not guarantee that we have learned much about how a computer program could model or replace human decision-making in the real world. [86]

Games, after all, are contrived environments played by those who have already been socialized into their rulesets and conventions. However, intelligence, cognition, and rationality in the real world likely emerge from how humans and other animals deal with unstructured and dynamically changing environments. [87] Inspiration for some of today’s most interesting research in cognitive science and artificial intelligence comes from the immense learning and development performed by human infants, not chess grandmasters. [88] Perhaps this is in keeping with Turing’s belief that the best way to produce machine intelligence was to teach it the way a parent instructs a child. [89] However, it also might have confounded researchers in the behavioral and computational sciences expending so much time and energy to capture the complex knowledge and heuristics of chess experts had they known that decades later, many of their successors would be trying to capture the hidden wisdom of babies. [90]

Nonetheless, inasmuch as strategy games captured core aspects of the larger class of what one cognitive scientist dubbed “adversarial problem-solving,” research in how machines and humans played them contributed much to our understanding of how people find solutions under severe constraint. [91] We still have much to learn about how people and machines play even simple adversarial games such as the children’s game of rock-paper-scissors. [92] And we have barely even begun to explore strategy games far more complex than chess or go that are played and enjoyed by millions of gamers worldwide. [93]

Much of this work increasingly dovetails with efforts to create better computer-generated forces for military and security training, simulation, and modeling. [94]

Human After All: Lessons for Policy Analysis

While it is often asserted that current technological problems in security and conflict are unprecedented and that we can learn little from history, this article has used a historical analysis of game-playing research in artificial intelligence and cognitive science to suggest otherwise. There is much we can learn from even the recent past concerning technological problems in security policy. A look at Cold War concerns over rational decision-making and differing conceptions of procedural rationality in game domains suggests that the biggest problems in mechanizing security decision-making lie in understanding whether or not a program is acting efficiently to realize the goals and preferences we have encoded in it. While this presumption is debatable, it also recognizes that both humans and machines are limited in their information processing power and other variegated capacities for finding solutions to complex problems.

Perhaps the most important lesson we can learn is that even the most advanced machines are simply extensions of ourselves. [95] Long before we dreamt of intelligent machines, we described both ourselves and other animals in mechanical and machine-like terms. [96] We have also always relied on machines and technology to augment and extend our humanity, and this capability has been key to the success of human civilization. [97] Even today, the notion of “autonomous” machines ignores the very real role of their human creators and users in practical use-cases. [98] Hence, the most difficult problem of computing in security policy is that computers are at once products of deliberate construction and of our self-perceptions, yet behave remarkably differently than humans do in the same task environments. [99]

However, history suggests that understanding the design assumptions behind how technical artifacts are built can help us better predict, control, and understand them. Computer systems may act rationally, but the definition of rationality used to construct them may vary widely. Because there is a lack of fundamental consensus as to how to abstract and model fundamental aspects of intelligence and rationality in computers, systems will likely differ substantially in how they go about making decisions. [100]

While policymakers may believe that a system of relevance to a security problem will behave in a certain way, pioneering computer and cognitive scientists presciently noted decades ago that no one really knows what a complex computational artifact will do until the program is actually run. [101] Even if computers are incapable of generating real novelty, Turing correctly noted that they are nonetheless always capable of surprise. [102] Or, less romantically, they do what you tell them to do, not necessarily what you want them to do. [103] Computers are “stupid,” and in practice many programs are hostage to the basic assumptions of their programmers. [104]

Before taking on faith any answer or solution the machine supplies, policymakers should ask the scientists and technicians responsible for whatever model or system the security organization relies on what assumptions they have made about the decision environment and how the program copes with it. They should also frequently challenge their own assumptions about what the computers will do. Computers can certainly be tools that help deliver useful answers to problems. They can also be yet another means for policymakers to deceive themselves about whether they are making progress on a problem or making it worse. [105]

In closing, none of this should be taken as an argument that the only problems with computers are policymakers not using them appropriately – although the misuse of technology is sadly a constant in American security history. [106] At the end of the day, even the most intelligent system is no match for genuine human stupidity. But there is nonetheless a concrete reason why fighter pilot John Boyd put people and ideas ahead of machines. [107] Boyd understood something that scientists today increasingly recognize: creativity, innovation, and novelty cannot be programmed. [108] They have to arise organically from interaction with a complex world, and from experimentation and repeated failure. Humans are still far better at this than computers.

Nonetheless, if we are going to bolster the grey matter between our ears with an external brain in the form of computing, we should carefully consider the opportunities, risks, and uncertainties involved in doing so. While we cannot predict the future of fast-moving technologies, we can at least look to the past for guidance, understanding, and continuity.

End Notes

[1] See, for example, Adams, Thomas K. "Future warfare and the decline of human decisionmaking." Parameters 31.4 (2001): 57-71, Royakkers, Lambèr, and Rinie van Est. "A literature review on new robotics: automation from love to war." International Journal of Social Robotics (2015): 1-22, Nourbakhsh, Illah Reza. "The Coming Robot Dystopia." Foreign Affairs 94.4 (2015): 23-28, Helbing, Dirk. "Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies." arXiv preprint arXiv:1504.03751 (2015), Stevens, Tim. "Information warfare: A response to Taddeo." Philosophy & Technology 26.2 (2013): 221-225, Taddeo, Mariarosaria. "Information warfare: a philosophical perspective." Philosophy & Technology 25.1 (2012): 105-120, and Brey, Philip. "Freedom and privacy in ambient intelligence." Ethics and Information Technology 7.3 (2005): 157-166.

[2] Al-Rodhan, Nayef RF. The Politics of Emerging Strategic Technologies: Implications for Geopolitics, Human Enhancement and Human Destiny. Palgrave Macmillan, 2011.

[3] See Cederman, Lars-Erik, T. Warren, and Didier Sornette. "Testing Clausewitz: Nationalism, mass mobilization, and the severity of war." International Organization 65.04 (2011): 605-638, Knox, MacGregor, and Williamson Murray, eds. The Dynamics of Military Revolution, 1300–2050. Cambridge University Press, 2001, Bousquet, Antoine. The scientific way of warfare: Order and chaos on the battlefields of modernity. Cinco Puntos Press, 2009, Gray, Colin S. War, peace and international relations: an introduction to strategic history. Routledge, 2013, and Schneider, James Joseph. The Structure of Strategic Revolution: Total War and the Roots of the Soviet Warfare State. Presidio Press, 1994.

[4] See Brody, Herb. "Great expectations: why predictions go awry." Journal of Consumer Marketing 10.1 (1993): 23-27, and Biddle, Stephen. "The past as prologue: Assessing theories of future warfare." Security Studies 8.1 (1998): 1-74.

[5] Neustadt, Richard E. Thinking in time: The uses of history for decision makers. Simon and Schuster, 2011.

[6] For a recent overview of some of these claims, see McMaster, Herbert R. "On war: lessons to be learned." Survival 50.1 (2008): 19-30.

[7] Rid, Thomas. "Cyber war will not take place." Journal of Strategic Studies 35.1 (2012): 5-32.

[8] See Zaloga, Steven J. Unmanned Aerial Vehicles: Robotic Air Warfare 1917-2007. Vol. 144. Osprey Publishing, 2008, Mahnken, Thomas G. Technology and the American way of war. Columbia University Press, 2008, Mindell, David A. Between human and machine: feedback, control, and computing before cybernetics. Johns Hopkins University Press, 2002, Dyson, George. Darwin among the machines: The evolution of global intelligence. Da Capo Press, 1998, Mirowski, Philip. Machine dreams: Economics becomes a cyborg science. Cambridge University Press, 2002, Edwards, Paul N. The closed world: Computers and the politics of discourse in Cold War America. MIT Press, 1997, and Kline, Ronald R. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Johns Hopkins University Press, 2015.

[9] Binsted, Kim. "Sufficiently advanced technology: using magic to control the world." CHI'00 Extended Abstracts on Human Factors in Computing Systems. ACM, 2000.

[10] See Lawson, Sean. "Beyond Cyber-Doom: Assessing the Limits of Hypothetical Scenarios in the Framing of Cyber-Threats." Journal of Information Technology & Politics 10.1 (2013): 86-103, Dunn Cavelty, Myriam. "From Cyber‐Bombs to Political Fallout: Threat Representations with an Impact in the Cyber‐Security Discourse." International Studies Review 15.1 (2013): 105-122, and Lee, Robert M., and Thomas Rid. "OMG Cyber! Thirteen Reasons Why Hype Makes for Bad Policy." The RUSI Journal 159.5 (2014): 4-12.

[11] See Gilman, Nils. Mandarins of the future: Modernization theory in Cold War America. Johns Hopkins University Press, 2003, Murray, Williamson. "Clausewitz out, computer in: military culture and technological hubris." The National Interest (1997): 57-64, Bousquet, Antoine. "Cyberneticizing the American war machine: science and computers in the Cold War." Cold War History 8.1 (2008): 77-102, Bousquet, Antoine. "Chaoplexic warfare or the future of military organization." International Affairs 84.5 (2008): 915-929, and Lawson, Sean. "Cold War military systems science and the emergence of a nonlinear view of war in the US military." Cold War History 11.3 (2011): 421-440.

[12] Armstrong, B.J. “Unmanned naval warfare: retrospect and prospect,” Armed Forces Journal, 20 December 2013. http://www.armedforcesjournal.com/unmanned-naval-warfare-retrospect-prospect/

[13] See Chib, Siddhartha, and Edward Greenberg. "Understanding the metropolis-hastings algorithm." The American Statistician 49.4 (1995): 327-335, Green, Peter J., et al. "Bayesian computation: a summary of the current state, and samples backwards and forwards." Statistics and Computing 25.4 (2015): 835-862, and Hitchcock, David B. "A history of the Metropolis–Hastings algorithm." The American Statistician 57.4 (2003).

[14] See Edwards, Paul N. "The closed world: Systems discourse, military strategy and post WWII American historical consciousness." AI & society 2.3 (1988): 245-255, Olazaran, Mikel. "A sociological study of the official history of the perceptrons controversy." Social Studies of Science 26.3 (1996): 611-659, Woolsey, Gene. "On Inexpert Systems and Natural Intelligence in Military Operations Research." Interfaces 21.4 (1991): 2-10, Forsyth, Richard S. "The strange story of the Perceptron." Artificial Intelligence Review 4.2 (1990): 147-155, Beusmans, Jack, and Karen Wieckert. "Computing, research, and war: if knowledge is power, where is responsibility?." Communications of the ACM 32.8 (1989): 939-951, and Slayton, Rebecca. "Speaking as scientists: Computer professionals in the Star Wars debate." History and technology 19.4 (2003): 335-364.

[15] Rochlin, Gene I. "Iran Air Flight 655 and the USS Vincennes." Social responses to large technical systems. Springer Netherlands, 1991. 99-125.

[16] For a primer, see Hou, Ming, Simon Banbury, and Catherine Burns. Intelligent Adaptive Systems: An Interaction-centered Design Perspective. CRC Press, 2014.

[17] See Mongillo, Gianluigi, Hanan Shteingart, and Yonatan Loewenstein. "Race Against the Machine." Proceedings of the IEEE 102.4 (2014) and Andreessen, Marc. "Why Software Is Eating The World'." Wall Street Journal, 20 August 2011. http://www.wsj.com/articles/SB10001424053111903480904576512250915629460

[18] Parkes, David C., and Michael P. Wellman. "Economic reasoning and artificial intelligence." Science 349.6245 (2015): 267-272.

[19] Arthur, W. Brian. "The second economy." McKinsey Quarterly 4 (2011): 90-99.

[20] Arai, Tamio, Enrico Pagello, and Lynne E. Parker. "Editorial: Advances in multi-robot systems." IEEE Transactions on robotics and automation 18.5 (2002): 655-661.

[21] Swarm robotics, for example, will almost assuredly come with a host of difficult cybersecurity problems. Higgins, Fiona, Allan Tomlinson, and Keith M. Martin. "Survey on security challenges for swarm robotics." Autonomic and Autonomous Systems, 2009. ICAS'09. Fifth International Conference on. IEEE, 2009.

[22] See, for example Reed, Sidney G., Richard H. Van Atta, and Seymour J. Deitchman. DARPA Technical Accomplishments. An Historical Review of Selected DARPA Projects. Volume 1. No. IDA-P-2192. Institute for Defense Analyses, 1990, and Fuchs, Erica RH. "Rethinking the role of the state in technology development: DARPA and the case for embedded network governance." Research Policy 39.9 (2010): 1133-1147.

[23] See Rovira, Ericka, Kathleen McGarry, and Raja Parasuraman. "Effects of imperfect automation on decision making in a simulated command and control task." Human Factors: The Journal of the Human Factors and Ergonomics Society 49.1 (2007): 76-87, Parasuraman, Raja, Keryl A. Cosenzo, and Ewart De Visser. "Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload." Military Psychology 21.2 (2009): 270, Veverka, Jesse P., and Mark E. Campbell. "Operator decision modeling for intelligence, surveillance and reconnaissance type missions." Systems, Man and Cybernetics, 2005 IEEE International Conference on. Vol. 1. IEEE, 2005, Tyugu, Enn. "Artificial intelligence in cyber defense." Cyber Conflict (ICCC), 2011 3rd International Conference on. IEEE, 2011, Kotenko, Igor, Alexey Konovalov, and Andrey Shorov. "Agent-based modeling and simulation of botnets and botnet defense." Conference on Cyber Conflict. CCD COE Publications. Tallinn, Estonia. 2010, Chen, Hsinchun, and Fei-Yue Wang. "Guest editors' introduction: artificial intelligence for homeland security." Intelligent Systems, IEEE 20.5 (2005): 12-16, Pearsall, Beth. "Predictive policing: The future of law enforcement." National Institute of Justice Journal 266 (2010): 16-19, Vlahos, James. "The department of pre-crime." Scientific American 306.1 (2012): 62-67, Nix, Justin. "Predictive Policing." Critical Issues in Policing: Contemporary Readings (2015): 275, Stroshine, Meghan S. "Technological Innovations in Policing." Critical Issues in Policing: Contemporary Readings 911 (2015): 229, Chen, J. Y. C., and P. I. Terrence. "Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment." Ergonomics 52.8 (2009): 907-920, Brynielsson, Joel, et al. "Development of computerized support tools for intelligence work." Proceedings of the 14th International Command and Control Research and Technology Symposium (14th ICCRTS). 2009, Schum, D. A., et al. "Toward cognitive assistants for complex decision making under uncertainty." Intelligent Decision Technologies 8.3 (2014): 231-250, Tecuci, Gheorghe, et al. "Computational approach and cognitive assistant for evidence-based reasoning in intelligence analysis." International Journal of Intelligent Defence Support Systems 5.2 (2014): 146-172, Phillips, Joshua, Jay Liebowitz, and Kenneth Kisiel. "Modeling the Intelligence Analysis Process for Intelligent User Agent Development." Research and Practice in Human Resource Management 9.1 (2001): 59-73, and Chen, Hsinchun, and Jennifer Xu. "Intelligence and security informatics." Annual review of information science and technology 40.1 (2006): 229-289, Boroomand, Farzam, et al. "Cyber security for smart grid: a human-automation interaction framework." Innovative Smart Grid Technologies Conference Europe (ISGT Europe), 2010 IEEE PES. IEEE, 2010, Wagner, Steffen, et al. "Autonomous, collaborative control for resilient cyber defense (ACCORD)." Self-Adaptive and Self-Organizing Systems Workshops (SASOW), 2012 IEEE Sixth International Conference on. IEEE, 2012, Lin, Herbert. "Lifting the veil on cyber offense." IEEE Security & Privacy 4 (2009): 15-21, Barrett, Edward T. "Warfare in a new domain: The ethics of military cyber-operations." Journal of Military Ethics 12.1 (2013): 4-17, Prescott, Jody M. "Autonomous decision-making processes and the responsible cyber commander." 
Cyber Conflict (CyCon), 2013 5th International Conference on. IEEE, 2013, and Heinl, Caitriona H. "Artificial (intelligent) agents and active cyber defence: Policy implications." Cyber Conflict (CyCon 2014), 2014 6th International Conference On. IEEE, 2014.

[24] See Tambe, Milind. Security and game theory: algorithms, deployed systems, lessons learned. Cambridge University Press, 2011 for an overview.

[25] Delle Fave, Francesco Maria, et al. "Security games in the field: an initial study on a transit system." Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, 2014.

[26] Luber, Samantha, et al. "Game-theoretic patrol strategies for transit systems: the TRUSTS system and its mobile app." Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, 2013.

[27] Yin, Zhengyu, et al. "TRUSTS: Scheduling randomized patrols for fare inspection in transit systems using game theory." AI Magazine 33.4 (2012): 59.

[28] Jiang, Albert Xin, et al. "Towards Optimal Patrol Strategies for Fare Inspection in Transit Systems." AAAI Spring Symposium: Game Theory for Security, Sustainability, and Health. 2012.

[29] Jiang, Albert Xin, et al. "Game-theoretic randomization for security patrolling with dynamic execution uncertainty." Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, 2013.

[30] Sullivan, John P., and James J. Wirtz. "Terrorism early warning and counterterrorism intelligence." International Journal of Intelligence and Counterintelligence 21.1 (2007): 13-25.

[31] Heal, Sid. Sound doctrine: A tactical primer. Lantern Books, 2000.

[32] See Kennedy, William G. "Modelling human behaviour in agent-based models." Agent-based models of geographical systems. Springer Netherlands, 2012. 167-179, and West, Robert L., Christian Lebiere, and Dan J. Bothell. "Cognitive architectures, game playing, and human evolution." Cognition and multi-agent interaction: From cognitive modeling to social simulation (2006): 103-123.

[33] Motwani, Rajeev, and Prabhakar Raghavan. Randomized algorithms. Chapman & Hall/CRC, 2010.

[34] Yin, Zhengyu, et al. "TRUSTS: Scheduling randomized patrols for fare inspection in transit systems using game theory." AI Magazine 33.4 (2012): 59.

[35] See Skitka, Linda J., Kathleen L. Mosier, and Mark Burdick. "Does automation bias decision-making?." International Journal of Human-Computer Studies 51.5 (1999): 991-1006, Mosier, Kathleen L., et al. "Automation bias: Decision making and performance in high-tech cockpits." The International journal of aviation psychology 8.1 (1998): 47-63, Bainbridge, Lisanne. "Ironies of automation." Automatica 19.6 (1983): 775-779, Parasuraman, Raja, and Dietrich H. Manzey. "Complacency and bias in human use of automation: An attentional integration." Human Factors: The Journal of the Human Factors and Ergonomics Society 52.3 (2010): 381-410, and Skitka, Linda J., Kathleen Mosier, and Mark D. Burdick. "Accountability and automation bias." International Journal of Human-Computer Studies 52.4 (2000): 701-717.

[36] MacKenzie, Donald. Mechanizing proof: computing, risk, and trust. MIT Press, 2004.

[37] Rey, PJ. “The Myth of Cyberspace.” The New Inquiry. 13 April 2012. http://thenewinquiry.com/essays/the-myth-of-cyberspace/

[38] Gergely, György, et al. "Taking the intentional stance at 12 months of age." Cognition 56.2 (1995): 165-193.

[39] Georgeff, Michael, et al. "The belief-desire-intention model of agency." Intelligent Agents V: Agents Theories, Architectures, and Languages. Springer Berlin Heidelberg, 1999. 1-10.

[40] Gordon, Robert. "Folk psychology as simulation." Mind and language 1.2 (1986): 158-171.

[41] Wooldridge, Michael J. Reasoning about rational agents. MIT press, 2000.

[42] Verschure, Paul FMJ, and Philipp Althaus. "A real-world rational agent: unifying old and new AI." Cognitive science 27.4 (2003): 561-590.

[43] Russell, Stuart Jonathan, and Eric Wefald. Do the right thing: studies in limited rationality. MIT press, 1991.

[44] See Russell, Stuart J. "Rationality and intelligence." Artificial intelligence 94.1 (1997): 57-77, and Stanovich, Keith E. "On the distinction between rationality and intelligence: Implications for understanding individual differences in reasoning." The Oxford handbook of thinking and reasoning (2012): 343-365.

[45] Gallagher, Helen L., et al. "Imaging the intentional stance in a competitive game." Neuroimage 16.3 (2002): 814-821.

[46] Dennett, Daniel C. "Intentional systems." The Journal of Philosophy 68.4 (1971): 87-106, and Dennett, Daniel C. "Précis of the intentional stance." Behavioral and brain sciences 11.03 (1988): 495-505.

[47] McFarland, David. Guilty robots, happy dogs: the question of alien minds. Oxford University Press, 2008.

[48] Brom, Cyril, and Joanna Bryson. "Action selection for intelligent systems." European Network for the Advancement of Artificial Cognitive Systems (2006).

[49] Bryson, Joanna Joy. Intelligence by design: principles of modularity and coordination for engineering complex adaptive agents. Diss. Massachusetts Institute of Technology, 2001, and Bryson, J. J. "Action Selection and Individuation in Agent Based Modelling." Proceedings of Agent 2003: Challenges in Social Simulation (2003): 317-330.

[50] See Bryson, Joanna. "Cross-paradigm analysis of autonomous agent architecture." Journal of Experimental & Theoretical Artificial Intelligence 12.2 (2000): 165-189, and Sturm, Thomas. "The “rationality wars” in psychology: Where they are and where they could go." Inquiry 55.1 (2012): 66-81.

[51] See Boden, Margaret Ann. Mind as machine: A history of cognitive science. Oxford University Press, 2006 as well as Husbands, Philip, Owen Holland, and Michael Wheeler. The mechanical mind in history. MIT Press, 2008.

[52] See Shanker, Stuart G. Wittgenstein's Remarks on the Foundations of AI. Routledge, 2002, and Copeland, B. Jack. "The broad conception of computation." American Behavioral Scientist 40.6 (1997): 690-716.

[53] There is an enormous set of literature on this. Good summaries of the literature may be found in Gilman, Nils. "The cold war as intellectual force field." Modern Intellectual History (2014): 1-17 and Freedman, Lawrence. "Social Science and the Cold War." Journal of Strategic Studies ahead-of-print (2015): 1-21.

[54] Sun, Ron. "Prolegomena to integrating cognitive modeling and social simulation." Cognition and multi-agent interaction: from cognitive modeling to social simulation (2006): 3-26.

[55] See Edwards, Paul N. The closed world: Computers and the politics of discourse in Cold War America. MIT Press, 1997, Erickson, Paul, et al. How reason almost lost its mind: The strange career of Cold War rationality. University of Chicago Press, 2013, Mirowski, Philip. Machine dreams: Economics becomes a cyborg science. Cambridge University Press, 2002, and Heyck, Hunter. Age of System: Understanding the Development of Modern Social Science. Johns Hopkins University Press, 2015.

[56] Erickson, Paul, et al. How reason almost lost its mind: The strange career of Cold War rationality. University of Chicago Press, 2013.

[57] Hounshell, David. "The Cold War, RAND, and the generation of knowledge, 1946-1962." Historical Studies in the Physical and Biological Sciences (1997): 237-267, Barros, Gustavo. "Herbert A. Simon and the concept of rationality: boundaries and procedures." Revista de Economia Política 30.3 (2010): 455-472, Sent, Esther-Mirjam. "Herbert A. Simon as a cyborg scientist." Perspectives on Science 8.4 (2000): 380-406.

[58] Griffith, Robert. "The cultural turn in Cold War studies." Reviews in American history 29.1 (2001): 150-157.

[59] Galison, Peter. "The ontology of the enemy: Norbert Wiener and the cybernetic vision." Critical inquiry (1994): 228-266.

[60] Freedman, Lawrence. "Social Science and the Cold War." Journal of Strategic Studies ahead-of-print (2015): 1-21.

[61] See Boden, Margaret Ann. Mind as machine: A history of cognitive science. Oxford University Press, 2006, Varela, Francisco J., and Jean-Pierre Dupuy, eds. Understanding Origins: Contemporary Views on the Origins of Life, Mind and Society. Springer Science & Business Media, 2013, and Franchi, Stefano, and Güven Güzeldere. Mechanical bodies, computational minds: artificial intelligence from automata to cyborgs. MIT Press, 2005.

[62] Edwards, Paul N. The closed world: Computers and the politics of discourse in Cold War America. MIT Press, 1997.

[63] Simon, Herbert A. "Rationality as process and as product of thought." The American economic review (1978): 1-16, Simon, Herbert A. "Invariants of human behavior." Annual review of psychology 41.1 (1990): 1-20, and Simon, Herbert A. The sciences of the artificial. MIT Press, 1996.

[64] Ayson, Robert. Thomas Schelling and the nuclear age: strategy as social science. Routledge, 2004.

[65] Erickson, Paul, et al. How reason almost lost its mind: The strange career of Cold War rationality. University of Chicago Press, 2013.

[66] Cohen-Cole, Jamie. The open mind: Cold War politics and the sciences of human nature. University of Chicago Press, 2014.

[67] Roland, Alex, and Philip Shiman. Strategic computing: DARPA and the quest for machine intelligence, 1983-1993. MIT Press, 2002.

[68] See Wallis, W. Allen. "The statistical research group, 1942–1945." Journal of the American Statistical Association 75.370 (1980): 320-330, and Licklider, Joseph Carl Robnett. "Man-computer symbiosis." Human Factors in Electronics, IRE Transactions on 1 (1960): 4-11.

[69] Brodie, Bernard. "Strategy as a Science." World Politics 1.04 (1949): 467-488.

[70] Gaddis, John Lewis. We now know: rethinking Cold War history. Oxford University Press, 1997.

[71] Von Hilgers, Philipp. War games: a history of war on paper. MIT Press, 2012.

[72] Nielsen, Jon Lau, et al. "AI for General Strategy Game Playing." Handbook of Digital Games (2014): 274-304.

[73] See Gobet, Fernand, Jean Retschitzki, and Alex de Voogt. Moves in mind: The psychology of board games. Psychology Press, 2004, Schaeffer, Jonathan, and H. Jaap Van den Herik. "Games, computers, and artificial intelligence." Artificial Intelligence 134.1 (2002): 1-7, Schaeffer, Jonathan. "The games computers (and people) play." Advances in computers 52 (2000): 189-266, Schaeffer, Jonathan, and H. Jaap van den Herik. Chips challenging champions: Games, computers and artificial intelligence. Elsevier, 2002, and Charness, Neil. "Expertise in chess and bridge." Complex information processing: The impact of Herbert A. Simon (1989): 183-208.

[74] Ensmenger, Nathan. "Is chess the drosophila of artificial intelligence? A social history of an algorithm." Social Studies of Science 42.1 (2012): 5-30.

[75] See Charness, Neil. "Expertise in chess and bridge." Complex information processing: The impact of Herbert A. Simon (1989): 183-208 and Thompson, Ken. Computers, Chess, and Cognition. Eds. T. Anthony Marsland, and Jonathan Schaeffer. Springer Science & Business Media, 2012.

[76] Simon, Herbert A., and Jonathan Schaeffer. The game of chess. Report AIP-105. Carnegie Mellon University, Artificial Intelligence and Psychology Project, 1990.

[77] See Leonard, Robert. Von Neumann, Morgenstern, and the creation of game theory: From chess to social science, 1900–1960. Cambridge University Press, 2010, Ensmenger, Nathan. "Is chess the drosophila of artificial intelligence? A social history of an algorithm." Social Studies of Science 42.1 (2012): 5-30, and Charness, Neil. "The impact of chess research on cognitive science." Psychological research 54.1 (1992): 4-9.

[78] Newell, Allen, John Calman Shaw, and Herbert Alexander Simon. "Chess-playing programs and the problem of complexity." IBM Journal of Research and Development 2.4 (1958): 320-335.

[79] Lai, David. Learning from the Stones: A Go Approach to Mastering China's Strategic Concept, Shi. Strategic Studies Institute, 2004.

[80] For example, see Derrida, Bernard, and Luca Peliti. "Evolution in a flat fitness landscape." Bulletin of Mathematical Biology 53.3 (1991): 355-382.

[81] Gershman, Samuel J., Eric J. Horvitz, and Joshua B. Tenenbaum. "Computational rationality: A converging paradigm for intelligence in brains, minds, and machines." Science 349.6245 (2015): 273-278.

[82] Laureiro‐Martínez, Daniella, et al. "Understanding the exploration–exploitation dilemma: An fMRI study of attention control and decision‐making performance." Strategic Management Journal 36.3 (2015): 319-338.

[83] See Genesereth, Michael, Nathaniel Love, and Barney Pell. "General game playing: Overview of the AAAI competition." AI Magazine 26.2 (2005): 62, Levine, John, et al. "General Video Game Playing." Artificial and Computational Intelligence in Games 6 (2013): 77-83, and Bellemare, Marc G., et al. "The arcade learning environment: An evaluation platform for general agents." arXiv preprint arXiv:1207.4708 (2012).

[84] Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.

[85] McCarthy, John. "Chess as the Drosophila of AI." Computers, chess, and cognition. Springer New York, 1990. 227-237.

[86] Brooks, Rodney A. "Elephants don't play chess." Robotics and autonomous systems 6.1 (1990): 3-15.

[87] Weng, Juyang. "Developmental robotics: Theory and experiments." International Journal of Humanoid Robotics 1.02 (2004): 199-236.

[88] Asada, Minoru, et al. "Cognitive developmental robotics: a survey." Autonomous Mental Development, IEEE Transactions on 1.1 (2009): 12-34, Lungarella, Max, et al. "Developmental robotics: a survey." Connection Science 15.4 (2003): 151-190, and Minato, Takashi, et al. "CB2: A child robot with biomimetic body for cognitive developmental robotics." Humanoid Robots, 2007 7th IEEE-RAS International Conference on. IEEE, 2007.

[89] Nilsson, Nils J. "Human-level artificial intelligence? Be serious!" AI Magazine 26.4 (2005): 68.

[90] Cangelosi, Angelo, Matthew Schlesinger, and Linda B. Smith. Developmental robotics: From babies to robots. MIT Press, 2015.

[91] Thagard, Paul. "Adversarial problem solving: Modeling an opponent using explanatory coherence." Cognitive Science 16.1 (1992): 123-149.

[92] See West, Robert L., et al. "Stochastic resonance in human cognition: ACT-R versus game theory, associative neural networks, recursive neural networks, q-learning, and humans." 27th annual meeting of the cognitive science society. 2005, and West, Robert L., Christian Lebiere, and Dan J. Bothell. "Cognitive architectures, game playing, and human evolution." Cognition and multi-agent interaction: From cognitive modeling to social simulation (2006): 103-123.

[93] See Lewis, Joshua M., Patrick Trinh, and David Kirsh. "A corpus analysis of strategy video game play in Starcraft: Brood war." Proceedings of the 33rd annual conference of the cognitive science society. 2011, and Robertson, Glen, and Ian Watson. "A review of real-time strategy game AI." AI Magazine 35.4 (2014): 75-104.

[94] See Laird, John E. "An exploration into computer games and computer generated forces." The 8th Conference on Computer Generated Forces & Behavior Representation, Orlando, Florida, May 2000, Laird, John, and Michael VanLent. "Human-level AI's killer application: Interactive computer games." AI Magazine 22.2 (2001): 15, Herz, Jessie Cameron, and Michael R. Macedonia. "Computer games and the military: Two views." Defense Horizons 11-12 (2002): 1, Buro, Michael. "Real-time strategy games: A new AI research challenge." IJCAI. 2003, Buro, Michael, and Timothy Furtak. "RTS games as test-bed for real-time AI research." Proceedings of the 7th Joint Conference on Information Science (JCIS 2003). 2003, and Buro, Michael, and Timothy Furtak. "RTS games and real-time AI research." Proceedings of the Behavior Representation in Modeling and Simulation Conference (BRIMS). Vol. 6370. 2004.

[95] Boden, Margaret A. Artificial intelligence and natural man. Harvester Press, 1977.

[96] See Dupuy, Jean-Pierre. On the origins of cognitive science. A Bradford Book, 2009, and Boden, Margaret A. "Artificial intelligence and animal psychology." New Ideas in Psychology 1.1 (1983): 11-33.

[97] See Clark, Andy. Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford University Press, 2004, Hutchins, Edwin. Cognition in the Wild. MIT Press, 1995, Turkle, Sherry. Evocative objects: Things we think with. MIT Press, 2011, Gill, Karamjit S., ed. Human machine symbiosis: the foundations of human-centred systems design. Springer Science & Business Media, 2012, Licklider, Joseph Carl Robnett. "Man-computer symbiosis." Human Factors in Electronics, IRE Transactions on 1 (1960): 4-11, and Licklider, Joseph Carl Robnett, and Welden E. Clark. "On-line man-computer communication." Proceedings of the May 1-3, 1962, Spring Joint Computer Conference. ACM, 1962.

[98] Goldberg, Ken. "Robotics: Countering singularity sensationalism." Nature 526.7573 (2015): 320-321.

[99] See Ekbia, Hamid Reza. Artificial dreams: The quest for non-biological intelligence. Cambridge University Press, 2008, and Koch, Christof. "Do Androids Dream?" Scientific American Mind 26.6 (2015): 24-27.

[100] See Bryson, J. J. "Action Selection and Individuation in Agent Based Modelling." Proceedings of Agent 2003: Challenges in Social Simulation (2003): 317-330, Lehman, Joel, Jeff Clune, and Sebastian Risi. "An Anarchy of Methods: Current Trends in How Intelligence Is Abstracted in AI." Intelligent Systems, IEEE 29.6 (2014): 56-62, McFarland, David. Guilty robots, happy dogs: the question of alien minds. Oxford University Press, 2008, and McFarland, David, and Tom Bösser. Intelligent behavior in animals and robots. MIT Press, 1993.

[101] Newell, Allen, and Herbert A. Simon. "Computer science as empirical inquiry: Symbols and search." Communications of the ACM 19.3 (1976): 113-126.

[102] See, for example, Duch, Wlodzislaw. "Intuition, insight, imagination and creativity." Computational Intelligence Magazine, IEEE 2.3 (2007): 40-52, Turing, Alan M. "Computing machinery and intelligence." Mind (1950): 433-460, and Michie, Donald. "Experiments on the Mechanization of Game-learning. 2—Rule-Based Learning and the Human Window." The Computer Journal 25.1 (1982): 105-113.

[103] McDermott, Drew. "Artificial intelligence meets natural stupidity." ACM SIGART Bulletin 57 (1976): 4-9.

[104] See Auerbach, David. “The stupidity of computers.” N+1, Winter 2012, https://nplusonemag.com/issue-13/essays/stupidity-of-computers/, and Norvig, Peter, and David Cohn. "Adaptive software." PC AI 11.1 (1997): 27-30.

[105] Gibson, James William. The perfect war: Technowar in Vietnam. Atlantic Monthly Press, 2000.

[106] Correll, John T., and John T. McNaughton. "Igloo White." Air Force Magazine 87.11 (2004): 58.

[107] Osinga, Frans PB. Science, strategy and war: The strategic theory of John Boyd. Routledge, 2007.

[108] Stanley, Kenneth O., and Joel Lehman. Why greatness cannot be planned: the myth of the objective. Springer, 2015.

About the Author(s)

Adam Elkus is a PhD student in Computational Social Science at George Mason University. He has published articles on defense, international security, and technology at Small Wars Journal, CTOVision, The Atlantic, the West Point Combating Terrorism Center’s Sentinel, and Foreign Policy.

Comments

ctownsend

Wed, 12/02/2015 - 4:04am

This was a really great read. I wonder how a machine would have coped with the uncertainty faced by Stanislav Petrov in 1983.