File photo at Aberdeen Proving Ground (DOD / Marv Lynchard)

How to Plan for the Coming Era of Human-Machine Teaming

Militaries must understand the different forms it will take, and how it will change just about everything they do.

In 1899, diplomats from the world’s leading military powers convened in The Hague for a peace conference. One of the outcomes of the conference was a five-year moratorium on offensive military uses of aircraft. But countries soon began to see the potential of aerial warfare, and delegates to the second Hague conference in 1907 declined to renew the moratorium. Within a few decades, aerospace technology became nearly synonymous with military power. It is highly probable that robotics and artificial intelligence will travel a similar path.

Indeed, militaries are already exploring how to organize better human-machine teams, which promise to boost individual and team performance, reduce threats to humans, enable new operating concepts, and ultimately strengthen national power. Such teams will even change how militaries do things far from the battlefield, from institutional strategic planning to recruiting and training to procurement and logistics.

To best understand and plan for this new era, militaries should pursue three lines of effort: human-robot teaming, human-AI teaming, and human augmentation.

Human-robot teams. These will have broad application across military organizations. At the enterprise level, such teams would be useful in military training establishments, where they could deliver consistent training outcomes and serve as testbeds for best practices in developing human-robot tasking relationships. In day-to-day logistics, robots are likely to have high utility in performing tasks that roboticists have long classified as “dull, dirty, and dangerous,” such as vehicle maintenance and repair and basic movement tasks.

Human-AI teaming. The key driver for the use of AI in the military is the convergence of large numbers of advanced sensors, extensive communication links, and a growing flow of information. The slowest element in decision-making is becoming the human decision-maker. AI-enabled automated systems offer the potential to relieve this bottleneck. But even if humans and AI are suitably paired, the speed at which AI is developing means that more and more functions will move beyond human comprehension and may have to be delegated to autonomous systems.

Human augmentation. This area is the ultimate expression of the human-machine revolution. It is an extension of centuries of human endeavor in which people sought to become faster, stronger, and smarter through the use of tools and machines. The U.S. military has been a major investor in this field, leading a variety of research projects that seek to optimize human fighting capacities. DARPA’s Accelerated Learning program, for example, seeks to apply the best practices of learning as demonstrated by neuroscience and statistical modeling.

But there are significant challenges to overcome, which might be classified as strategic, institutional, or tactical issues.

Strategically, it will be important to decide who, or what, makes the decision for a robot to kill. Some have argued that delegating the decision to target and open fire to a machine violates human dignity, and that people have the “right not to be killed by a machine.” Others, accepting that autonomous systems and artificial intelligence are playing ever-larger roles in war and society, have proposed concurrent efforts to develop the technology and examine the ethics of these systems. But there are other strategic challenges. These include the impact on strategy development, the “barrier to entry” for conflict, civil-military relations, rules of engagement, and establishing responsibility for the actions of robots and advanced algorithms. And while potential adversaries may not demonstrate similar concerns, the nature of Western democratic society demands that they be addressed in parallel with the technological aspects of human-machine integration.

The second area of challenges is institutional. The integration of humans and machines in more tightly coupled warfighting systems will demand examination of how military organizations fight. But other institutional challenges loom. These include institutional culture, training and education, the design and implementation of secure networks, the establishment of common goals in human-machine training, and potentially the wholesale redesign of military career paths as some specialties are replaced and new ones created.

Finally, there are tactical challenges. Even as AI promises to help decision-makers deal with floods of data, the addition of robots to combat units may increase the cognitive loads on other personnel. For example, an infantry soldier might be responsible for several air and ground unmanned vehicles while also operating as part of a human team. Under normal circumstances, this will be demanding; in combat, it will place severe cognitive load on the soldier. Other tactical challenges are also apparent, including consent by personnel to the risk of working with potentially malfunctioning machines, the level of authority delegated to machines, and human endurance in human-machine teams.

Given all this, it is clear that a military that seeks to employ human-machine teams must achieve an effective convergence of technology, operating concepts, and new organizations. In the short term (the next five years), research, experimentation, and planning will be critical. Military force designers and strategic planners should focus on monitoring, designing, and experimenting. This will inform longer-term goals and the prioritization of resources in the next period, to 2030 or so, when military forces will focus on deciding, investing, and shaping. Early in the next decade, military forces will need to make substantial decisions about their future: the shape, size, and composition of the joint force; the balance of combat and non-combat workforces; and how training and education systems must adapt to the new human-machine construct.

Finally, various military organizations will approach the challenges of human-machine teaming differently. Variations in military culture, national strategy, and societal expectations will ensure that different institutions arrive at a multitude of solutions. This variety of institutional strategies for human-machine teaming is a good thing: it will allow Western military organizations to share lessons and cross-pollinate best practices.

Maj. Gen. Mick Ryan is the author, most recently, of "Human-Machine Teaming for Future Ground Forces," a new report from the Center for Strategic and Budgetary Assessments.