In April, thought leaders from the Defense Department, the U.S. Army Research Lab, and the Institute for Defense Analyses, along with national security thinkers from across academia, met for a two-week workshop on the next three and a half decades of war. The report they recently produced reads like a Tolkien-esque saga set in the future, a fascinating mashup of futuristic concepts, far-off capabilities, and emergent technologies that play off one another, competing and evolving at hyperspeed. Among the report’s most significant conclusions: a faster “battle rhythm” will increasingly push human beings out of the decision-making loop. The future of war belongs to the bots.
Keep in mind that the authors of the report (“Visualizing the Tactical Ground Battlefield in the Year 2050”) note that their views “do not reflect positions or views of their employers or any organizations with which they are affiliated.” It’s easy to see why. Many of them diverge considerably from standard military talking points.
Consider the use of armed drones. Even as military leaders push to increase autonomy, they insist that a human will always decide when and whether to pull the trigger. (Robots don’t kill you, people do.) The report foresees a slightly different future: humans won’t be entirely cut out of lethal engagements, but they’ll play umpire rather than pitcher. It’s the difference between being “in the loop” and merely being “on the loop” as an observer.
“The difference being that in the former, human decisions are a required step in a process and thus humans are exercising positive control; while in the latter, humans can only observe the behaviors that are taking place (and in some cases the decisions that have been made and the reasons why), but they can only act after the fact or in anticipation of expected behaviors,” says the report, which adds that top-down human control will be replaced by “large-scale self-organization” among swarming robots and human teammates.
Those people, too, will be getting a technological upgrade. “The battlefield of the future will be populated by fewer humans, but these humans would be physically and mentally augmented with enhanced capabilities that improve their ability to sense their environment, make sense of their environment, and interact with one another, as well as with ‘unenhanced humans,’ automated processes, and machines of various kinds,” says the report.
What exactly constitutes an enhanced human is a matter of technical dispute. After all, night-vision goggles represent a type of enhancement, as does armor. The military has no problem discussing future plans in those areas, but what the workshop participants anticipate goes well beyond flak jackets and gear.
Here’s another area where the workshop’s military participants predict a future that conflicts with today’s stated military policy. Talk to officials at the Defense Advanced Research Projects Agency, and they’ll tell you that soldier enhancement involving surgery or genetic manipulation is not an area of active interest. Certainly the prospect raises ethical concerns, and it may pose a number of legal ones as well. As bioethicist Patrick Lin has observed, a permanently enhanced soldier might constitute a new type of weapon that conflicts with international law. Regardless, the military has dabbled in the field, as Noah Shachtman revealed in a 2007 article for Wired.
The report envisions enhancement taking several robotic steps forward. “To enable humans to partner effectively with robots, human team members will be enhanced in a variety of ways. These super humans will feature exoskeletons, possess a variety of implants, and have seamless access to sensing and cognitive enhancements. They may also be the result of genetic engineering. The net result is that they will have enhanced physical capabilities, senses, and cognitive powers. The presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution,” says the report.
Sensors will be “ubiquitous” and that will include “sensors on and inside humans.” But every enhancement introduces new vulnerabilities and attack vectors as well. The report imagines that all of that data could enable enemy forces to monitor U.S. troops’ biophysical signals and possibly even their brain states or decision-making abilities. The report calls this “cognitive modeling,” and notes that this creates great risks and opportunities. “In addition to having the information available to vastly improve individual cognitive modeling, such models offer the opportunity to disrupt adversary organizations and operations in a cost-effective manner,” says the report.
All this recalls the phrase “radical evolution,” a technofuturist term that’s key to understanding the report’s central themes and assumptions. It’s an idea borrowed from inventor and futurist Ray Kurzweil, who is widely credited with the observation that progress in information technology is not linear but exponential: each leap in capability begets two more, which beget four, which beget eight, and so on. When you arrive at the point where you are multiplying very large numbers by other very large numbers, the effect is a rapid explosion in technological capability. Kurzweil argues that computers’ rapid decrease in size and cost has brought us to precisely that moment in history. In his seminal 2001 essay, “The Law of Accelerating Returns,” he writes that “we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate).”
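The arithmetic behind Kurzweil’s “20,000 years” figure can be sketched in a few lines. This is a back-of-the-envelope illustration under assumed parameters (a rate of progress that doubles every decade, with the century’s first decade already averaging twice today’s rate), not the report’s or Kurzweil’s own model:

```python
DECADES = 10          # the ten decades of the 21st century
YEARS_PER_DECADE = 10

# Assumption: decade k proceeds at 2**k times today's rate of progress,
# so each decade delivers 2**k * 10 "today's-rate years" of progress.
progress = sum(2**k * YEARS_PER_DECADE for k in range(1, DECADES + 1))
print(progress)  # 20460 -- roughly the "20,000 years" Kurzweil cites

# Linear (non-accelerating) progress would deliver exactly 100 such years.
```

The point of the contrast: under the doubling assumption, a single century compresses what would otherwise be some twenty millennia of innovation at today’s pace.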
Imaginary or not, that is the future the workshop report explores: one in which humanity can no longer control the rate or the effects of technological progress, at least not in the way we attempt to today. That may prove far more threatening than any particular enemy.