The United States faces genuinely new challenges—but tries to understand them using outmoded theories from a bygone era.
In the past two weeks, escalating hostilities brought the United States to the brink of yet another conflict in the Middle East—this time with Iran. But such a conflict might not look much like the others that American forces have fought in the 21st century.
Tank-on-tank warfare this isn’t. While crises are inherently unpredictable, Iran’s decision on Tuesday to lob missiles at bases housing American troops in Iraq might well be the last of its conventional retaliation for the American air strike that killed General Qassem Soleimani. Future hostilities are more likely to occur in cyberspace than in physical space.
The Soleimani strike is a harbinger in other ways. Historically, targeted killing has been rare as an instrument of war because it has been so difficult technically. The last time the United States killed a major military leader of a foreign power was in World War II, when American forces shot down an airplane carrying the Japanese admiral Isoroku Yamamoto. These killings are unlikely to be so rare in the future. Because drones allow constant surveillance and can strike precise targets, states may credibly threaten so-called decapitation attacks in ways that nobody imagined possible short of all-out nuclear war.
When battlegrounds are growing invisible and leaders can be killed by airplanes without pilots, it’s fair to say that conflict is not what it used to be. The rise of cyberaggression, information warfare, autonomous weapons, and other technologies requires a thorough reevaluation of the coming era, what geopolitics will look like, and the kinds of capabilities that will give nations a strategic advantage against their competitors. Yet the United States still lacks the sort of dominant explanatory framework that can guide American policy regardless of who the president is.
It’s not for lack of trying. Many people have been grappling with how to strengthen America’s national security in an uncertain era. The far-flung outposts of these efforts range from conference rooms on Capitol Hill and offices in suburban-Virginia strip malls to hotel ballrooms and slick boardrooms in Silicon Valley. There are new Pentagon units to harness technological innovation and bipartisan national commissions on cybersecurity and artificial intelligence. (I am an expert adviser for the AI commission.) There are intelligence studies to identify baseline trends and megatrends driving the future of international-security challenges, and think-tank reports and academic workshops on the future of just about everything.
All of these initiatives are seeking to look beyond the anxieties of today to understand the threats of tomorrow. And nearly all of them start with two insights: The first is that we face a “hinge of history” moment. Emerging technologies are poised to transform societies, economies, and politics in dramatic and unprecedented ways. The second is that we need better ideas to make sense of this new world so that American interests and values can prevail.
When one of the big ideas involves calling for more big ideas, you know it’s tough out there. The technological race is challenging, but it is likely to be the easy part. It’s the ideas race—who best understands the levers and opportunities presented by technological disruption and shifts in the world’s political geography—that will determine geopolitical winners and losers. Some strategic insights provide competitive advantage; Russia recognized well before the United States did, for instance, that the rise of social media magnified the impact of information warfare.
Other strategic insights, if widely shared, become invaluable guides to democratic policy making and cooperation, enabling like-minded states to thwart the repression and aggression of authoritarian regimes. How are military strategists and average American voters alike supposed to understand the world now confronting them—and decide which conflicts to undertake and how?
In unsettled moments like the current one, the cost of a conceptual mistake is high. At the end of World War II, the U.S. found itself locked in confrontation with the Soviet Union, a former ally that sought to export its own revolutionary ideology, communist economic system, and repressive governance around the world. American strategists built a foreign policy for the next half century around the strategy of containment developed by George Kennan in his famous 1947 “X” article. A career diplomat and Russia expert, Kennan believed that winning the superpower conflict required, above all, patience. The United States, he argued, should use every element of national power—including economics, diplomacy, and military force—to contain the spread of communism. Eventually, he predicted, the Soviet Union would collapse from its own weaknesses. Every president from Harry Truman to George H. W. Bush pursued containment in various ways. Not every policy worked, and some, like the Bay of Pigs invasion and the Vietnam War, failed disastrously. But Kennan was fundamentally right, and his ideas provided the North Star for Republican and Democratic presidents alike.
But when the Soviet Union imploded in 1991, policy makers were suddenly left without a conceptual blueprint for navigating global politics. In the place of containment, a gauzy optimism took hold. Major threats were considered passé: The end of history had arrived, and democracy had won. Declaring a “peace dividend,” policy makers slashed defense spending and cut the CIA’s workforce by 25 percent, hollowing out a generation just as a terrorist threat was emerging. In the post–Cold War decade, the United States focused its foreign policy on nation building, humanitarian assistance, and disaster relief. The Pentagon even created a new acronym for its operations: MOOTW, or “Military Operations Other Than War.” Nothing says strategic drift like focusing America’s warfighters on jobs other than the one they were hired and trained to do.
Today’s conceptual struggle is harder because the threats are more numerous, complex, varied, uncertain, and dynamic; because all of them are being supercharged by technological advances that will work in ways no one can fully fathom; and because two of the most widely discussed concepts so far have been force fits from a bygone era.
The notion of a new Cold War with China is all the rage. It’s a term that provides a strange sort of comfort—like seeing a long-ago friend at your college reunion—and yet no great insight. The U.S.-Soviet Cold War was driven primarily by ideology. The current competition with China is driven primarily by economics. And while the Cold War split the world into two opposing camps with almost no trade or meaningful contact between them, the key feature of today’s Sino-American rivalry isn’t division by an iron curtain but entanglement across global capital markets and supply chains.
Deterrence is another Cold War oldie-but-goodie. It sounds tough and smart, even though, in many circumstances, nobody is really sure how it could ever work. It has become a hazy, ill-formed shorthand policy that consists of “stopping bad guys from doing bad things without actually going to war, somehow.” Russian information warfare and election interference? Let’s get some deterrence for that. Iran? Maximum-pressure deterrence. Bashar al-Assad’s use of chemical weapons against his own people? Deterrence with clear red lines. China’s militarization of space? Cross-domain deterrence.
Deterrence isn’t a useless idea. But it’s not magic fairy dust, either. History shows that deterrence has been useful only under very specific conditions. In the Cold War, mutually assured destruction was very good at preventing one outcome: total nuclear war that could kill hundreds of millions of people. But nuclear deterrence did not prevent the Soviets’ other bad behavior, including invading Hungary, Czechoslovakia, and Afghanistan. The key Cold War takeaway isn’t that policy makers should use deterrence more. It’s that some things are not deterrable, no matter how much we wish them to be.
For all the talk of deterring cyberattacks, for example, the reality is that successful deterrence requires three conditions that are rarely all met in cyberspace: knowing the identity of the adversary, making clear what behavior you will not tolerate, and showing the punishment you could inflict if a Rubicon is crossed. But cyberattacks are frequently anonymous. No one knows who the bad guys are, at least not easily, so miscreants of all types can act with little fear of punishment. And there’s a reason no country conducts public cyberweapons tests or showcases its algorithms in military parades: Once a cyberweapon is revealed, it’s much easier for an adversary to take steps that render it useless, turn it against you, or both.
Using familiar ideas like the Cold War to understand new challenges is always tempting and sometimes deadly. Analogies and familiar concepts say, “Hey, it’s not so bad. We’ve been here before. Let’s consult the winning playbook.” But in a genuinely new moment, the old playbook won’t win, and policy makers won’t know it until it’s too late.
In today’s genuinely new moment, the biggest conceptual challenge is the profundity of paradox: Seemingly opposite foreign-policy dynamics exist at the same time.
Today, for instance, geography has never been more important—and less important. Sure, geography has always mattered. The Portuguese built an empire by claiming colonial territories along the maritime route Vasco da Gama discovered to reach India. But questions of who controls the physical landscape, and who lives in it, are now shaping global events in unpredictable ways and on an unprecedented scale. According to the UN, more than 70 million people were forced from their homes last year, the highest number on record. Of those, 25 million had to flee their home country, driven by violence or persecution. Separatist movements are stirring from northern Spain to the South Pacific, part of a secessionist trend that has intensified over the past century.
Meanwhile, global climate change is transforming the landscape itself. Australia is on fire, with flames already ravaging an area the size of West Virginia and choking millions of residents miles away with extreme air pollution. Experts predict that global warming will make massive fires more frequent in more places. Scientists also estimate that rising seas could threaten up to 340 million people living in low-lying coastal areas worldwide. All of these trends, along with old-fashioned territorial aggression (Russia in Ukraine, China in the South China Sea), are searing reminders that physical spaces and borders drawn across them still matter as much as they ever have.
At the same time, the virtual world has never been more global and seamless, with individuals and groups able to connect, transact, cooperate, and even wage wars across immense distances online. The percentage of the global population that is online has more than tripled since 2000. There is now Wi-Fi on Mount Everest, and Google’s parent company, Alphabet, promises to use balloons to bring the internet to remote parts of Kenya. Facebook in 2019 drew 2.4 billion monthly active users—that’s a billion more people than the entire population of China. All of this connectivity makes it possible for Russian operatives to reach deep inside American communities and spread disinformation, influence what we believe, and tear us apart. Cybercapabilities also reportedly enabled Americans to sabotage North Korean rocket tests from thousands of miles away. Artificial intelligence is compressing time and distance—making it possible for information analysis and military decisions to move at machine speed. Even the borders between war and peace, combatant and civilian, are becoming increasingly blurred in cyberspace. In the old days, military mobilization took months and involved large logistics operations with heavy equipment that was hard to hide. In cyberspace, mobilization is literally at your fingertips.
In a related paradox, the United States is simultaneously the most powerful country in cyberspace and the most vulnerable country in cyberspace. This, too, is new. In the military’s traditional domains—air, land, and sea—countries with more capabilities were typically more powerful. Want to know who will “own the skies” in a conflict? The answer is easy: the side with better aircraft and air defenses. The Pentagon likes to talk about domain “dominance” because the term used to mean something. But it doesn’t in cyberspace. In the virtual world, power and vulnerability are inextricably linked.
As my Stanford cybercolleague Herb Lin has noted, connectivity is an important measure of strength and influence. From enterprise computing to industrial-control systems to the Fitbits on our wrists and video doorbells in our homes, information-technology-based systems are crucial for exploiting information to achieve greater efficiency, coordination, communication, and commerce.
But greater connectivity inescapably leads to greater vulnerability. The internet puts bad guys in distant locales just milliseconds away from the front door of a nation’s important information systems, such as those at power plants and major corporations. And as Lin notes, the more sophisticated our computer systems are, the more insecure they inevitably become. Increasing the functionality of any system increases the complexity of its design and implementation—and complexity is widely recognized as the enemy of security. “The reason is simple,” he told me. “A more complex system will inevitably have more security flaws that an adversary can exploit, and the adversary can take as long as is necessary to find them.”
Beyond recognizing the fact that seemingly paradoxical dynamics can exist at the same time—that digital technology multiplies America’s power and weaknesses; that physical geography is irrelevant and more laden with peril than ever—I don’t have a unified working theory for global affairs. That one has yet to develop is not surprising. But the effort is essential.
Containment and deterrence were bold and counterintuitive ideas when they were first formulated. Theorists of the mid-20th century, such as Kennan and Thomas Schelling, who articulated the theory of deterrence, started with one essential advantage: The atom bomb made it viscerally, horrifically clear just how much the coming world would be different from the past. It also drove home the point that the go-to ideas of yesteryear would not be up to the task of guiding American foreign policy in a new age. That point is no less urgent now.