After years of Kremlin efforts to derail international guidelines on militarized artificial intelligence, a national-security leader appeared to signal a new course.
Did the Russian military just concede that militarized artificial intelligence should be subject to international regulation?
For several years, Russia has helped derail UN-sponsored attempts to hammer out global guidelines concerning lethal autonomous weapons systems, or LAWS. But on Wednesday, a top Russian security official appeared to reverse course.
“We believe that it is necessary to activate the powers of the global community, chiefly at the UN venue, as quickly as possible to develop a comprehensive regulatory framework that would prevent the use of the specified [new] technologies for undermining national and international security,” Russian Security Council Secretary Nikolai Patrushev said on Wednesday at an annual international-security conference in Moscow, according to state media. “Modern technologies make it possible to create attack instruments with the use of artificial intelligence, genetics, and synthetic biological agents—they are often as deadly as weapons of mass destruction.”
Such sentiment coming from the Russian military is rather surprising. Since 2017, Moscow’s position on LAWS has been fairly consistent: the country agrees with the international consensus that humans must maintain control of such weapons and agrees to keep talking about regulating their use, but opposes international limits on their development.
Last year, Russian officials sharpened their focus on militarized AI. In January, Russian Ministry of Defense officials announced their general intention to develop artificial intelligence for military use. In March, then-Deputy Defense Minister Yuri Borisov underscored the Russian military’s determination to harness AI for use in information operations and in cyberspace. Two months later, President Vladimir Putin said it was necessary to focus on introducing artificial intelligence and robotics into weapons production. The ensuing months have brought many announcements about the use of AI in various weapons. All this makes Patrushev’s statement even more surprising.
Even Russia’s AI-fascinated civilian high-tech community has generally pooh-poohed the notion of international limits on AI research and application. At a recent forum held by the Izvestia.ru news organization, Kribum president Igor Ashmanov criticized the European Commission’s efforts to develop ethics standards for AI. “Should there be any legal framework for AI?” Ashmanov said. “Yes, but it is not clear what could be the subject of regulation here. This is a very serious section of law, which has not yet been formed.” And he expressed pessimism that militarized AI could be stopped: “All the armies of the world are engaging in artificial intelligence development—the main ‘progress’ will be there.”
Artem Kiryanov, a member of the legislative Civic Chamber of the Russian Federation, was even more dismissive. The European Commission “has very large budgets for the maintenance of employees, which must be constantly used up. So they are busy” writing laws, he said. “Fortunately, the documents of the European Commission have nothing to do with us. I really hope that Russian officials will not be inspired to write something like that.”
Kiryanov also appeared to soft-pedal the potential impact of AI (“no machine can replace a human”) and to criticize international efforts to establish ethical norms. “There is no need to regulate [technological] progress,” he said. “Today’s jurisprudence [already] serves the key norms that have been established in the society at the moment.”
So what should we make of Patrushev’s statement? Last year, the Defense Ministry worked with the Russian Academy of Sciences to host a major conference intended to help gauge how AI is developing in Russia, and officials there seemed interested in standards for AI as one component of a 10-part roadmap. Whether Patrushev’s remarks signal a genuine shift, further official statements bear close watching.