National Intelligence Director James Clapper, at a February 11 Senate Armed Services Committee hearing, asserted (again) that malevolent insiders with access to top secret material, like Edward Snowden, constituted a top threat to national security. The lawmakers agreed and pressed Clapper to explain how he was changing the practices within his office and across the intelligence community to prevent another Snowden-scale data breach. One key step that Clapper outlined: the nation's top intelligence folks will become subject to much more surveillance in the future.
Clapper said he wanted to put more intelligence community communication into a single, massive (enterprise-sized) cloud environment in order to, as he described it, “take advantage of cloud computing and the necessary security enhancements” therein. There are plenty of good reasons for any department head to want that, but chief among them for Clapper is that moving to the cloud will allow monitors to better “tag the data, [and] tag the people, so that you can monitor where the data is and who has access to it on a real-time basis.”
Anticipating insider threat behavior is a problem that governments have been wrestling with since the first act of state treason. But the current round of research within the United States goes back before Snowden to Army Pfc. Bradley (now Chelsea) Manning’s 2010 arrest for passing top-secret files to Wikileaks. Manning’s disclosure prompted President Obama to issue Executive Order 13587, mandating the creation of an insider threat task force.
Mark Nehmer, associate deputy director of cybersecurity and counterintelligence for the Defense Department, said that a possible insider threat signal could include anything from a change in marital status to a trip abroad to unusual online activity. One or two of these signals in isolation don't serve effectively as a red flag, but when they are observed in the context of one another, patterns can emerge.
“Think of statistics and human behavior and think about correlating past and future behavior, that’s the future of insider threat, I believe,” he said, at Nextgov’s Cybersecurity Series in Washington on Tuesday.
Nehmer and several colleagues have offered DOD various recommendations for countering the threat of an insider attack. These include ensuring that more people with top secret clearance have at least one person sign off on work assignments involving sensitive information; stricter punishments for minor infractions involving data loss, glitches and "spillage"; mandating that all software fixes comply with a single new standard; and the creation of a joint information environment (JIE) that would let all of the services share information in one secure cloud setting and enable far more effective monitoring of employee communications and activity.
“We have all these titanium silos of excellence and we replicate all these services and people. That’s not getting us very far,” Nehmer said, regarding the importance of the JIE. “We need to build an architecture so that a whole department can use enterprise services.” The Pentagon already has a JIE in place for email, said Nehmer. This will be extended across other military branches soon.
The question becomes, what are the Snowden-like signals to watch for in this new, more transparent environment?
Few people involved in insider threat programs in Washington are eager to talk about what makes a potential traitor conspicuous, but several interesting findings have been published out of Palo Alto, California.
Oliver Brdiczka, a researcher at PARC, and several of his colleagues have set up a number of experiments to observe potential insider threat behavior in closed online environments. In the first of these [PDF], Brdiczka looked at the massively multiplayer online game World of Warcraft. The game, which allows users to build characters, join large organizations called guilds, and go on missions and assignments, has been in the news a bit recently after the Snowden leaks revealed that the NSA had been listening in on chat room conversations between World of Warcraft players in the hopes of catching potential terrorists.
Brdiczka and his colleagues were after a more ambitious prize: a scientific understanding of how insider threats actually develop in real time. Players hunting dragons and orcs wind up collaborating with teammates, applying for positions and earning rewards in somewhat the same way that work teams go about attacking big projects. The game thus served as a suitable proxy for a real-world work environment. A player who quits her guild has the potential to damage it, perhaps even absconding with goods in much the same way that Edward Snowden defected with flash drives of classified information. In Brdiczka's experiment, quitting served as a useful stand-in for insider-threat behavior.
The researchers found volunteers, looked at each subject's social network presence, and had each fill out a personality survey. They then carefully observed how the players approached the gameplay: how they acquired items, fought monsters, interacted with one another and performed dozens of other tasks. Result: The researchers found that they could predict who was going to quit within six months with an accuracy rate of 89 percent.
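The article doesn't describe what model PARC actually trained on these behavioral and personality features. As a generic illustration only, a logistic-regression-style scorer is one common way to turn such measurements into a quit/stay prediction; every feature name and weight below is invented for the sketch, not taken from the study:

```python
import math

# Illustrative feature weights (hypothetical, not PARC's published model).
# Negative weights: more of this activity suggests engagement (less likely to quit).
# Positive weights: more of this suggests disengagement or risk.
WEIGHTS = {
    "missions_per_week": -0.3,
    "guild_chat_msgs": -0.5,
    "neuroticism": 0.8,           # from the personality survey, scaled 0..1
    "days_since_last_login": 0.6,
}
BIAS = -0.2

def quit_probability(features: dict[str, float]) -> float:
    """Weighted sum of features squashed through a sigmoid to a 0..1
    probability that the player quits the guild."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A disengaged player (few missions, little chat, long login gaps) scores much higher than an active one, which is the shape of prediction the study reports, even if the real model and its accuracy came from far richer data.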
Shortly after the test, Brdiczka and his colleagues expanded the research [PDF] to the real world. They sought to determine whether email patterns could predict quitting (attrition), and began by examining two data sets, a small company of 43 employees and a large company of 3,600, over a period of about 20 weeks. They measured everything from the frequency of email to the time of day it was sent to whether a message had attachments or came as a forward. They even taught a computer program to categorize the tone of the messages as positive or negative. In the end, the results of the experiment were a bit less conclusive than the World of Warcraft study: they were able to predict quitting with about 60 percent accuracy.
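The kind of per-employee feature aggregation the study describes (message volume, after-hours activity, attachments, forwards, average tone) can be sketched minimally. The `Email` record and field names here are hypothetical stand-ins, not the researchers' actual schema, and the 8 a.m. to 6 p.m. "business hours" cutoff is an assumption:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Email:
    sent_at: datetime
    word_count: int
    has_attachment: bool
    is_forward: bool
    sentiment: float  # -1.0 (negative) .. 1.0 (positive), from a tone classifier

def weekly_features(emails: list[Email]) -> dict[str, float]:
    """Aggregate one employee-week of email into the kinds of features
    the study measured: volume, after-hours share, attachments,
    forwards, message length and average tone."""
    n = len(emails)
    if n == 0:
        return {"count": 0, "after_hours_frac": 0.0, "attach_frac": 0.0,
                "forward_frac": 0.0, "mean_words": 0.0, "mean_sentiment": 0.0}
    after_hours = sum(1 for e in emails if e.sent_at.hour < 8 or e.sent_at.hour >= 18)
    return {
        "count": n,
        "after_hours_frac": after_hours / n,
        "attach_frac": sum(e.has_attachment for e in emails) / n,
        "forward_frac": sum(e.is_forward for e in emails) / n,
        "mean_words": sum(e.word_count for e in emails) / n,
        "mean_sentiment": sum(e.sentiment for e in emails) / n,
    }
```

A sequence of these weekly feature vectors per employee is what a classifier would then consume to predict attrition.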
But they did find some important clues that can predict potential insider threat behavior, and they were counterintuitive. The team had expected the strongest signal of a coming quitting event to be emails with a highly negative tone, full of spit and spite. In fact, the best attrition symptom was fewer emails: fewer messages after hours, fewer attachments, fewer words altogether.
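The "going dark" signal amounts to detecting a sustained drop in someone's activity relative to their own historical baseline. A minimal sketch of that idea, with window sizes and the 50 percent drop threshold chosen for illustration rather than drawn from the study:

```python
def going_dark(weekly_counts: list[int], baseline_weeks: int = 8,
               recent_weeks: int = 4, drop_threshold: float = 0.5) -> bool:
    """Flag a sustained decline: mean activity over the most recent weeks
    falls below a fraction of this person's own earlier baseline.
    Thresholds are illustrative, not the researchers' values."""
    if len(weekly_counts) < baseline_weeks + recent_weeks:
        return False  # not enough history to establish a baseline
    baseline = weekly_counts[:-recent_weeks][-baseline_weeks:]
    recent = weekly_counts[-recent_weeks:]
    baseline_mean = sum(baseline) / len(baseline)
    if baseline_mean == 0:
        return False  # never active; a drop is meaningless
    recent_mean = sum(recent) / len(recent)
    return recent_mean < drop_threshold * baseline_mean
```

Comparing each person against their own baseline, rather than a global average, is what keeps a naturally quiet employee from being flagged for simply being quiet.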
The Snowden in your office is the guy going dark.
Brdiczka’s work is currently being funded by a grant from the Defense Advanced Research Projects Agency, or DARPA. The goal of the Anomaly Detection at Multiple Scales, or ADAMS, program is to “create, adapt and apply technology to the problem of anomaly characterization and detection in massive data sets…The focus is on malevolent insiders that started out as ‘good guys.’ The specific goal of ADAMS is to detect anomalous behaviors before or shortly after they turn.”
Of course, polls indicate public ambivalence as to whether Edward Snowden is a malevolent insider, a "good guy," or something else entirely. Also, different bodies have differing definitions of what constitutes an insider in a military context. From a purely technological perspective, these aren't critical points to the functioning of an insider threat computer model. Brdiczka told me that, with some small modification to account for different feature sets, the model could scale up to apply to virtually any domain where online social interaction can be observed and measured. That includes the JIE that the Pentagon wants to build across all service branches, or, for that matter, all of Facebook.
Congratulations. You’re an insider now.