Want to Plug Intel Leaks? Let Technology Find the Next Insider Threat

Former National Security Agency contractor Edward Snowden speaks via video conference to Johns Hopkins University, in Baltimore, Feb. 17, 2016.


Lack of empathy? High narcissism? Self-absorbed? You could be the next Snowden.

The latest leak of classified documents via Wikileaks appears to have come from inside the CIA, demonstrating the difficulty of protecting sensitive information even from those trusted to access it. Roughly 3 million people are cleared to access classified information, yet the background investigations used to grant security clearances rely on a behavioral model developed during World War II. It’s time for a new way.

With 21st-century technology, investigators can do better, evaluating conduct and communications for signs of personality traits typical of at-risk employees. Such tools might have flagged Edward Snowden and Chelsea Manning before they had a chance to cause serious damage to national security, and they offer a more accurate way of predicting misconduct than the cumbersome, time-consuming, and costly process by which the government currently grants clearances.

An initially loyal employee does not suddenly transform into a malicious insider. The path to a significant destructive act is marked by small infractions that grow in response to mounting personal and professional stress. Employees who engage in one type of counterproductive behavior will often engage in others. Minor misdeeds can escalate into severe transgressions.

Organizations must implement ways to monitor and evaluate employees continually. Advanced monitoring tools that identify life stressors, strong emotions, and atypical behavior can provide early warning of potential misconduct or spot small-scale malicious acts before they become something more sinister. It’s the same technology developed by retailers to analyze customers’ social media posts and micro-target their marketing to match individual shoppers’ preferences. Software that analyzes a consumer’s sentiments about a ski jacket can also assess an intelligence officer’s frustration with his job.
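The kind of sentiment analysis described above can be illustrated with a minimal sketch. This is not any vendor's actual product; the word lists and scoring rule below are hypothetical, chosen only to show how the same lexicon-based scoring applies equally to a product review or a workplace message:

```python
# Illustrative lexicon-based sentiment scoring. The word lists are
# invented for this example, not drawn from any real monitoring tool.
NEGATIVE = {"hate", "frustrated", "useless", "angry", "quit"}
POSITIVE = {"love", "great", "happy", "proud", "excited"}

def sentiment_score(text: str) -> int:
    """Return a crude score: positive word hits minus negative word hits."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this great jacket"))        # 2
print(sentiment_score("I hate my job and feel useless"))  # -2
```

Real systems use far richer lexicons and machine-learned models, but the principle is the same: the scoring logic is indifferent to whether the text concerns a ski jacket or a security clearance holder's job.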

For example, in online chat room postings, Chelsea Manning frequently used the word “lose” in close proximity to the words “job” and “career” – textual relationships that linguistics software could flag. A subsequent review of the messages’ content could have revealed personal despair that led Manning to feel she had little to lose from destructive behavior.
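The proximity flagging described above is straightforward to sketch. The window size and cue words here are illustrative assumptions, not the parameters of any deployed system:

```python
import re

def flag_proximity(text: str, target: str = "lose",
                   cues: tuple = ("job", "career"), window: int = 5) -> bool:
    """Flag text where `target` appears within `window` tokens of any cue word.

    Parameters are illustrative defaults modeled on the Manning example.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    target_pos = [i for i, t in enumerate(tokens) if t == target]
    cue_pos = [i for i, t in enumerate(tokens) if t in cues]
    return any(abs(t - c) <= window for t in target_pos for c in cue_pos)

print(flag_proximity("im about to lose my job over this"))  # True
print(flag_proximity("great day at the office"))            # False
```

A flag like this would only prioritize messages for the human review the article describes next; by itself it establishes nothing about intent.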

Similarly, personality-mapping tools use psycholinguistic analysis to identify personality traits that may predispose an employee to commit destructive acts. Snowden’s postings to the online forum Ars Technica indicate a lack of empathy and trust, a high degree of narcissism and self-absorption, and a limited willingness to compromise – traits that suggest a tendency to justify and act on one’s beliefs regardless of the impact on others.

Data on an employee’s non-work activities – such as arrest records, court records, and credit bureau reports – can also reveal concerning behavior. An employee who was recently arrested or applied for multiple bank loans in a short period could have personal or financial problems that increase the risk of destructive behavior. Such records are reviewed as part of security clearance investigations, but cleared personnel are only reinvestigated every five years.

Not everyone whose wife leaves him, or who is unhappy at work, turns into a malicious insider. Although security staff can draw on data analysis to identify insider threats, they also need help from human resources. Software can find and flag language or behaviors of potential concern, but a human is needed to assess an employee’s actions in the context of that person’s life. Coworkers and managers often notice changes in attitude or minor counterproductive behaviors that may indicate an employee is struggling. Organizational leadership and human-capital personnel must foster a healthy environment that encourages employees to share concerns about coworkers, so management can determine whether an employee is struggling and intervene to help if necessary.

If leaders want employees to be supportive, they must communicate why such intrusive programs are needed to protect the organization’s information, investments, reputation, and workforce. Make it clear that the point of insider threat programs is to help resolve problems rather than punish employees. If automated monitoring identifies a staff member with personal or financial difficulties that place him or her at risk, the organization can refer that person to counseling or financial advisors to help work through those challenges, or it can mitigate the risk by moving the employee to a less stressful position. Only if employees believe that monitoring is designed to protect the health, safety, and well-being of the organization and its workforce can an insider threat program avoid alienating the very people it is designed to protect.

No single technology or technique will be a panacea. Through carefully designed programs that involve technology, human resources, comprehensive security policies, and effective leadership, government agencies and private companies can mitigate insider threat risks in ways that preserve employee privacy and assist at-risk employees before they can do damage. It may prevent the next Edward Snowden – a development that would benefit both the country and the individual who is diverted from a destructive path.

Daniel McGarvey is a counterintelligence expert at Alion Science and Technology. He is a principal author of a new report, “Assessing the Mind of the Malicious Insider,” which was released by the Intelligence and National Security Alliance, or INSA.
