Insider threats often go undetected in high-security organizations, Stanford scholar finds

Stanford political scientist Scott Sagan says the evidence shows that while insider threats may be low-probability events on a day-to-day basis, over time they have a high probability of occurring.  

For companies, an insider threat from an employee can be an economic disaster. For a government unit, an insider threat can quickly become a dangerous national security issue.

Challenges that high-security organizations face in protecting themselves from employees who might betray them are analyzed in a new book co-edited by Stanford Professor Scott Sagan. (Image credit: Imaginima / Getty Images)

An insider threat comes from within an organization – from employees, former employees, contractors or business associates who have information that could compromise the organization’s security practices, data and computer systems. Examples include Edward Snowden, Chelsea Manning and the sources of many of the WikiLeaks revelations that have emerged in our internet-driven world.

To better understand this rising phenomenon, Scott Sagan, a Stanford political science professor and senior fellow at the Center for International Security and Cooperation, analyzes the challenges that high-security organizations face in protecting themselves from employees who might betray them. He recently co-edited a new book on the topic, Insider Threats, with Matthew Bunn, a professor of practice at Harvard University.

The Stanford News Service interviewed Sagan about these threats:

How common are insider threats in critical national security organizations, like U.S. military services, intelligence agencies and nuclear laboratories?

Unfortunately, they are ubiquitous. Insider threats in critical national security organizations are classic “low-probability, high-consequence events.” They are low-probability events because members of national security organizations are, for the most part, trustworthy and loyal. But they are also human and subject to competing pressures, changing loyalties and coercion. In this area, the evidence shows that what are low-probability events on a day-to-day basis occur with high probability over time.

Each of the U.S. military services has suffered, as we demonstrate in Insider Threats, from having a spy, a dangerous leaker of secrets or a terrorist within the ranks. The CIA, the NSA and the FBI have all had serious insider-threat incidents leading to major damage to U.S. national security.

And both Los Alamos and Lawrence Livermore labs have had serious cases of foreign intelligence agencies penetrating their security systems to get insiders to provide classified information about nuclear weapons.

National security organizations must not focus so much on the very real external threats we face that they ignore the very real insider threats we face as well.

What common mistakes do national security agencies make when trying to deal with insider threats?

There are many common patterns of mistakes that Matthew Bunn from Harvard and I discuss in the book, so many mistakes that we (with tongues firmly in cheek) titled the last chapter, “A Worst Practice Guide to Insider Threats.”

One common error is to assume that background checks solve the problem. They do not. We know from experience that even the best background checks or loyalty and stability monitoring systems are not 100 percent reliable. And because people change over time, background checks do not really provide measures of an individual’s character, only his or her current state of mind. They are like snapshots, not portraits, of members of national security organizations.

At a more macro level, we have a serious problem of having too much classified information in the United States and, therefore, too many people with security clearances.

In 2014, the Office of Management and Budget reported that 5.1 million Americans held security clearances. That is 1.5 percent of the population. The numbers have reportedly been reduced a bit since then, but there is an unfortunate tendency – in the wake of serious leaks or spying incidents – to classify more information in the name of improved security. This can backfire. When more individuals need more security clearances, less reliable security vetting procedures are often implemented and more bad apples can fall through the cracks.

We need to protect our nation’s serious secrets more securely and not just classify more and more information in the false hope of preventing all insider threats.

What was the most surprising finding in the book?

For me, it was how often individuals are able to ignore insider threats in their midst even when “red flags” are waving in their faces.

We quite naturally assume that soldiers, intelligence officers, and laboratory and government employees are loyal and responsible individuals, because the vast majority are. But what is stunning is the creativity with which we can reinterpret strong evidence to the contrary to fit our assumptions.

The most compelling and tragic example comes from CISAC co-director Amy Zegart’s stellar chapter on the 2009 Fort Hood shooting incident. She notes that the FBI agent monitoring Major Nidal Hasan learned that Hasan was exchanging emails with a jihadi cleric about whether it was permissible to kill fellow soldiers, and decided that Hasan was just doing “research.” Not long afterward, Hasan opened fire on his fellow soldiers at Fort Hood, killing 13 individuals and wounding more than 30 others. This was a preventable insider attack.

Scott Sagan is the Caroline S.G. Munro Professor of Political Science and the Mimi and Peter Haas University Fellow in Undergraduate Education.

Media Contacts

Scott Sagan, Center for International Security and Cooperation: (650) 725-2715, ssagan@stanford.edu

Clifton B. Parker, Center for International Security and Cooperation: (650) 725-6488, cbparker@stanford.edu