
Feb 21, 2018 | 15:54 GMT


Mitigating the Insider Cyberthreat: Tools to Defend Against Internal Data Breaches

The internet has greatly increased the risk of cyberthreats from within.
(KIRILL KUDRYAVTSEV/AFP/Getty Images)
Analysis Highlights
  • The first steps involve classifying an organization's data and then controlling who can access what based on their need to know.
  • Blocking common channels used to leak data is the next step.
  • The final step is to improve visibility and alerting: enable in-depth logging for the most sensitive classes of data and alert when users try to access it.

Editor's Note: This report was produced and originally published by Threat Lens, Stratfor's unique protective intelligence product. Designed with corporate security leaders in mind, Threat Lens enables industry professionals to anticipate, identify, measure and mitigate emerging threats to people and assets around the world.

The rise of the internet has greatly increased the risk of cyberthreats from within. Whereas stealing data before might once have involved photocopying reams of pages, a bad inside actor now can make off with vast troves of digital data with just a few clicks. The threat of data exfiltration by malicious insiders keeps many leaders up at night. But there are practical techniques for reducing the risk of insiders exfiltrating data. Four sets of controls are relevant to this task: Classify, Control, Block and Visibility.

Classify Data

The first step is to classify an organization's data in some way. There are many different techniques to do this; how it is done will depend on the industry and organization. The key is that the classification must be practical.
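As an illustration of one such technique, here is a minimal keyword-based classifier sketched in Python. The labels and keyword rules are invented for the example; a real scheme would be tailored to the industry and organization.

```python
# Hypothetical classification labels and keyword rules, ordered from
# most to least sensitive so the strictest matching label wins.
RULES = [
    ("restricted", ("ssn", "account number", "salary")),
    ("confidential", ("contract", "roadmap")),
]

def classify(text: str) -> str:
    """Return the most sensitive label whose keywords appear in the text."""
    lower = text.lower()
    for label, keywords in RULES:
        if any(keyword in lower for keyword in keywords):
            return label
    return "internal"  # default bucket for unmatched documents
```

Keyword matching is crude, but it keeps the scheme practical: documents get a label automatically, and edge cases can be reviewed by hand.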

Control Access

The second major mitigation step is to control access to data. The principle of least privilege is the bedrock upon which this control is built: Members of an organization should be given the least amount of access necessary to do their jobs. How much access employees receive, and to what classification of data, should be based on their role. Just because users are high-level employees does not mean they should automatically get access to all sensitive data. Though not always easy to implement, access should instead be determined by their specific duties, not their seniority alone.
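The principle can be sketched as a role-based check: each role maps to only the data classifications its duties require, and anything outside that set is denied by default. The role names and classifications below are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical role-to-classification mapping under least privilege.
# Note there is no "executive" role with blanket access: seniority
# alone grants nothing.
ROLE_PERMISSIONS = {
    "hr_analyst": {"public", "internal", "employee_pii"},
    "accountant": {"public", "internal", "financial"},
    "intern": {"public"},
}

def can_access(role: str, classification: str) -> bool:
    """Grant access only if the role's duties require this data class."""
    return classification in ROLE_PERMISSIONS.get(role, set())
```

Unknown roles fall through to an empty set, so the default is deny rather than allow.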

Shared accounts are a common access control issue. Though they are implemented for ease of use, they remove accountability and reduce the likelihood of detecting malicious activity early. Each user should have unique credentials for each system in the environment. In fact, most security standards, including the payment card industry's data security standard, require it.

Overpowered service accounts are another issue. Service accounts are used not by people but by software in the environment. Many commercial software vendors request a very high level of access, more than their software likely requires, to head off the support calls that arise when access is insufficient. Because malicious insiders can use overpowered service accounts to reach sensitive data, one should pay special attention when implementing software to the level of access it actually requires.
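A simple way to audit for this is to compare what a service account has been granted against what the software documents as required. A minimal sketch, with the privilege names invented for the example:

```python
def excess_privileges(granted: set, required: set) -> set:
    """Return privileges a service account holds beyond its documented needs.

    Anything in the result is a candidate for revocation, since a
    malicious insider could abuse it via the service account.
    """
    return set(granted) - set(required)

# Example audit of a hypothetical backup-software account.
granted = {"read_db", "write_backup_share", "domain_admin"}
required = {"read_db", "write_backup_share"}
flagged = excess_privileges(granted, required)  # {"domain_admin"}
```

Running such a comparison periodically, not just at installation, catches privileges that accumulate over time.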

Block Common Exfiltration Channels

Blocking common exfiltration channels is next on the list. Data loss prevention products fit into this area and make it much easier to manage these controls as a cohesive set of policies rather than individually managing each component. This is especially true in complex environments, where exceptions for certain users or groups are required. Manually managing these exceptions across disparate technology stacks can be highly time consuming and complex, increasing the risk of errors.

External Media: Technical controls should be put into place to block the use of external storage connected by USB, FireWire and any other hardware connectors in use in the environment.

Network and Web: Personal email, file sharing and social media should be closely monitored or outright blocked. A proxy at the edge of the network can be used to look for suspicious connections that may indicate exfiltration. Web traffic, however, has become increasingly encrypted, stymieing typical proxies. To view this encrypted traffic, a secure sockets layer (SSL) proxy can be used. An SSL proxy is deployed at the perimeter of the corporate network just like a normal proxy. If it detects an encrypted web connection, it decrypts it, inspects the content, and then re-encrypts it and sends it on its way. This type of proxy requires a higher level of expertise to implement, and raises obvious privacy considerations.

Outright blocking of cloud storage websites like Dropbox or Google Drive can be problematic, since they are legitimate tools that many organizations use. In this case, a more nuanced approach is required, which may include per-user or per-group white-listing, whereby some users in the company are approved if there is a good reason to do so. In addition, baselining network traffic to and from these web applications will allow alerts to fire when traffic falls outside the norm.
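The baselining idea can be sketched as a simple statistical check: record per-user daily upload volumes to a given cloud application, then flag a day that deviates sharply from the historical mean. The threshold of three standard deviations below is an illustrative assumption; real deployments tune it per application.

```python
from statistics import mean, stdev

def alert_if_anomalous(daily_bytes: list, today_bytes: int,
                       threshold_sigma: float = 3.0) -> bool:
    """Flag an upload volume far above the historical baseline.

    daily_bytes: past per-day upload totals for this user/application.
    Returns True when today's total exceeds mean + threshold_sigma * stdev.
    """
    mu = mean(daily_bytes)
    sigma = stdev(daily_bytes)  # requires at least two baseline days
    return today_bytes > mu + threshold_sigma * sigma
```

A user who normally syncs a few spreadsheets but suddenly pushes gigabytes to a personal Drive folder would trip this check, prompting human review rather than an automatic block.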

Email: The organizational email system should be set up to warn and/or block when certain types of data are attached to an email. This would include, for example, blocking credit card primary account numbers from being sent.
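As a sketch of how such a filter might detect primary account numbers, the snippet below combines a digit-run pattern with the Luhn checksum that valid card numbers satisfy, which cuts down on false positives from order numbers and other long digit strings. This is an illustrative stand-in for a commercial DLP rule, not a complete one.

```python
import re

# Candidate PANs: unbroken runs of 13-16 digits.
PAN_RE = re.compile(r"\b\d{13,16}\b")

def luhn_valid(number: str) -> bool:
    """Check the Luhn checksum that all valid card numbers pass."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_pan(text: str) -> bool:
    """True if the text contains a plausible credit card number."""
    return any(luhn_valid(m) for m in PAN_RE.findall(text))
```

An email gateway rule would run a check like this over message bodies and extracted attachment text, then warn the sender or quarantine the message.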

Visibility and Alerting

The final set of controls involves visibility and alerting. Good visibility is an essential part of preventing exfiltration. This does not mean that all logs for all devices and applications in the environment should be enabled. Doing so would only bury the information technology security infrastructure under the load of so many logs. Forethought and insight must be brought to bear, since each new log source that is enabled incurs a very real man-hour cost to sift through.

One way to work through this is to think through the life cycle of the different classes of data, starting with the most sensitive. Where is it created, modified and stored? How is it transferred from one location to another? For each of the systems involved, confirm that the appropriate level of logging has been enabled. In-depth logging should be enabled for the most sensitive classes of data, while basic logging should be fine for the least sensitive. All of these logs are the pieces that will be used to assemble a picture of what happened in case of a breach.
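The tiering described above can be expressed as a simple mapping from data classification to logging depth. The classification names mirror the earlier illustrative scheme and are assumptions, not a standard.

```python
import logging

# Deeper logging for more sensitive data classes; basic logging
# (errors only) for the least sensitive.
LOG_LEVEL_BY_CLASS = {
    "restricted": logging.DEBUG,     # in-depth: every read/write/transfer
    "confidential": logging.INFO,
    "internal": logging.WARNING,
    "public": logging.ERROR,         # basic logging is fine here
}

def logger_for(classification: str) -> logging.Logger:
    """Return a logger tuned to the sensitivity of the data class."""
    log = logging.getLogger(f"data.{classification}")
    log.setLevel(LOG_LEVEL_BY_CLASS.get(classification, logging.INFO))
    return log
```

Systems handling restricted data then emit the detailed trail needed to reconstruct a breach, while public-facing systems stay quiet enough to review.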

Another technique within this control is to set up "honeytokens" within the environment. Honeytokens are fabricated pieces of data that look appealing to a malicious entity. Once they have been interacted with, they will send an alert to the security team. One way to do this would be to create a Word document titled "network passwords," fill it with random phrases and place it on the organizational filestore. Next, configure the filestore to log an alert when a user reads the file. Though we described this as a manual process, free and commercial tools exist that allow the generation of honeytokens and automate the alerting. Honeytokens can take the form of files (Word, PDFs), URLs, unique email addresses and more.
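The manual version of this process can be approximated in a few lines. The sketch below plants a decoy file and checks its access time to see whether anyone has read it; this relies on the filesystem updating access times, which many mounts disable (e.g., noatime), so real deployments hook audit logs or use the dedicated honeytoken tools mentioned above. The filename and contents are invented for the example.

```python
import os

def plant_honeytoken(path: str = "network_passwords.docx") -> float:
    """Create a decoy file of meaningless phrases; return its access time."""
    with open(path, "w") as f:
        f.write("swordfish-tango-7\nblue-harvest-42\n")
    return os.stat(path).st_atime

def honeytoken_touched(path: str, planted_atime: float) -> bool:
    """True if the decoy appears to have been read since it was planted.

    Caveat: access-time updates are filesystem-dependent; production
    honeytokens alert via filestore audit logging instead.
    """
    return os.stat(path).st_atime > planted_atime
```

In practice the check would run from a scheduled job that pages the security team on a hit, since any access to the decoy is suspicious by construction.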

Obstacles to Implementation

Though these controls are mostly technical in nature, it will most likely not be technical issues that block implementation. Instead, the hardest part will likely be changing organizational culture, such as scoping access levels to the needs of roles rather than seniority. As is most often the case, we humans are the weakest link in the cybersecurity chain.

And as in physical security, cybersecurity teams cannot be expected to watch every component all the time — this would require far too many resources and risk overreach on the part of any security department. Instead, security departments can rely on threat intelligence to focus their efforts on watching particular problem spots. Thus, an employee behaving suspiciously or a particularly sensitive research and development project warrants greater attention, leaving automated processes to monitor other areas absent a specific threat. In order to zero in on potential problem areas, the tools and systems benchmarks outlined above need to be in place well ahead of time.

Stratfor Global Fellow Scot Terban contributed to the preparation of this analysis.
