When I speak with prospects and customers about incident detection and response (IDR), I'm almost always discussing the technical pros and cons. Companies look to Rapid7 to combine user behavior analytics (UBA) with endpoint detection and log search to spot malicious behavior in their environment. It's an effective approach: an analytics engine that triggers on known attack methods as well as on users straying from their normal behavior results in high-fidelity detection. Our conversations center on technical features and objections – how can we detect lateral movement, what does the endpoint agent do, and how can we manage it? That's the nature of technical sales, I suppose. I'm the sales engineer, and the analysts and engineers I'm speaking with want to know how our stuff works. The content can be complex at times, but the nature of the conversation is simple.
An important conversation that is not so simple, and that I don't have often enough, is a discussion on privacy and IDR. Privacy is a sensitive subject in general, and over the last 15 years (or more), the security community has drawn battle lines between privacy and security. I'd like to talk about the very real privacy concerns that organizations have when it comes to the data collection and behavioral analysis that is the backbone of any IDR program.
Let's start by listing off some of the things that make employers and employees leery about incident detection and response.
- It requires collecting virtually everything about an environment. That means which systems users access and how often, which links they visit, interconnections between different users and systems, where in the world users log in from – and so forth. For certain solutions, this can extend to recording screen actions and messages between employees.
- Behavioral analysis means that something is always “watching,” regardless of the activity.
- People need to be able to access this data and sift through it with relatively few restrictions.
I've framed these bullets in an intentionally negative light to emphasize the concerns. In each case, the entity that either creates or owns the data does not have total control or doesn't know what's happening to the data. These are many of the same concerns privacy advocates have when large-scale government data collection and analysis comes up. Disputes regarding the utility of collection and analysis are rare. The focus is on what else could happen with the data, and the host of potential abuses and misuses available. I do not dispute these concerns – but I contend that they are much more easily managed in a private organization. Let's recast the bullets above into questions an organization needs to answer.
Which parts of the organization will have access to this system?
Consider first the collection of data from across an enterprise. For an effective IDR program, we want to pull authentication logs (centrally and from endpoints – don't forget those local users!), DNS logs, DHCP logs, firewall logs, VPN, proxy, and on and on. We use this information to profile “normal” for different users and assets, and then call out the aberrations. If I log into my workstation at 8:05 AM each morning and immediately jump over to ESPN to check on my fantasy baseball team (all strictly hypothetical, of course), we'll be able to see that in the data we're collecting.
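To make the idea of profiling "normal" and calling out the aberrations concrete, here is a minimal sketch of a single behavioral check: baselining one user's login times and flagging a login that falls far outside them. It's purely illustrative; a real UBA engine models far more than login times, and the event data, function names, and threshold below are assumptions rather than anything specific to Rapid7's products.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical parsed authentication events for one user. In practice these
# would come from centralized logs (domain controllers, endpoint agents,
# VPN concentrators), not a hard-coded list.
history = [
    datetime(2024, 5, 2, 8, 5),
    datetime(2024, 5, 3, 8, 7),
    datetime(2024, 5, 4, 8, 2),
    datetime(2024, 5, 5, 8, 6),
]

def minutes_past_midnight(ts):
    return ts.hour * 60 + ts.minute

def is_anomalous(new_login, baseline_events, threshold=3.0):
    """Flag a login that falls more than `threshold` standard deviations
    from the user's own historical login-time baseline."""
    baseline = [minutes_past_midnight(ts) for ts in baseline_events]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False  # no variation in history; nothing to compare against
    return abs(minutes_past_midnight(new_login) - mu) / sigma > threshold

# An 11:40 PM login stands out sharply against an ~8:05 AM baseline.
print(is_anomalous(datetime(2024, 5, 6, 23, 40), history))  # True
```

Even a toy check like this makes the privacy trade-off obvious: producing that alert requires keeping a per-user history of when and from where people log in.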
It's easy to see how this makes employees uneasy: security can see everything we're doing, and that's none of their business! I agree with the sentiment. However, typical user behavior, such as websites visited or messages sent, isn't the most useful data for the security team. It might be interesting to a human resources department, but this is where checks and balances need to start. An information security team looking to bring in real IDR capabilities needs to take a long, hard look at its internal policies and decide what to do with information on user behavior. If I were running a program, I would make a point of keeping this data restricted to security and out of the hands of HR. It's not personal, HR – there's just no benefit to allowing witch hunts. They distract from the real job of security and alienate employees. One of the best alerting mechanisms in any organization isn't technology; it's the employees. If they think that every time they report something it will put a magnifying glass on every inane action they take on their computer, they're likely to stop speaking up when weird stuff happens. Security gets worse when we start using data collected for IDR purposes for non-IDR use cases.
Who specifically will have access, to what information, and how will that be controlled?
What about people needing unfettered access to all of this data? For starters, that need is real. When Bad Things™ are detected, at some point a human is going to have to get into the data, confirm it, and then start to look at more data to begin the response. Consider the privacy implications, though: what is to stop a person from arbitrarily looking at whatever they want, whenever they want, in this system?
The truth is organizations deal with this sort of thing every day anyway. Controlling access to data is a core function of many security teams already, and it's not technology that makes these decisions. Security teams, in concert with the many and varied business units they serve, need to decide who has access to all of this data and, more importantly, regularly re-evaluate that level of access. This is a great place for a risk or privacy officer to step in and act as a check as well. I would not treat access into this system any differently than other systems. Build policy, follow it, and amend regularly.
Back to if I were running this program: I would borrow heavily from successful vulnerability management exception handling processes. Let's say there's a vulnerability in your environment that you can't remediate because a business-critical system relies on the affected component. In that case, we put in an exception for the vulnerability: we justify it with a reason, place a compensating control around it, get management sign-off, and tag it with an expiration date so it isn't ignored forever. Treat access into this system the same way, as an "exception" – document who is getting access and why, and define a period after which access is either re-evaluated or expires, forcing the conversation again. An authority outside of security, such as a risk or privacy officer, should sign off on the process and on individual access.
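For illustration, here is a rough sketch of what one such documented access "exception" might capture, modeled on the vulnerability exception fields above. The record layout, field names, and example values are my own assumptions, not a prescribed schema or product feature.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccessException:
    """One documented grant of access into the IDR data, modeled on a
    vulnerability-management exception: justified, signed off, and expiring."""
    grantee: str              # who is getting access
    scope: str                # what data they can reach
    justification: str        # why the access is needed
    compensating_control: str # e.g. query logging and periodic review
    approved_by: str          # an authority outside security, such as a privacy officer
    expires: date             # forces the conversation again

    def is_active(self, today=None):
        return (today or date.today()) < self.expires

# Hypothetical record: access that must be re-justified at the end of the year.
grant = AccessException(
    grantee="soc-analyst-2",
    scope="authentication and VPN logs",
    justification="tier-2 incident investigation",
    compensating_control="all queries logged and reviewed monthly",
    approved_by="privacy officer",
    expires=date(2024, 12, 31),
)
print(grant.is_active(date(2024, 6, 1)))  # True; re-evaluate before expiry
```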
Under what circumstances will this system be accessed, and what are the consequences for abusing that access?
There need to be well-defined consequences for those who violate the rules and policies set forth around a good incident detection and response system. In the same way that security shouldn't allow HR to perform witch hunts unrelated to security, the security team shouldn't go on fishing trips (only phishing and hunts). Trawls through the data need to be justified, for the same reasons as in the HR case: alienating our users hurts everyone in the long run.
Reasonable people are going to disagree over what is acceptable and what is not, and may even disagree with themselves. One Rapid7 customer I spoke with talked about using an analytics tool to track down a relatively basic financial scam going on in their email system. They were clearly justified in both extracting the data and further investigating that user's activity inside the company. “In an enterprise,” they said, “I think there should be no reasonable expectation of privacy – so any privacy granted is a gift. Govern yourself accordingly.”
Of course, not every organization will share this attitude. The important thing is to draw a distinct line for day-to-day use and note what constitutes justification for crossing that line. That information should be documented and made readily available, not buried in a policy that employees have to accept but never read. Take the time to have the conversation and engage with users. This is a great way to generate goodwill and hear out common objections before a crisis comes up, rather than in the middle of one or after.
Despite the above practitioner's attitude towards privacy in an enterprise, they were torn. “I don't like someone else having the ability to look at what I'm doing, simply because they want to.” If we, the security practitioners, have a problem with this, so do our users. Let's govern ourselves accordingly.
Technology based upon data collection and analysis, like user behavior analytics, is powerful and enables security teams to quickly investigate and act on attackers. The security versus privacy battle lines often get drawn here, but that's not a new battle and there are plenty of ways to address concerns without going to war. Restrict the use of tools to security, track and control who has access, and make sure the user population understands the purpose and rules that will govern the technology. A security organization that is transparent in its actions and receptive to feedback will find its work to be much easier.