Simulated attacks help you analyze your attack surface, discover successful defense tactics, and remediate vulnerabilities.
A Red Team is a group of security professionals tasked by an internal stakeholder or external customer to go beyond a penetration test and carry out an actual simulated attack on a target network – for as long as it takes to do so.
The ultimate goals of a Red Team attack are both to understand how an attacker would act when attempting to gain access to a network and to learn the attack surface’s current exposures and vulnerabilities. The United States National Institute of Standards and Technology (NIST) defines a Red Team as:
“A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment.”
A Red Team attack simulation – or “red teaming” – should always be tailored to a security organization’s unique attack surface and take into account industry-specific threat levels.
Based on the security organization and the business it’s tasked with protecting, a Red Team attack will leverage a particular set of tactics, techniques, and procedures (TTPs) to breach a network and steal data. Thus it’s important for a security operations center (SOC) to become familiar with the TTPs used and learn how to defend against and/or overcome them.
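To make that concrete, here is a minimal, hypothetical sketch (not tied to any specific engagement or product) of how a SOC might record the ATT&CK techniques a Red Team plans to emulate alongside its own detection coverage. The technique IDs are real MITRE ATT&CK identifiers, but the plan and the coverage values are illustrative assumptions.

```python
# Hypothetical sketch: compare a Red Team's planned MITRE ATT&CK techniques
# against the SOC's current detection coverage. Technique IDs are real ATT&CK
# identifiers; the coverage values are illustrative, not real assessment data.
EMULATION_PLAN = {
    "T1566": "Phishing (initial access)",
    "T1059": "Command and Scripting Interpreter (execution)",
    "T1003": "OS Credential Dumping (credential access)",
    "T1021": "Remote Services (lateral movement)",
    "T1041": "Exfiltration Over C2 Channel (exfiltration)",
}

# Example detection coverage as reported by the SOC (assumed values).
DETECTION_COVERAGE = {"T1566": True, "T1059": True, "T1003": False,
                      "T1021": False, "T1041": True}

def coverage_gaps(plan: dict, coverage: dict) -> list:
    """Return the planned techniques the SOC cannot yet detect."""
    return [f"{tid} - {name}" for tid, name in plan.items()
            if not coverage.get(tid, False)]

if __name__ == "__main__":
    for gap in coverage_gaps(EMULATION_PLAN, DETECTION_COVERAGE):
        print("No detection for:", gap)
```

A simple gap list like this is one way to turn the TTPs a Red Team plans to use into a concrete to-do list for the defenders.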
As discussed above, the format of the attack simulation carried out by a Red Team will look different for each organization. But a holistic way to describe the actual process, tools, and tactics would be that the Red Team provider works with their client to develop a customized attack execution model to properly emulate the threats the organization faces.
The simulation should include real-world adversarial behaviors and TTPs, enabling the client's SOC to measure the security program's true effectiveness when faced with persistent and determined attackers. For one example of how a Red Team exercise can be carried out, let's look at this instance executed by the United States Cybersecurity and Infrastructure Security Agency.
This particular Red Team engagement proceeded in two phases with the "target" organization.
In this case, the Red Team's goal was to compromise the assessed organization's domain and identify attack paths to other networks by posing as a sophisticated nation-state actor.
It simulated known initial access and post-exploitation TTPs, with the team then diversifying its tools to mimic a wider and often less sophisticated set of threat actors to elicit network defender attention.
The Red Team met regularly with the organization's security personnel to discuss its defensive posture, during which it:
Additionally, the following open-source Red Teaming tools – while no replacement for a human team – are options for SOCs that may be facing budget or prioritization issues from the C-Suite:
There are many benefits to security testing of any kind, whether it’s an external consulting firm helping to ensure the strengths of network perimeter defenses or an internal team tasked with uncovering vulnerabilities in DevSecOps processes.
As it pertains to penetration testing – and Red Teaming in particular – let’s take a look at some of the more beneficial outcomes for the security organization and the business at large.
Forrester found that undertaking Red Team security testing typically results in a 25% reduction in security incidents and a 35% reduction in the cost of security incidents. Needless to say, these reductions can have significant implications for the overall resilience and ROI of the security organization.
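For a rough sense of what those percentages could mean in practice, the back-of-the-envelope calculation below applies them to entirely hypothetical baseline figures (40 incidents per year at an average cost of $50,000). The baseline numbers are assumptions for illustration, not Forrester data.

```python
# Back-of-the-envelope estimate applying the cited reductions to hypothetical
# baseline figures (the baselines are assumptions, not Forrester data).
baseline_incidents = 40          # assumed incidents per year
baseline_avg_cost = 50_000       # assumed average cost per incident (USD)
baseline_total_cost = baseline_incidents * baseline_avg_cost  # $2,000,000

incident_reduction = 0.25        # 25% fewer incidents
cost_reduction = 0.35            # 35% lower total incident cost

expected_incidents = baseline_incidents * (1 - incident_reduction)   # 30
expected_total_cost = baseline_total_cost * (1 - cost_reduction)     # $1,300,000

print(f"Incidents: {baseline_incidents} -> {expected_incidents:.0f}")
print(f"Annual incident cost: ${baseline_total_cost:,.0f} -> ${expected_total_cost:,.0f}")
print(f"Estimated annual savings: ${baseline_total_cost - expected_total_cost:,.0f}")
```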
Instead of overhauling your security program after, say, a recent breach that caused significant damage and expense, testing scenarios like Red Teaming can help security organizations pinpoint exactly where they should upgrade or shore up defenses and training to prevent a similar or repeat attack.
One of the major reasons a business is on the defensive is that it simply hasn’t taken the time to “step outside the perimeter” and see the organization the way an attacker would. Red Team simulations can provide the necessary data to finally obtain a well-rounded “inside/outside” view of how a SOC protects business operations. With this perspective in hand, security teams can adopt a stronger offensive and defensive posture and be ready for potential threats.
Penetration testing – also known as pentesting – services can be thought of as the umbrella under which Red Team, Blue Team, and Purple Team exercises sit. Opinions vary, but generally, pentesting is the more generic term used before security professionals get more specific in discussing Red Team attack simulations.
But there are some key differentiations between pentesting and Red Teaming. Pentesting is generally more upfront and visible; the client organization knows it’s happening. After the engagement is made official, Red Teaming activity is meant to be undercover and unknown to the target organization for as long as possible. Let’s take a look at this handy table for some additional distinctions:
| Criteria | Pentesting | Red Teaming |
| --- | --- | --- |
| Goal | Vulnerability oversight | Test resilience against attacks |
| Scope | Defined subset of systems | Attack paths used by threat actors |
| Controls Testing | Preventive controls | Detection and response controls |
| Testing Method | Efficiency over realism | Realistic simulation |
| Testing Techniques | Map, scan, exploit | TTPs of selected threat actors |
| Post-Exploitation | Traditionally limited actions | Focused on critical assets/functions |
So, is one option better than the other? Often pentesters and Red Teamers are the same security professionals, using different methods and techniques for different assessments. The true answer is that one is not necessarily better than the other; rather, each is useful in certain situations.
We've defined and discussed Red Teaming at length so far, so to distinguish the practice from the other color-labeled security exercises, let’s circle back to some basic definitions and gain a proper understanding of Red Team vs. Blue Team vs. Purple Team (and yes, purple is a mix of red and blue, but the function of the team isn’t quite as simply explained):
The biggest challenge to effective Purple Teaming is helping the Blue and Red Teams overcome the competitiveness that can exist between them. The Blue Team doesn’t want to give away how they catch the bad guys, and the Red Team doesn’t want to give away the secrets of the attack.
But by breaking down these walls, you can show the Blue Team how they can become better defenders by understanding how the Red Team operates, and you can show the Red Team how they can enhance their effectiveness by expanding their knowledge of defensive operations in partnership with the Blue Team.
Purple Teaming helps to enable a combined Red Team/Blue Team approach that empowers a security team to test controls while under a simulated, targeted attack.
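As an illustration of how that combined approach can be tracked, here is a small, hypothetical sketch of a Purple Team exercise log that pairs each Red Team action with the Blue Team's observed outcome. The structure, actions, and results are assumptions for illustration, not a prescribed format; the ATT&CK IDs are real identifiers.

```python
from dataclasses import dataclass

# Hypothetical Purple Team exercise log: each Red Team action is paired with
# what the Blue Team observed and did. All values are illustrative assumptions.
@dataclass
class ExerciseEntry:
    red_action: str        # what the Red Team did
    attack_technique: str  # MITRE ATT&CK technique ID (real IDs used below)
    detected: bool         # did the Blue Team detect it?
    response: str          # what the Blue Team did about it

exercise_log = [
    ExerciseEntry("Phishing email with payload", "T1566", True, "Alert triaged, sender blocked"),
    ExerciseEntry("Credential dumping on workstation", "T1003", False, "Not detected"),
    ExerciseEntry("Lateral movement over remote services", "T1021", True, "Host isolated by SOC"),
]

detected = sum(entry.detected for entry in exercise_log)
print(f"Detection rate: {detected}/{len(exercise_log)}")
for entry in exercise_log:
    status = "DETECTED" if entry.detected else "MISSED"
    print(f"[{status}] {entry.attack_technique}: {entry.red_action} -> {entry.response}")
```

Reviewing a log like this together is one simple way for the two teams to agree on which detections worked, which were missed, and what to fix next.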
It is, of course, not as simple as randomly assigning individual SOC staffers to a Red, Blue, or Purple Team. When attempting to build an effective Red Team, it’s critical to: