What is Red Teaming?

Simulated attacks help you analyze your attack surface, discover successful defense tactics, and remediate vulnerabilities.


Red Team Definition

A Red Team is a group of security professionals that are tasked by an internal stakeholder or external customer to go beyond a penetration test and carry out an actual simulated attack on a target network – for as long as it takes to do so.

The ultimate goals of a Red Team attack are both to understand how an attacker would act when attempting to gain access to a network and to learn the attack surface’s current exposures and vulnerabilities. The United States National Institute of Standards and Technology (NIST) defines a Red Team as:

“A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment.”

A Red Team attack simulation – or “red teaming” – should always be tailored to a security organization’s unique attack surface and take into account industry-specific threat levels.

Based on the security organization and the business it’s tasked with protecting, a Red Team attack will leverage a particular set of tactics, techniques, and procedures (TTPs) to breach a network and steal data. Thus it’s important for a security operations center (SOC) to become familiar with the TTPs used and learn how to defend against and/or overcome them.

Red Team Tools and Tactics

As discussed above, the attack simulation carried out by a Red Team will look different for each organization. In general, though, the Red Team provider works with its client to develop a customized attack execution model that properly emulates the threats the organization faces.

The simulation should include real-world adversarial behaviors and TTPs, enabling the client's SOC to measure the security program's true effectiveness when faced with persistent and determined attackers. For one example of how a Red Team exercise can be carried out, let's look at an engagement executed by the United States Cybersecurity and Infrastructure Security Agency (CISA).

This particular Red Team engagement with the "target" organization proceeded in two phases.

Adversary Emulation Phase

In this case, the Red Team's goal was to compromise the assessed organization's domain and identify attack paths to other networks by posing as a sophisticated nation-state actor. 

The team simulated known initial access and post-exploitation TTPs, then diversified its tooling to mimic a wider and often less sophisticated set of threat actors in order to elicit network defenders' attention.
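
To make the idea of an "attack execution model" more concrete, here is a minimal sketch, in Python, of how a Red Team might encode such a plan as plain data: each phase maps to the MITRE ATT&CK techniques the team intends to emulate. The technique IDs are real ATT&CK identifiers, but the phase names and the selection shown here are illustrative assumptions, not the plan CISA's team actually used.

```python
# Minimal sketch of a hypothetical attack execution model: each engagement
# phase maps to the MITRE ATT&CK techniques the Red Team plans to emulate.
# The technique IDs are real ATT&CK identifiers; the phase structure and the
# specific selection are illustrative assumptions only.

ATTACK_EXECUTION_MODEL = {
    "initial_access": [
        ("T1566.001", "Phishing: Spearphishing Attachment"),
        ("T1078", "Valid Accounts"),
    ],
    "post_exploitation": [
        ("T1003", "OS Credential Dumping"),
        ("T1021.001", "Remote Services: Remote Desktop Protocol"),
        ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
    ],
    # Deliberately noisier activity intended to elicit defender attention.
    "detection_elicitation": [
        ("T1046", "Network Service Discovery"),
        ("T1110", "Brute Force"),
    ],
}


def print_plan(model: dict) -> None:
    """Print the planned TTPs phase by phase for review with stakeholders."""
    for phase, techniques in model.items():
        print(f"\n{phase}:")
        for technique_id, name in techniques:
            print(f"  {technique_id}  {name}")


if __name__ == "__main__":
    print_plan(ATTACK_EXECUTION_MODEL)
```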

Collaboration Phase

The Red Team met regularly with the organization's security personnel to discuss its defensive posture. During these sessions, the team:

  • Proposed new behavior-based and tool-agnostic detections to uncover additional attack activity carried out during the previous phase (a minimal sketch of one such detection follows this list).
  • Refined existing detection logic to show how certain TTPs evaded alerts that should have been triggered by existing indicators of compromise (IOCs).
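
As one concrete illustration of a behavior-based, tool-agnostic detection, the minimal Python sketch below flags process-creation events in which an Office application spawns a command interpreter, a behavior shared by many phishing payloads regardless of the specific tooling involved. The simple dictionary event format is an assumption made for brevity; in practice, this kind of logic would run over telemetry such as Sysmon or EDR process-creation logs.

```python
# Minimal sketch of a behavior-based, tool-agnostic detection. Rather than
# matching specific hashes or C2 domains (classic IOCs), it flags a behavior
# that many distinct payloads share: an Office application spawning a command
# interpreter. The plain-dict event format is a simplifying assumption; real
# detections would consume Sysmon or EDR process-creation telemetry.

SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "wscript.exe", "mshta.exe"}


def flag_suspicious_process_events(events):
    """Return events where an Office parent spawned a scripting/shell child."""
    alerts = []
    for event in events:
        parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
        child = event.get("image", "").lower().rsplit("\\", 1)[-1]
        if parent in SUSPICIOUS_PARENTS and child in SUSPICIOUS_CHILDREN:
            alerts.append(event)
    return alerts


if __name__ == "__main__":
    sample_events = [
        {"parent_image": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
         "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"},
        {"parent_image": r"C:\Windows\explorer.exe",
         "image": r"C:\Windows\System32\notepad.exe"},
    ]
    for alert in flag_suspicious_process_events(sample_events):
        print("ALERT:", alert["parent_image"], "->", alert["image"])
```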

Additionally, the following open-source Red Teaming tools – while no replacement for a human team – are options for SOCs facing budget or prioritization constraints from the C-suite (a brief automation sketch follows the list):

  • APTSimulator: Batch script for Windows that makes it look as if a system were compromised. 
  • Atomic Red Team: Detection tests mapped to the MITRE ATT&CK framework.
  • AutoTTP: Automated tactics, techniques, and procedures. 
  • Caldera: Automated adversary emulation system by MITRE that performs post-compromise adversarial behavior within Windows networks. 
  • DumpsterFire: Cross-platform tool for building repeatable, time-delayed, distributed security events. 
  • Metta: Information security preparedness tool. 
  • Network Flight Simulator: Utility used to generate malicious network traffic and help teams evaluate network-based controls and overall visibility. 
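
For a team that wants to put a couple of these tools to work on a schedule, here is a minimal sketch that drives two of them from Python: it runs an Atomic Red Team test through its documented Invoke-AtomicTest PowerShell function and generates suspicious-looking traffic with Network Flight Simulator's flightsim command. It assumes both tools are already installed, the chosen test and module names are illustrative, and it should only ever be pointed at systems you are explicitly authorized to test; check each project's documentation before running anything.

```python
# Minimal sketch: rerunning two of the open-source tools above from Python so
# a SOC can schedule the same lightweight detection checks. Assumptions: the
# invoke-atomicredteam PowerShell module and the flightsim binary are already
# installed and on PATH, and the host is one you are authorized to test.
# The technique ID and flightsim module below are illustrative choices.

import subprocess


def run_atomic_test(technique_id: str) -> None:
    """Run a single Atomic Red Team test via its PowerShell interface."""
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Invoke-AtomicTest {technique_id}"],
        check=True,
    )


def run_flightsim_module(module: str) -> None:
    """Generate benign-but-suspicious network traffic with flightsim."""
    subprocess.run(["flightsim", "run", module], check=True)


if __name__ == "__main__":
    run_atomic_test("T1059.001")  # Command and Scripting Interpreter: PowerShell
    run_flightsim_module("dga")   # Simulate DGA-style DNS lookups
```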

Benefits of Red Teaming

There are many benefits to security testing of any kind, whether it’s an external consulting firm helping to ensure the strengths of network perimeter defenses or an internal team tasked with uncovering vulnerabilities in DevSecOps processes.

As it pertains to penetration testing – and Red Team testing in particular – let's take a look at some of the more beneficial outcomes for the security organization and the business at large.

Identify and Prioritize Security Risks

Forrester found that undertaking Red Team security testing typically results in a 25% reduction in security incidents and a 35% reduction in the cost of security incidents. Needless to say, these reductions can have significant implications on the overall resilience and ROI of the security organization.

Target Only Necessary Upgrades

Instead of overhauling your entire security program after, say, a breach that caused significant damage and expense, you can use testing scenarios like Red Teaming to pinpoint exactly where to upgrade or shore up defenses and training to prevent a similar or repeat attack.

Get a True Attacker View

One of the major reasons a business finds itself on the defensive is that it simply hasn't taken the time to "step outside the perimeter" and see the organization the way an attacker would. Red Team simulations can provide the necessary data to finally obtain a well-rounded "inside/outside" view of how a SOC protects business operations. With this perspective in hand, security teams can adopt a stronger offensive and defensive posture and be ready for potential threats.

Red Teaming vs. Penetration Testing 

Penetration testing – also known as pentesting – services can be thought of as the umbrella under which Red Team, Blue Team, and Purple Team exercises sit. Opinions vary, but generally, pentesting is the more generic term used before security professionals get more specific in discussing Red Team attack simulations.

But there are some key differences between pentesting and Red Teaming. Pentesting is generally more upfront and visible; the client organization knows it's happening. Red Teaming activity, once the engagement is made official, is meant to stay covert and unknown to the target organization for as long as possible. Here are some additional distinctions:

  • Goal: Pentesting provides vulnerability oversight; Red Teaming tests resilience against attacks.
  • Scope: Pentesting covers a defined subset of systems; Red Teaming follows the attack paths used by threat actors.
  • Controls tested: Pentesting exercises preventive controls; Red Teaming exercises detection and response controls.
  • Testing method: Pentesting favors efficiency over realism; Red Teaming aims for a realistic simulation.
  • Testing techniques: Pentesting maps, scans, and exploits; Red Teaming uses the TTPs of selected threat actors.
  • Post-exploitation: Pentesting traditionally limits post-exploitation actions; Red Teaming focuses on critical assets and functions.

So, is one option better than the other? Often pentesters and Red Teamers are the same security professionals, using different methods and techniques for different assessments. The true answer is that one is not necessarily better than the other; rather, each is useful in certain situations.

Difference Between Red Team, Blue Team, and Purple Team

We've defined and discussed Red Teaming at length so far. To distinguish the practice from the other color-coded security exercises, let's circle back to some basic definitions and gain a proper understanding of Red Team vs. Blue Team vs. Purple Team (and yes, purple is a mix of red and blue, but the function of the team isn't quite as simply explained):

  • Red Team: Stealthily tests an organization's defensive processes and coordination. 
  • Blue Team: Understands attacker TTPs and designs defenses accordingly. 
  • Purple Team: Enhances information sharing between the Red and Blue Teams and ensures both are cooperating.

The biggest challenge to effective purple teaming is helping the blue and red teams overcome the competitiveness that can exist between them. Team Blue doesn’t want to give away how they catch bad guys, and Team Red doesn’t want to give away the secrets of the attack.

But by breaking down these walls, you can show the Blue Team how they can become better defenders by understanding how the Red Team operates. And you can show the Red Team how they can enhance their effectiveness by expanding their knowledge of defensive operations in partnership with the Blue Team.

Purple Teaming helps to enable a combined Red Team/Blue Team approach that empowers a security team to test controls while under a simulated, targeted attack. 

How to Build an Effective Red Team

It is, of course, not as simple as randomly assigning individual SOC staffers to a Red, Blue, or Purple Team. When attempting to build an effective Red Team, it’s critical to: 

  • Foster a culture of innovation: Attack paths and the methodologies by which attackers exploit them are changing and evolving all the time, thus Red Teamers must be encouraged to do the same.  
  • Define objectives: Whether the target is internal or an external customer network, objectives must be defined and agreed upon prior to kickoff. This can also aid in populating the Red Team with the right skill sets for the mission at hand. 
  • Acquire the right tools: Don't just throw every tool in the shed at the objective. If the job doesn't call for leveraging, for example, a threat intelligence tool, then don't spend the money. 
  • Adopt an attacker mentality: The job is no longer to protect the network; it's to attack it. Every person on the Red Team should adopt this attitude going into the mission; anything less does a disservice to the customer, whether internal or external.

Read More

Penetration Testing: Latest Rapid7 Blog Posts