How We Think about Red Teaming
TL;DR: Red teaming means different things to different vendors. We discuss how SpecterOps defines it, why engagements start from assumed breach, and what organizations should expect to learn and take away.
The terminology problem: not all “red teams” are the same
Organizations today face adversaries that chain together misconfigurations, privileges, and identity exposures across hybrid environments, often moving faster than defenses can adapt. In response, “red team” has become a catch-all term. Some vendors use it interchangeably with penetration testing. Others mean full adversary simulation. Still others mean compliance-driven assessments with a different label. This ambiguity makes it difficult for buyers to know what they’re getting.
Here at SpecterOps, we think of red teaming as rooted in the adversarial mindset: challenging assumptions and improving detection and response capabilities by simulating realistic attack operations. It’s not just an assessment, but a training opportunity for the security team to observe, respond, and learn under realistic conditions. This definition has implications for how we structure engagements in practice, starting with where the assessment begins.
Why we start from assumed breach
Our approach to each red team engagement begins from the assumed breach perspective, intentionally focusing on post-compromise activity where detection and response capabilities are most critical. This is not a shortcut. It is a deliberate choice based on the reality that breaches are a matter of when, not if.
Starting from initial access (for example, through phishing or external exploitation) can consume a significant portion of an engagement without answering the most important questions. While these approaches may demonstrate that access is possible, they often come at the expense of understanding what happens after an attacker gains access. Initial access testing has its place and is something our teams will perform when validating external exposure is the primary goal. But the engagements that produce the most actionable insight start from a different question: what happens after an attacker gets in?
By beginning from an assumed breach, the engagement focuses on what actually determines impact: how effectively the organization can detect, contain, and respond to adversary behavior once an attacker establishes access. This puts the emphasis on the questions that matter most: would you detect them, and, if so, could you respond effectively? This reflects a broader reality in security: preventing all initial access is not a realistic objective. As former NSA director Michael Hayden put it, “Fundamentally, if somebody wants to get in, they’re getting in…accept that.”
Why objectives matter more than scope
We design engagements collaboratively with clients around defined objectives that reflect what matters most to the organization, such as access to sensitive data, critical systems, or high-value identities. These objectives are intentionally designed to demonstrate real impact, not just theoretical risk. This ensures the engagement answers meaningful questions rather than simply identifying as many issues as possible.
Rather than open-ended exploration, this approach focuses effort on answering specific questions about detection and response capability and translating findings into business risk. Because objectives center on business-critical operations and resources, findings are tied to actual risk rather than theoretical vulnerabilities. This structure also highlights an important distinction: not all offensive assessments are designed to answer the same questions.
This objective-driven structure is also what makes the approach durable. As organizations deploy new technologies, like AI systems connected to business data, internal tooling, and cloud services, the attack surface expands. But the fundamental questions stay the same: what are your most critical assets, and could an attacker reach them without being detected? (For a deeper look at how this plays out in AI deployments specifically, see our post: AI Red Teaming Still Comes Back to Identity, Access, and Attack Paths.)
Red teaming vs. penetration testing: what’s actually different
Both services use similar adversary tradecraft and assumed breach methodology. The difference is in what’s being tested:
- Penetration testing focuses on identifying and exploiting attack paths. Evasion is removed as a variable to maximize efficiency in finding vulnerabilities.
- Red team engagements add evasion and operate without defender awareness. The goal shifts from “Can this be exploited?” to “Would you detect and respond to an attacker exploiting this?”
This distinction matters because it determines what you actually learn from the engagement. Penetration testing reveals exposure. Red teaming reveals whether your detection and response capabilities would catch an attacker pursuing that exposure. It also shows whether you can effectively contain and remove an attacker from your environment under realistic conditions, not just in theory.
What you should get out of a red team engagement
We design our reports to demystify adversary tradecraft and provide evidence that drives improvement, rather than simply documenting findings.
Each engagement includes an attack path narrative that shows how an adversary moved through the environment and what tools they used, supported by a chronological timeline of activity with timestamps and indicators of compromise. Findings focus on root causes with demonstrable impact, not just symptoms, and include the technical depth needed to support detection engineering and response improvement.
The goal is to provide actionable information that improves the security program, not a list of vulnerabilities without context. This way, teams can act on what matters and not just what an assessment team found.
Red teaming is part of continuous security improvement
Red team engagements are not point-in-time assessments. They are part of a continuous approach to security improvement, where organizations evaluate real-world risk, develop their capabilities, and validate progress over time.
But identifying gaps is only part of the equation. The ability to detect and respond effectively depends on how well teams can execute under pressure, something that can’t be built through assessment alone. In the next post, we’ll explore why practicing response before an incident occurs is critical to building that capability.
When red teaming is the right investment
Red team engagements are most valuable when organizations have established detection and response capabilities and want to understand how those capabilities perform against realistic adversary behavior.
Organizations still building foundational visibility, control coverage, or response processes may benefit more from penetration testing or purple team assessments, where the focus is on identifying exposure or validating controls before evaluating full detection and response performance.
Indicators that an organization may be ready for red teaming include having:
- A mature vulnerability management program
- Prior experience with penetration testing
- A security program that has already addressed common or easily exploitable weaknesses
At this stage, the focus shifts from finding basic issues to evaluating how effectively the organization can detect and respond to more complex attack scenarios.
Red teaming is only one part of understanding and improving detection and response capabilities. In the next blog, we’ll focus on response as a performance discipline, and how organizations can build confidence in their ability to detect and respond under real conditions.