A red team assessment is one of the highest risk/reward security operations you will ever be involved in.
Red team assessments happen in production environments with a relatively open scope, and your defenders, aka "the blue team," are not supposed to know one is taking place.
A flawed red team assessment can do real damage, putting business continuity, your organisation's reputation, and even your career at risk.
But when red teaming goes right, as it should, the payoffs can be huge.
With red teaming, you can learn how an advanced threat actor might compromise a payment system, gain access to and exfiltrate sensitive information, or hold your business to ransom.
This can give your security efforts a massive boost by showing you dangerous attack pathways. It can also demonstrate to non-technical decision-makers why more security investment might be needed.
To help you reduce red teaming risks and drive ROI, this blog post outlines six things that will feature in any successful red team assessment.
A Realistic, Risk-Managed Test Plan

The attacks in a red teaming exercise mimic attacks that real threat actors have performed in the past. Before an exercise takes place, the red team will need to put together a "test plan" outlining how they will conduct the attacks in a professional, risk-managed way whilst sticking to a fixed timeline.
When you contract a red team, you also need to have some essential prerequisites in place: a white team to manage the assessment, a blue team to test against, and a systematised security posture to attack and improve.
Beyond these red teaming and security basics, you might also want to conduct some preliminary exercises to test specific areas of your environment.
This can include malware resilience testing, where you take a workstation build and run test cases to see which existing applications and programs potential threat actors could leverage, and what traffic originating from outside your network could potentially control the workstation.
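As an illustration of the kind of test case involved (a minimal sketch, not a description of any vendor's actual methodology), one resilience check is simply enumerating which commonly abused "living-off-the-land" binaries ship on a workstation build. The binary list below is an illustrative assumption, not an exhaustive inventory:

```python
from pathlib import Path

# Commonly abused "living-off-the-land" binaries (illustrative, not
# exhaustive); these normally ship with Windows under System32.
LOLBINS = ["certutil.exe", "mshta.exe", "regsvr32.exe",
           "rundll32.exe", "bitsadmin.exe"]


def present_lolbins(system32=Path(r"C:\Windows\System32")):
    """Return the subset of LOLBINS found on this workstation build."""
    return [name for name in LOLBINS if (system32 / name).exists()]


if __name__ == "__main__":
    for name in present_lolbins():
        print(f"present: {name}")
```

A real malware resilience test goes much further (execution policies, EDR response, outbound traffic), but a presence check like this is a cheap first signal of how much native tooling an attacker would inherit.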
Or it could mean Active Directory configuration reviews that analyse user management and access control to identify misconfigurations in the internal network.
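Parts of such a review can be automated. As a hedged sketch, here is one pass over a hypothetical export of user attributes, flagging risky `userAccountControl` bits (the flag values are from Microsoft's documentation; the sample accounts are made up):

```python
# userAccountControl bit flags (values per Microsoft's documentation)
PASSWD_NOTREQD = 0x0020         # account may have an empty password
DONT_EXPIRE_PASSWORD = 0x10000  # password never expires
DONT_REQ_PREAUTH = 0x400000     # Kerberos pre-auth disabled (AS-REP roastable)

RISKY_FLAGS = {
    PASSWD_NOTREQD: "password not required",
    DONT_EXPIRE_PASSWORD: "password never expires",
    DONT_REQ_PREAUTH: "Kerberos pre-authentication disabled",
}


def audit_users(users):
    """Yield (sAMAccountName, issue) for each risky flag set on each user."""
    for user in users:
        uac = user["userAccountControl"]
        for flag, issue in RISKY_FLAGS.items():
            if uac & flag:
                yield user["sAMAccountName"], issue


# Hypothetical export: 0x200 is NORMAL_ACCOUNT; extra bits are OR-ed on.
sample = [
    {"sAMAccountName": "svc-backup", "userAccountControl": 0x200 | 0x10000},
    {"sAMAccountName": "jsmith", "userAccountControl": 0x200},
]
for account, issue in audit_users(sample):
    print(f"{account}: {issue}")
```

A full review also covers group memberships, delegation settings, and ACLs, but flag audits like this routinely surface long-forgotten service accounts.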
Phishing simulations alongside the above can also be useful, especially when you can match phishing email open rates to malware deployment percentages.
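Matching those two metrics is simple funnel arithmetic. A minimal sketch with hypothetical campaign figures (all numbers invented for illustration):

```python
def funnel_rates(sent, opened, payload_executed):
    """Convert raw phishing-simulation counts into percentage rates."""
    open_rate = 100 * opened / sent
    # Deployment rate relative to opens shows how often an opened
    # email actually led to a (simulated) malware execution.
    deploy_rate = 100 * payload_executed / opened if opened else 0.0
    return open_rate, deploy_rate


# Hypothetical campaign: 200 emails sent, 58 opened, 13 ran the payload.
open_rate, deploy_rate = funnel_rates(200, 58, 13)
print(f"open rate: {open_rate:.1f}%, "
      f"deployment rate among opens: {deploy_rate:.1f}%")
```

Tracking the deployment rate separately from the open rate tells you whether the weak point is user awareness or endpoint controls.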
These kinds of exercises, which an organisation like SECFORCE can manage for you, can give you a dry run of what happens during red teaming and test the types of reactions you're likely to get.
Operational Flexibility

Red teaming is not a set-and-forget activity or service. You need to be prepared to manage a red teaming engagement flexibly to reduce risks and get value.
For example, after launching a wave of attacks, a red team might not be able to get into your network. This could be a significant finding, but it's not a reason to stop a red team assessment. In this case, you must be prepared to give the red team a "leg up."
It's part of the white team's job to keep the test moving, so in the above example, they could consider giving the red team the same access a credentialed user would have (i.e., an insider threat).
However, because a red team relies on secrecy (see the next point), any "leg-ups" must happen without alerting your defenders.
To allow for this kind of pivoting, you must build operational flexibility in advance and plan for these eventualities.
Secrecy

A typical red teaming assessment lasts around five weeks of testing, followed by one week of reporting.
During most of this time, it's critical not to tell anyone outside the white team (and your third-party service providers whose services might be involved) that a test is happening.
For a red teaming exercise to simulate an advanced threat actor accurately, the white team should tell the blue team that a red team exercise is underway only if it is critically necessary.
There is typically no need to disclose that a phishing attack or network entry attempt is taking place unless something extreme, like red teamers getting arrested (which actually happened in one case in 2020), occurs.
Generally, the only time it is necessary to inform the blue team that a red teaming exercise is happening is when something creates legal or operational risks.
The red team will eventually reveal itself to the blue team when they deem it necessary. This is usually towards the end of the assessment period but can happen sooner if a critical attack vector emerges or after consistent detection and loss of access for the red team.
Daily Communication

Daily communication between the red team and your white team is one of the most effective risk-reduction actions you can take during a red team assessment.
You should expect daily briefings from your red team that focus on the actions taken during the previous day, the actions that will happen today, and the risks involved with these actions.
Risk management is critical here. Your white team needs to be fully informed about the red team’s activities. By reviewing planned actions, the white team can ensure that the red team is not about to do anything that might harm your organisation’s operational stability.
Your white team also needs to be contactable 24/7 by the red team during the assessment period and be able to manage situations like a red teamer compromising your CEO's workstation during an important client meeting.
A Narrative Report
If you've ever read a pen testing report, then a red teaming assessment report, with its different sections for technical and executive audiences, will be familiar.
What makes a red teaming assessment unique, however, is its description of an "attack narrative". This is essentially a story of how the red teaming assessment unfolded, including a timeline from the red team's perspective.
This might read something like, “We started from an external perspective with phishing, got in, enumerated data, used that data to impersonate an account, then jumped to an adjacent system, etc.”
In a red teaming report, each phase of the assessment will be covered in extensive detail alongside findings, roadblocks (detections/preventions), and screenshots.
The attack narrative is easy to follow but, at the same time, contains all the information the reader needs.
A Trusted Red Teaming Vendor
Having the technical capability to perform the attacks of a threat actor is a key element that distinguishes good red teams from bad ones.
However, red teaming is ultimately a human-led process.
The people your red team assessment provider staffs on your test also need to understand all of the above, be experienced in offensive security, be certified to CREST standards, and be willing to do whatever it takes to deliver you value.