Triaging detection requirements

In this section, we'll discuss the steps that should be taken and the criteria to consider when prioritizing requirements. Triage is an important phase of the detection engineering lifecycle because not all detection requirements have the same impact on the organization's defenses, so we must prioritize our efforts toward those that will provide the most value. If engineers are handed an unprioritized list of detection requirements, the team risks missing the requirements that could prevent a major attack because everyone works on what they feel like rather than what is best for the organization.

In Chapter 2, we mentioned four criteria that factor into triaging requirements:

  • Threat Severity
  • Organizational Alignment
  • Detection Coverage
  • Active Exploits

For each detection requirement that comes in, we need to evaluate how it is affected by the above four factors in order to determine the return on investment (ROI) of implementing a detection for the requirement. To simplify converting a triage assessment to a ticket priority, we are going to implement a simple scoring scale for each category and sum those up for a cumulative score that can be used to determine which tickets should be worked on first. First, let’s introduce the scoring scale of each category.
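To make this concrete before we walk through each category, here is a minimal sketch of how a triage assessment could be captured in code. This is an illustrative assumption rather than tooling from the book: the TriageScores class and its field names are made up, and the active exploit fields are optional because, as discussed later, they only apply when a specific exploit is involved.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TriageScores:
    """Scores assigned to a single detection requirement during triage."""
    threat_severity: int                       # 1-3, per Table 5.3
    organizational_alignment: int              # 0-3, per Table 5.4
    detection_coverage: int                    # 0-2, per Table 5.5
    exploit_relevance: Optional[int] = None    # 0-2, per Table 5.6 (exploit-based requirements only)
    exploit_prevalence: Optional[int] = None   # 1-3, per Table 5.7 (exploit-based requirements only)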

Threat severity

In Chapter 2, we provided the following definition related to threat severity:

“The threat severity relates to the impact of the threat if it is not detected. The greater this impact, the higher the severity.”

Therefore, when assessing a detection requirement's threat severity score, we are considering how severe the impact would be if the threat were not detected and the attacker successfully carried out their activities. Table 5.3 provides scoring criteria for the Threat Severity category:

Score | Description | Example
1 | The threat is passive but could lead to further malicious activity. | Reconnaissance scans against public-facing servers: common activity but not inherently malicious until the threat actor abuses the findings.
2 | The threat is actively in the environment but presents a low risk at this stage in the kill chain. | A phishing email is malicious, but without the context of the follow-up actions, it is not a critical threat by itself.
3 | The threat presents a severe risk to the organization. | Ransomware payload execution: if a ransomware payload executes, it risks taking down critical components of the organization and having a major financial impact.

Table 5.3 – Threat severity scoring

As can be seen from the definitions, we define threat severity based on the specific activity being detected, not the context of a wider attack.

Organizational alignment

Using organizational alignment as a method for triaging threats involves understanding how external threats' motivations, targets, capabilities, and infrastructure overlap with the industry, organization, infrastructure, users, and data being protected. Threat intelligence should be used to identify which threats align most closely with your organization's profile.

Score | Description | Example
0 | The threat is irrelevant to the organization. | A threat targeting macOS endpoints in a Windows-only environment.
1 | The threat is likely not going to target your organization, but the risk still exists. | A threat actor primarily focused on targeting a geography outside your own but known to target your industry.
2 | The threat is widespread/untargeted. | A mass Emotet malspam campaign.
3 | The threat specifically targets your organization. | A known threat based on internal observations, or a threat actor known to target your geography and industry.

Table 5.4 – Organizational alignment scoring

Filling out this table is likely to be more challenging than the one for threat severity, since threat severity is determined without organizational context. Organizational alignment requires you to first understand your environment and then, based on research, determine how relevant a given threat is within that context.

Detection coverage

Prioritizing detection requirements has a clear relationship with which detections would provide the greatest return on investment for the organization. Part of this is determining whether we already have coverage for all or part of a requirement. Obviously, if we already have in-depth coverage for the requirement, it shouldn't even be in the queue. Creating a detection for something we can't currently detect receives a higher score than improving the detection rate of an existing detection, since its impact on our overall coverage is greater.

Score | Description
0 | In-depth coverage is already provided for this specific technique.
1 | This technique requires an update to the scope of an existing detection.
2 | No coverage for this requirement exists. A new detection is required.

Table 5.5 – Detection coverage scoring

With the detection coverage score documented, we can either move on to calculating the cumulative priority score or, if an exploit is involved, first determine the Active Exploits score.

Active exploits

The Active Exploits score should only be used in the scoring process if the detection requirement involves detecting a specific exploit. For this category, we have to consider two factors, which means we have two scoring tables: the relevance of the exploit to the organization and the prevalence of the threat.

First, identify whether the organization is vulnerable to the exploit based on which technologies and software versions are affected:

Score | Description
0 | The organization is not vulnerable to the exploit.
1 | The organization is vulnerable to the exploit, but the turnaround time for a patch is quick.
2 | The organization is vulnerable, and a patch is unavailable or will not be deployed soon.

Table 5.6 – Active exploit (relevance) scoring

While scores of 1 and 2 both relate to an organization being vulnerable, we differentiate by whether a patch is available and will be implemented soon, as this greatly impacts priority. If we know a patch is coming to mitigate the risk soon, that could be the difference between working on this detection requirement and another one. By the time we develop, test, and deploy a detection, it's possible that a patch will have already been released. As such, we want to ensure that we don't just think about the threat at this exact moment but also whether it will still be relevant by the time the detection is released.

One thing not included in the scoring criteria for an organization's vulnerability, but worth considering, is that if only a specific geographic region or industry other than your own is being targeted with the exploit, the requirement may be treated as irrelevant. In that case, you should assess the likelihood of the exploit becoming more widely leveraged based on the next scoring table. Specifically, this table focuses on how likely it is that the vulnerability will be exploited, based on public reporting of the availability of exploit code and observed activity.

Score | Description
1 | No exploit code or in-the-wild activity has been observed.
2 | Some in-the-wild activity has been observed, but no public exploit code is available.
3 | Exploit code is publicly available and actively being used by threat actors.

Table 5.7 – Active exploit (prevalence) scoring
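If you track these assessments in tooling, the two active exploit tables can be mirrored as simple lookup dictionaries so reviewers can see the definition behind each score. This is only an illustrative convention; the dictionary names are made up:

# Illustrative lookups mirroring Tables 5.6 and 5.7.
EXPLOIT_RELEVANCE = {
    0: "The organization is not vulnerable to the exploit.",
    1: "Vulnerable, but the turnaround time for a patch is quick.",
    2: "Vulnerable, and a patch is unavailable or will not be deployed soon.",
}
EXPLOIT_PREVALENCE = {
    1: "No exploit code or in-the-wild activity has been observed.",
    2: "Some in-the-wild activity observed, but no public exploit code available.",
    3: "Exploit code is publicly available and actively used by threat actors.",
}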

Now that we understand how to score each of these factors, we can dive into combining the results into a priority to close out the Triage phase.

Calculating priority

Assuming you’ve assigned a score for each category above, we’re going to perform a simple sum, as shown in Figure 5.3, to assign a priority score for the entire detection requirement.

Figure 5.3 – Priority score formula

Before we calculate the score, though, we first want to make sure there are no categories with a score of 0, as this indicates the detection requirement should be rejected instead.

The following cases of a 0 score indicate that the detection requirement should be marked as irrelevant with context as to why and returned to the requestor:

  • If organizational alignment receives a score of 0, the detection requirement can be rejected as this means that the organization is not at risk for the given threat
  • If detection coverage receives a score of 0, this means that the detection requirement is already covered and should be returned to the requestor, referring them to the existing detection
  • If the relevance of active exploits receives a score of 0, it means that the organization is not vulnerable to the related exploit so a detection is not required

Assuming there are no zero scores, the priority can be calculated by summing all the scores together. Let's look at three examples of this being implemented at a tech company in the United States to show how it can help us triage requirements (a short code sketch of the calculation follows the examples):

  • Detection Requirement #1: The SOC is requesting that a detection be put in place for an Emotet malspam campaign they are actively observing. They report that a remote access Trojan (RAT) is installed on the host at the end of the infection chain:
    • Threat Severity: 3. The RAT will allow for remote code execution, which is a severe threat.
    • Organizational Alignment: 2. Emotet malspam is a widespread attack and not specifically targeting the organization.
    • Detection Coverage: 2. Let’s assume no detections currently exist for this threat.
    • Active Exploits: N/A.
    • Priority Score: 7
  • Detection Requirement #2: The intel team is requesting a detection based on their research into a new threat actor primarily targeting utilities in the United States; however, the intel team believes that this group might change their targeting to include other verticals such as tech. The primary known TTP associated with this group is credential theft via phishing:
    • Threat Severity: 2. Credential theft is early in the kill chain and we don’t know what the attacker plans to do with the stolen credentials. Without the context of additional kill chain actions, we should leave this threat severity at 2.
    • Organizational Alignment: 1. While the threat actor has not directly attacked tech companies in the United States yet, they are active in the region in adjacent verticals and the intel team assesses that they may target the organization in the future.
    • Detection Coverage: 1. Let’s assume we have a detection for some of the phishing TTPs reported by intel, but there are several threat characteristics they want added to our detections.
    • Active Exploits: N/A.
    • Priority Score: 4
  • Detection Requirement #3: The red team is requesting a detection for exploitation of a recently announced vulnerability in Microsoft Exchange. They've assessed that the organization's Exchange servers are vulnerable and that successful exploitation could allow an attacker to achieve remote code execution. A patch is available, but it is unclear how long it will take to get deployed in the environment. Widespread in-the-wild exploitation has been observed and public exploit code is available:
    • Threat Severity: 3. If the vulnerability is successfully exploited, the threat actor will be able to perform remote code execution, which presents a severe threat.
    • Organizational Alignment: 2. The exploitation targets Microsoft Exchange regardless of the specific organization involved, so it is essentially an untargeted/widespread attack.
    • Detection Coverage: 2. No detections currently exist for this threat since it's a new exploit.
    • Active Exploits:
      • Relevance: 2. The red team has validated that the organization is vulnerable and is unsure of when a patch will be deployed.
      • Prevalence: 3. The red team reports that exploit code is publicly available and attacks are being seen in the wild.
    • Priority Score: 12
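As a sanity check on the arithmetic above, here is a minimal sketch of the priority calculation, reusing the hypothetical TriageScores class introduced earlier in this section. The function name and the rejection behavior (returning None plus a reason) are illustrative choices rather than prescribed tooling:

from typing import Optional, Tuple

def calculate_priority(scores: TriageScores) -> Tuple[Optional[int], str]:
    # A score of 0 in any applicable category means the requirement is rejected.
    if scores.organizational_alignment == 0:
        return None, "Rejected: the organization is not at risk from this threat"
    if scores.detection_coverage == 0:
        return None, "Rejected: already covered by an existing detection"
    if scores.exploit_relevance == 0:
        return None, "Rejected: the organization is not vulnerable to the exploit"

    total = (scores.threat_severity
             + scores.organizational_alignment
             + scores.detection_coverage)
    # Active exploit sub-scores only contribute when an exploit is involved.
    if scores.exploit_relevance is not None:
        total += scores.exploit_relevance
    if scores.exploit_prevalence is not None:
        total += scores.exploit_prevalence
    return total, "Accepted"

# The three example requirements scored above:
emotet = TriageScores(threat_severity=3, organizational_alignment=2, detection_coverage=2)
new_actor = TriageScores(threat_severity=2, organizational_alignment=1, detection_coverage=1)
exchange = TriageScores(threat_severity=3, organizational_alignment=2, detection_coverage=2,
                        exploit_relevance=2, exploit_prevalence=3)

print(calculate_priority(emotet))     # (7, 'Accepted')
print(calculate_priority(new_actor))  # (4, 'Accepted')
print(calculate_priority(exchange))   # (12, 'Accepted')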

Now let’s assess what these priority scores tell us. Put simply, the higher the score, the sooner the requirement should be worked on. In the specific requirements discussed above, our scoring indicates that Detection Requirement 3 (Exchange exploit activity) should be worked on first, then Detection Requirement 1 (Emotet malspam campaign), and finally, Detection Requirement 2 (new threat actor).

This order makes sense when performing a non-mathematical assessment. First, we get a detection in place for a widespread, actively exploited vulnerability that we currently have no way to detect. Once a vulnerability has public exploit code and major attention, a significant number of threat actors will try to take advantage of it. After that, we can work on the active Emotet campaign. Emotet is widespread and untargeted but can still have an impact, just potentially without the same aggressive and sudden activity we expect around a widely exploited vulnerability. Lastly, we can work on the intel team's request, but it can go into the backlog since we already have some relevant detections, the threat actor has not yet attacked organizations in our vertical, and we don't know what the final impact of their attack would be. While there may be subjectivity in some cases due to additional factors relating to managerial decisions on what to prioritize, the formula above provides some initial guidance.

In your ticketing system, set the ticket priorities accordingly. Even if the system doesn't use an integer-based score, the cumulative scores can help you place tickets into a low, medium, high, or critical range. Table 5.8 provides a mapping of scores to priority levels.

Score | Priority Level
1-3 | Low
4-6 | Medium
7-9 | High
10+ | Critical

Table 5.8 – Priority levels by score
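A small helper like the following, again just an illustrative sketch of Table 5.8 rather than part of any particular ticketing system, can translate a cumulative score into a priority level:

def priority_level(score: int) -> str:
    # Map a cumulative triage score to a ticket priority level (Table 5.8).
    if score >= 10:
        return "Critical"
    if score >= 7:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

# Using the example requirements scored earlier:
print(priority_level(12))  # Critical - Exchange exploit activity
print(priority_level(7))   # High - Emotet malspam campaign
print(priority_level(4))   # Medium - new threat actor phishing TTPs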

This completes the Triage phase and allows us to move on to the Investigate phase, performing the rest of the lifecycle for each detection requirement in the order determined by the priority scoring.
