Introduction to endpoint evidence collection

Evidence collection is not a standalone process. It is part of the forensic evidence life cycle, which originates from classical digital forensics and consists of the following steps:

  1. Collection: This is a set of procedures, tools, and techniques used for quick and efficient identification and acquisition of evidence from computers, servers, or mobile devices.
  2. Review: After collection, the evidence undergoes a preliminary review to assess its relevance and quality. This helps in determining whether the evidence can support or refute the claim or suspicion under investigation.
  3. Chain-of-custody: This involves documenting every individual who handled the evidence and what alterations, if any, were made. Proper chain-of-custody ensures that the evidence has not been tampered with and establishes its provenance (a record-keeping sketch follows this list).
  4. Documentation: This involves creating a detailed record of the evidence and the circumstances under which it was collected, reviewed, and analyzed. Proper documentation reinforces the integrity of the investigative process and establishes a record that can be audited later for accountability.
  5. Analysis: This covers parsing, extracting, and examining the data. Forensic analysts use various techniques and tools to scrutinize the evidence deeply. The aim is to draw conclusions that are both scientifically sound and legally admissible.
  6. Preservation: Once the analysis is complete, the evidence must be preserved in a secure environment to prevent tampering, decay, or loss. Proper preservation methods depend on the type of evidence but can include secure digital storage or climate-controlled physical storage.
  7. Retention: Evidence needs to be retained for a period as defined by legal requirements or organizational policies. This is especially important because appeals or additional investigations might require the re-examination of the evidence.
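
To make the chain-of-custody and documentation steps above more concrete, here is a minimal sketch of how such a record could be kept programmatically; the field names, log file name, and evidence identifier are illustrative assumptions rather than a prescribed format.

```python
import getpass
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CustodyRecord:
    """A single entry in the chain-of-custody log for one evidence item."""
    evidence_id: str    # identifier of the artifact, e.g., the archive name
    handler: str        # who handled the evidence
    action: str         # collected / reviewed / analyzed / transferred
    timestamp_utc: str  # when the action took place
    notes: str = ""     # what, if anything, was altered

def append_custody_record(log_path: str, record: CustodyRecord) -> None:
    # An append-only JSON Lines file keeps an auditable history of every handler.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

append_custody_record(
    "custody_log.jsonl",
    CustodyRecord(
        evidence_id="HOST01_triage.zip",
        handler=getpass.getuser(),
        action="collected",
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    ),
)
```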

The process that we have just described is forensically sound and will facilitate not only a successful internal incident response but also any escalation to law enforcement to open a legal case.

The collected evidence must meet the following criteria:

  • Authenticity: The original source must be documented and must follow the chain-of-custody process.
  • Reliability: The collected data must be consistent and acquired by using a forensically sound methodology.
  • Integrity: The data must not be tampered with. Timestamps and cryptographic hash sums (SHA-1, SHA-256, and so on) must be provided (see the sketch after this list).
  • Relevance: The contextual information surrounding the evidence (time, location, individuals involved, and so on) must be clearly defined and pertinent to the case at hand, helping to prove or disprove a fact in question.
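
To illustrate the integrity requirement, here is a minimal sketch that computes SHA-1 and SHA-256 digests of a collected artifact using Python's standard hashlib module; the file path is only a placeholder.

```python
import hashlib
from pathlib import Path

def hash_artifact(path: Path, chunk_size: int = 1 << 20) -> dict:
    """Compute SHA-1 and SHA-256 digests of a file, reading it in chunks."""
    sha1, sha256 = hashlib.sha1(), hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            sha1.update(chunk)
            sha256.update(chunk)
    return {"sha1": sha1.hexdigest(), "sha256": sha256.hexdigest()}

# Record the digests alongside the collection time in the evidence documentation.
print(hash_artifact(Path(r"C:\Windows\System32\drivers\etc\hosts")))
```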

Speaking of shortcuts, the forensic evidence life cycle of an internal incident response process should look as follows:

  1. Data collection from the specified endpoints:
    • Online collection, or offline collection if the host is detached from the network
    • Logging of all activities, such as the following:
      • Date and time of data collection
      • Target host details
      • List of collected artifacts, including their full paths, metadata (all timestamps), and hash sums (SHA-1, SHA-256, and so on)
      • The user account that triggered the collection
    • Packing of the output into an archive or another forensic format at a specified path (a manifest sketch follows this list)
  2. Documentation can be performed later based on the tool log.
  3. Chain of custody should be implemented by design. This means that all analysis activity of the artifacts by responsible team members should also be recorded, as well as the analysis results.
  4. Ensure data preservation by copying collected evidence to a separate share to protect from tampering.
  5. Immediate parsing actions can be performed on the analyst workstation after the triage has been received. The more automation, the better.
  6. The evidence should usually remain available for the next three years or longer. Subject to local regulations and law enforcement processes, incident response reports may need to be retained indefinitely.
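
The following is a minimal sketch of the packing and logging steps mentioned in the list above: it hashes a few artifacts, records their metadata in a manifest, and packs everything into an archive. The paths, artifact list, and manifest layout are assumptions for illustration only; dedicated collectors handle locked files and forensic formats far more robustly.

```python
import getpass
import hashlib
import json
import socket
import zipfile
from datetime import datetime, timezone
from pathlib import Path

# Placeholder target list; real triage targets (event logs, registry hives) are often
# locked on a live system and are read via raw disk access by dedicated collectors.
ARTIFACTS = [Path(r"C:\Windows\System32\drivers\etc\hosts")]
OUTPUT = Path(r"E:\evidence") / f"{socket.gethostname()}_triage.zip"  # not on the suspect C: drive

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

manifest = {
    "collected_at_utc": datetime.now(timezone.utc).isoformat(),
    "hostname": socket.gethostname(),
    "collected_by": getpass.getuser(),
    "artifacts": [],
}

OUTPUT.parent.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(OUTPUT, "w", compression=zipfile.ZIP_DEFLATED) as archive:
    for artifact in ARTIFACTS:
        stat = artifact.stat()
        manifest["artifacts"].append({
            "path": str(artifact),
            "size_bytes": stat.st_size,
            "modified_utc": datetime.fromtimestamp(stat.st_mtime, timezone.utc).isoformat(),
            "sha256": sha256_of(artifact),
        })
        archive.write(artifact, arcname=artifact.name)
    # The manifest travels with the evidence and supports later documentation.
    archive.writestr("manifest.json", json.dumps(manifest, indent=2))
```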

Note

There is one more important rule of data collection: never store the output on the same endpoint, especially on the C: drive. Remember: the fewer manipulations made on a suspected device, the higher the chances of successful analysis. The best practice is to set up a network share or an SFTP file server, or to attach an external drive, and put the artifacts there. In the case of simultaneous data collection from multiple endpoints, network bandwidth utilization must be considered. Usually, a triage collection of artifacts weighs between 50 MB and a few GB, depending on the size of the endpoint's event logs (sometimes security, system, and application log sizes are increased by group policies).
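
As a small sketch of that best practice, the collected archive can be copied straight to a network share and sanity-checked afterwards; the UNC path and file names below are hypothetical.

```python
import shutil
from pathlib import Path

triage_archive = Path(r"E:\evidence\HOST01_triage.zip")       # local output on an attached evidence drive
evidence_share = Path(r"\\ir-server\evidence\case-2024-001")  # hypothetical case share

evidence_share.mkdir(parents=True, exist_ok=True)
copied = Path(shutil.copy2(triage_archive, evidence_share / triage_archive.name))

# A size comparison is a basic sanity check; comparing hashes gives stronger assurance.
assert copied.stat().st_size == triage_archive.stat().st_size
```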

So, now we understand the importance of the evidence-collection process and the shortcuts. Let's get into the recipe for building it.

First things first, we need to understand what should be collected. To properly scope it, we might look at the incident detection message's contents. As discussed in the previous chapter, the message should shed light on the affected infrastructure and provide a brief description of the finding. There are different types of messages that could be received, regardless of the incident trigger:

  • Potential or confirmed malware activity on the endpoint
  • Suspicious user activity (abnormal logins, policy violations)
  • Suspicious network activity (potential data exfiltration or network discovery) in progress
  • Suspicious emails (incoming or outgoing)
  • Observed data exfiltration or other post-compromise facts (ransomware, a dedicated leak site, business disruption, wiped data, competitor activity coordinated using potentially leaked data, or compromised accounts)
  • Other cases

Based on the message, the analyst should decide which data sources to use and what to extract from them to investigate the case. There are two main types of evidence sources: volatile and non-volatile.

Volatile data is stored in temporary storage areas and is lost once the system is powered off or rebooted. Volatile artifacts are transient and change rapidly even during normal system operation. Examples of such evidence sources are endpoint RAM, network traffic, and hardware caches (CPU, SSD, or microcontrollers). In our practice, there have been a few cases in which an SSD media cache was used during a forensic examination of the drive. However, this is the least-used type of evidence in real-world investigations.

Non-volatile data persists on permanent storage media, such as hard drives or flash storage, even when the system is powered off. This data is more stable and remains until it is deleted or overwritten. Non-volatile sources include the storage devices themselves, as well as individual files stored on the endpoints (event logs, telemetry, files, filesystem metadata, or registry hives), logs or telemetry collected by security controls (network connection logs, Endpoint Detection and Response (EDR) telemetry, or alerts), audit logs (hypervisor, cloud provider, third-party solutions, CRM, ERP, PAM, or backup), and so on.

There are several ways to analyze and collect evidence from both volatile and non-volatile sources. Volatile data can be analyzed in real time or dumped, either fully or partially, for later analysis. Non-volatile data can be collected as part of a triage (a set of specific artifacts chosen by an incident response specialist or pre-defined in the collection tool), a logical disk image, or a physical disk image.

Over the past ten years, in-depth analysis of a set of forensic images during incident response has largely been replaced by relatively lightweight and fast triaging. To stay efficient, we must realize that collecting some digital evidence takes quite some time. Imagine how long it would take to acquire a forensic image of hard disks of 1 TB or more, or the RAM contents of a powerful server (usually above 64 GB); at a sustained 100 MB/s, a 1 TB disk alone takes close to three hours. Imaging starts at hours and sometimes cannot be scaled, while triage collection is usually done within 5-15 minutes and immediately shared for analysis.

Nevertheless, there are still situations where the collection of heavyweight images and dumps is reasonable.

Image acquisition can be required in the following situations:

  • When there is improper visibility of the endpoint. For example, there might be a lack of security controls and data collection tools, which results in multiple back-and-forth communications with the responsible team to request various necessary pieces of data.
  • When there is a lack of processes. For example, this could apply if it takes too long to request data from the responsible team.
  • When there is a chance of threat actors applying defense evasion techniques. For example, they might opt for file deletion, event log wiping, data corruption (a damaged partition table), and so on.

A full dump of RAM contents may help with the following:

  • An inability to scan process memory with custom signatures using existing security controls
  • An inability to scope malicious activity in runtime, for example, when living-off-the-land techniques are used
  • Malicious process injection and lack of visibility with single-process memory dumps
  • Situations where in-memory data structures are required (kernel and user mode)
  • Situations where analysis of a process’ decrypted strings, structures, buffers, APIs, and function calls is required

Network traffic dumps can be used to do the following:

  • Cover the blind spots of network security controls, including a lack of visibility across internal network segments or limited investigation capabilities of the existing solution (yes, we’ve seen a lot!)
  • Confirm a sophisticated, previously unseen attack vector
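
For example, when no dedicated capture tooling is at hand, an ad hoc endpoint-level capture can be started with Windows' built-in netsh trace facility, as in the rough sketch below; the output path is a placeholder, and purpose-built endpoint- or network-level capture solutions are usually preferable.

```python
import subprocess
from pathlib import Path

# Placeholder output path on an attached evidence drive.
trace_file = Path(r"E:\evidence\HOST01_capture.etl")
trace_file.parent.mkdir(parents=True, exist_ok=True)

# Start a packet capture with the built-in netsh trace facility (requires an elevated prompt).
subprocess.run(
    ["netsh", "trace", "start", "capture=yes", f"tracefile={trace_file}"],
    check=True,
)

input("Capturing traffic... press Enter to stop the trace.")

# Stop the capture; the resulting .etl file can be converted for analysis in Wireshark.
subprocess.run(["netsh", "trace", "stop"], check=True)
```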

Now that we know when to use different evidence sources, it is time to talk about the tools we can use to collect the data of interest. There are three groups of tools:

  • Built-in tools (CMD, Windows Management Instrumentation (WMI), PowerShell) can be used to analyze and/or collect data such as active processes, network connections, registry keys, files, or events from event logs (a minimal sketch follows this list)
  • Live response tools (KAPE, LiveResponseCollection, CyLR, FastIR, Velociraptor, and other open source or vendor-supplied tools) allow us to collect triages
  • Imaging tools (AccessData FTK Imager, Encase, and live distributions) are used to create logical or physical disk images, forensic triages, and memory dumps
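
As a minimal sketch of the first group, the snippet below shells out to built-in Windows commands to snapshot running processes, network connections, and logged-on sessions; the output directory and file names are assumptions, and PowerShell cmdlets such as Get-Process or Get-NetTCPConnection can serve the same purpose.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

out_dir = Path(r"E:\evidence\live_response")  # placeholder; avoid writing to the suspect's C: drive
out_dir.mkdir(parents=True, exist_ok=True)
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

# Built-in Windows commands for a quick snapshot of volatile state.
commands = {
    "processes": ["tasklist", "/v"],
    "connections": ["netstat", "-ano"],
    "sessions": ["query", "user"],
}

for name, cmd in commands.items():
    result = subprocess.run(cmd, capture_output=True, text=True)
    (out_dir / f"{stamp}_{name}.txt").write_text(result.stdout, encoding="utf-8")
```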

Depending on the functionality and the way in which these tools operate, we can also split them into several groups, as described in Table 4.1:

Triage:

  • Agentless: This involves connecting via a remote command execution method such as PSRemoting, WinRM, WMI, and so on.
  • Standalone agent: This is an offline or online agent that collects the necessary artifacts by using a predefined list or custom-configured targets.
  • Built into security control: This is used as part of an EDR solution's functionality to grab artifacts at an analyst's request or as part of an automated playbook.

Image:

  • Live acquisition: This is implemented as a system driver that allows users to capture data at the filesystem level or access the raw disk contents. The drawback of this approach is that storage contents are volatile and will be in a different state between the start and end of the acquisition. It is suggested for cases where there will be no escalation to law enforcement, as the chain of custody will be broken due to the altered data sources. Please note that in the case of Virtual Machines (VMs), imaging can easily be performed without interrupting current business operations by creating snapshots or cloning VMs.
  • Live USB: This is a forensic OS distribution installed on a live USB that uses a driver to prevent write operations on the disks. To take an image of the storage medium, the flash media is connected to the computer, the OS is booted from this medium, and a bit-for-bit copy is made of the source of digital evidence.
  • Hardware write-blocker: Acquisition requires shutting down the suspected host or cold-detaching the data storage, plugging it into the write blocker, and performing the imaging with a push-button approach.

Network:

  • Endpoint-level: This works as a standalone tool with a driver that captures raw packet contents and writes them to a file for storage.
  • Network-wide: This uses SPAN, RSPAN, and ERSPAN to forward raw packets from network device (hub, switch, or router) ports to a network traffic analysis solution (Wireshark, Arkime).

Table 4.1 – An overview of evidence collection tools

In the next section, we will provide valuable insights into each category of tools, with examples of their use.
