Post-incident activity
The incident priority may dictate the containment strategy. For example, if you are dealing with a DDoS attack that was opened as a high-priority incident, the containment strategy must be treated with the same level of criticality. It is rare for an incident opened as high severity to be prescribed medium-priority containment measures, unless the issue was somehow resolved between phases.
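This relationship between incident priority and containment priority can be expressed as a simple triage rule. The sketch below is purely illustrative; the severity levels and the resolved-between-phases exception are assumptions drawn from the description above, not a prescribed standard:

```python
# Hypothetical mapping of incident severity to containment priority.
# Real values come from your organization's incident response plan.
SEVERITY_TO_CONTAINMENT = {
    "high": "critical",
    "medium": "medium",
    "low": "low",
}

def containment_priority(incident_severity: str, resolved: bool = False) -> str:
    """Containment inherits the incident's severity, unless the issue
    was already resolved between phases (the rare exception above)."""
    if resolved:
        return "low"
    return SEVERITY_TO_CONTAINMENT.get(incident_severity, "medium")

print(containment_priority("high"))  # critical
```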
Let’s have a look at two real-world scenarios to see how containment strategies, and the lessons learned from a particular incident, may differ depending on incident priority.
Real-world scenario 1
Let’s use the WannaCry outbreak as a real-world example, using the fictitious company Diogenes & Ozkaya Inc. to demonstrate the end-to-end incident response process.
On May 12, 2017, some users called the help desk saying that they were receiving the following screen:
Figure 2.5: A screen from the WannaCry outbreak
After an initial assessment and confirmation of the issue (detection phase), the security team was engaged, and an incident was created. Since many systems were experiencing the same issue, they raised the severity of this incident to high. They used their threat intelligence to rapidly identify that this was a ransomware outbreak, and to prevent other systems from getting infected, they had to apply the MS17-010 patch.
At this point, the incident response team was working on three different fronts: one trying to break the ransomware encryption, another identifying other systems that were vulnerable to this type of attack, and a third communicating the issue to the press.
They consulted their vulnerability management system and identified many other systems that were missing this update. They started the change management process and raised the priority of this change to critical. The systems management team then deployed the patch to the remaining systems.
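A vulnerability management check like this can be sketched as a simple inventory filter. The inventory format below is an illustrative assumption (a real system would query a patch or vulnerability management platform), and the KB numbers are examples of Windows updates that shipped the MS17-010 fix; in practice, the correct KB depends on the OS version:

```python
# Illustrative sketch: find hosts still missing the MS17-010 security update.
inventory = [
    {"host": "srv-01", "installed_updates": {"KB4012212", "KB4012215"}},
    {"host": "srv-02", "installed_updates": {"KB4012215"}},
    {"host": "wks-17", "installed_updates": set()},
]

# Example KBs associated with MS17-010; real deployments must match
# the KB to the specific Windows version.
MS17_010_KBS = {"KB4012212", "KB4012213", "KB4012214"}

def missing_patch(inventory):
    """Return hosts with no MS17-010-related update installed."""
    return [
        asset["host"]
        for asset in inventory
        if not asset["installed_updates"] & MS17_010_KBS
    ]

print(missing_patch(inventory))  # hosts to prioritize for patching
```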
The incident response team worked with their anti-malware vendor to break the encryption and regain access to the data. At this point, all other systems were patched and running without any problems. This concluded the containment, eradication, and recovery phase.
Lessons learned from scenario 1
After reading this scenario, you can see examples of many areas that were covered throughout this chapter and that will come together during an incident. But an incident is not finished when the issue is resolved. In fact, this is just the beginning of a whole different level of work that needs to be done for every single incident—documenting the lessons learned.
One of the most valuable pieces of information that you have in the post-incident activity phase is the lessons learned. These will help you to keep refining the process through the identification of gaps and areas of improvement. When an incident is fully closed, it must be documented. This documentation must be very detailed, outlining the full timeline of the incident, the steps that were taken to resolve the problem, what happened during each step, and how the issue was finally resolved.
This documentation will be used as a base to answer the following questions:
- Who identified the security issue, a user or the detection system?
- Was the incident opened with the right priority?
- Did the security operations team perform the initial assessment correctly?
- Is there anything that could be improved at this point?
- Was the data analysis done correctly?
- Was the containment done correctly?
- Is there anything that could be improved at this point?
- How long did it take to resolve this incident?
The answers to these questions will help refine the incident response process and enrich the incident database. The incident management system should have all incidents fully documented and searchable. The goal is to create a knowledge base that can be used for future incidents. Oftentimes, an incident can be resolved using the same steps that were used in a similar previous incident.
Another important point to cover is evidence retention. All the artifacts that were captured during the incident should be stored according to the company’s retention policy unless there are specific guidelines for evidence retention. Keep in mind that if the attacker needs to be prosecuted, the evidence must be kept intact until legal actions are completely settled.
When organizations start to migrate to the cloud and run a hybrid environment (on-premises with connectivity to the cloud), their IR process may need to go through some revisions to cover the deltas related to cloud computing. You will learn more about IR in the cloud later in this chapter.
Real-world scenario 2
Sometimes you don’t have a very well-established incident, only clues that you are starting to put together to understand what is happening. In this scenario, the case started with support, because it was initiated by a user who said that their machine was very slow, mainly when accessing the internet.
The support engineer who handled the case did a good job isolating the issue and identified that the process Powershell.exe was downloading content from a suspicious site. When the IR team received the case, they reviewed the case notes to understand what had been done. Then they started tracking the IP address from which the PowerShell command was downloading information. To do that, they used the VirusTotal website and got the result below:
Figure 2.6: VirusTotal scan result
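Analysts often automate this kind of reputation lookup. The sketch below assumes the VirusTotal v3 REST API, in which the key is sent in the `x-apikey` header and per-engine verdict counts come back in the `last_analysis_stats` field; the API key and IP address are placeholders:

```python
# Sketch: query VirusTotal's v3 API for an IP address's reputation.
import json
import urllib.request

def ip_verdict_counts(ip: str, api_key: str) -> dict:
    """Fetch the last_analysis_stats counters for an IP from VirusTotal."""
    req = urllib.request.Request(
        f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    return data["data"]["attributes"]["last_analysis_stats"]

def is_flagged(stats: dict) -> bool:
    """Escalate if any engine calls the address malicious or suspicious."""
    return stats.get("malicious", 0) + stats.get("suspicious", 0) > 0
```

A wrapper like `is_flagged` can then gate whether the case is escalated to the IR team rather than closed at the support tier.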
This result raised a flag, and to further understand why this was flagged as malicious, they continued to explore by clicking on DETAILS, which led them to the result below:
Figure 2.7: VirusTotal scan details tab
Now things are starting to come together, as this IP seems to be correlated with Cobalt Strike. At this point, the IR team didn’t have much knowledge about Cobalt Strike, and they needed to learn more about it. The best place to research threat actors, the software they use, and the techniques they leverage is the MITRE ATT&CK website (attack.mitre.org).
By accessing this page, you can simply click the Search button (located in the upper-right corner) and type in the keywords, in this case, cobalt strike, and the result appears as shown below:
Figure 2.8: Searching on the MITRE ATT&CK website
Once you open the Cobalt Strike page, you can read more about what Cobalt Strike is, the platforms that it targets, the techniques that it uses, and the threat actor groups that are associated with this software. By simply searching PowerShell on this page, you will see the following statement:
Figure 2.9: A technique used by Cobalt Strike
Notice that this usage of PowerShell maps to technique T1059 (https://attack.mitre.org/techniques/T1059). If you open this page, you will learn more about how this technique is used and the intent behind it.
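This kind of lookup can also be automated against the machine-readable ATT&CK dataset, which MITRE publishes as a STIX bundle in the mitre/cti GitHub repository. The sketch below shows the general shape of such a query; the two sample objects are a trimmed, hypothetical illustration of the bundle's structure:

```python
# Sketch: locate a technique such as T1059 in MITRE ATT&CK STIX data.
# The full bundle lives at github.com/mitre/cti
# (enterprise-attack/enterprise-attack.json).
def find_technique(stix_objects, external_id):
    """Return the first attack-pattern whose ATT&CK ID matches external_id."""
    for obj in stix_objects:
        if obj.get("type") != "attack-pattern":
            continue
        for ref in obj.get("external_references", []):
            if ref.get("external_id") == external_id:
                return obj
    return None

# Trimmed sample objects mimicking the bundle's shape.
sample = [
    {"type": "intrusion-set", "name": "Example Group"},
    {
        "type": "attack-pattern",
        "name": "Command and Scripting Interpreter",
        "external_references": [
            {"source_name": "mitre-attack", "external_id": "T1059"},
        ],
    },
]

print(find_technique(sample, "T1059")["name"])
# Command and Scripting Interpreter
```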
OK, now things are clearer, and you know that you are dealing with Cobalt Strike. While this is a good start, it is imperative to understand how the system got compromised in the first place; PowerShell was not making a call to that IP address out of nowhere, so something triggered that action.
This is the type of case where you will have to trace it back to understand how everything started. The good news is that you have plenty of information on the MITRE ATT&CK website that explains how Cobalt Strike works.
The IR team started looking at different data sources to better understand the entire scenario, and they found that the employee who initially opened the case with support, complaining about the computer’s performance, had opened a suspicious document (RTF) that same week. The file was deemed suspicious because of its name and hash:
- File name: once.rtf
- MD5: 2e0cc6890fbf7a469d6c0ae70b5859e7
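Before searching VirusTotal, an analyst would typically compute the file's hash locally. A minimal sketch using Python's hashlib (the chunked read keeps memory use flat for large files):

```python
# Compute a file's MD5 and SHA-256 for reputation lookups.
import hashlib

def file_hashes(path: str, chunk_size: int = 65536) -> dict:
    """Hash a file in chunks and return hex digests."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}
```

MD5 works as a lookup key here because VirusTotal indexes it, but SHA-256 is the preferred identifier, which is why both are computed.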
If you copy and paste this hash into the VirusTotal search, you will find that a large number of engines flag the file as malicious, as shown below:
Figure 2.10: Searching for a file hash
This raises many flags, but to better correlate this with the PowerShell activity, we need more evidence. If you click on the BEHAVIOR tab, you will have that evidence, as shown below:
Figure 2.11: More evidence of malicious use of PowerShell
With this evidence, it is possible to conclude that the initial access was via email (see https://attack.mitre.org/techniques/T1566), and from there the attached file abused CVE-2017-11882 (a Microsoft Office Equation Editor vulnerability) to execute PowerShell.
Lessons learned from scenario 2
This scenario shows that all it takes is a simple click to get compromised, and social engineering is still one of the predominant factors, as it exploits the human element to entice a user into doing something. From here, the recommendations were:
- Improve security awareness training for all users to cover this type of scenario
- Reduce the level of privileges for the user on their own workstations
- Implement AppLocker to block unwanted applications
- Implement EDR on all endpoints to ensure that this type of attack can be caught in the initial phase
- Implement a host-based firewall to block access to suspicious external addresses
There is a lot to learn from a case like this, mainly from the security hygiene perspective and how things can get better. Never lose the opportunity to learn and improve your incident response plan.