Before moving forward with the fun stuff, it is important to always remember the rules of engagement (ROE) when conducting an attack. The ROE are typically written out in the pre-engagement statement of work (SoW), and all testers must adhere to them. They outline the expectations of the tester and set limits on what can be done during the engagement.
While the goal of a typical penetration test is to simulate an actual attack and find weaknesses in the infrastructure or application, there are many limitations, and with good reason. We cannot go in guns blazing and cause more damage than an actual adversary. The target (client), be they a third party or an internal group, should feel comfortable letting professional hackers hammer away at their applications.
Good communication is key to a successful engagement. Kickoff and close-out meetings are extremely valuable to both parties involved. The client should be well aware of who is performing the exercise, and how they can reach them, or a backup, in case of an emergency.
The kickoff meeting is a chance to go over all aspects of the test, including reviewing the project scope, the criticality of the systems, any credentials that were provided, and contact information. With any luck, all of this information was included in the scoping document. This document's purpose is to clearly outline what parts of the infrastructure or applications are to be tested during the engagement. The scope can be a combination of IP ranges, applications, specific domains, or URLs. This document is usually written with the input of the client, well in advance of the test start date. Things can change, however, and the kickoff is a good time to go over everything one last time.
Useful questions to clarify during the kickoff meeting are as follows:
- Has the scope changed since the document's last revision? Has the target list changed? Should certain parts of the application or network be avoided?
- Is there a testing window to which you must adhere?
- Are the target applications in production or in a development environment? Are they customer-facing or internal only?
- Are the emergency contacts still valid?
- If credentials were provided, are they still valid? Now is the time to check these again.
- Is there an application firewall that may hinder testing?
The goal is generally to test the application itself and not third-party defenses: penetration testers have deadlines, while malicious actors do not.
Tip
When testing an application for vulnerabilities, it is a good idea to ask the client to whitelist the testers' IP addresses in any third-party web application firewalls (WAFs). WAFs inspect traffic reaching the protected application and will drop requests that match known attack signatures or patterns. Some clients will choose to keep the WAF in an enforcing mode, as their goal may be to simulate a real-world attack. This is when you should remind the client that such firewalls can introduce delays in assessing the actual application, as the tester may have to spend extra time attempting to evade defenses. Further, since there is a time limit to most engagements, the final report may not accurately reflect the security posture of the application.
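As a rough illustration of why this matters, the following Python sketch compares the response to a benign request against the response to an obviously suspicious one; the URL, parameter name, and payload are placeholders, not taken from any real engagement. A block page or 403 on only the second request suggests a WAF is inspecting traffic and that whitelisting is worth requesting.

```python
import requests

# Hypothetical target URL and parameter; replace with in-scope values.
TARGET = "https://target.example.com/search"

# A harmless request followed by one that matches common attack signatures.
benign = requests.get(TARGET, params={"q": "hello"}, timeout=10)
suspicious = requests.get(TARGET, params={"q": "' OR 1=1--"}, timeout=10)

print("benign request:    ", benign.status_code)
print("suspicious request:", suspicious.status_code)
# A 403/406 or a block page on only the second request suggests a WAF (or similar
# filter) sits in front of the application and may slow down the assessment.
```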
Tip
No manager wants to hear that their critical application may go offline during a test, but this does occasionally happen. Some applications cannot handle the increased workload of a simple scan and will fail over. Certain payloads can also break poorly designed applications or infrastructure, and may bring productivity to a grinding halt.
Tip
If, during a test, an application becomes unresponsive, it's a good idea to call the primary contact and inform them as soon as possible, especially if the application is a critical production system. If the client cannot be reached by phone, be sure to send an email alert at minimum.
Close-out meetings or post-mortems are also very important. A particularly successful engagement with lots of critical findings may leave the tester feeling great, but the client could be mortified, as they must explain the results to their superiors. This is the time to meet with the client and go over every finding, and explain clearly how the security breach occurred and what could be done to fix it. Keep the audience in mind and convey the concerns in a common language, without assigning blame or ridiculing any parties involved.
Engagements that involve any kind of social engineering or human interaction, such as phishing exercises, should be carefully handled. A phishing attack attempts to trick a user into following an email link to a credential stealer, or opening a malicious attachment, and some employees may be uncomfortable being used in this manner.
Before sending phishing emails, for example, testers should confirm that the client is comfortable with their employees unknowingly participating in the engagement. This should be recorded in writing, usually in the SoW. The kickoff meeting is a good place to synchronize with the client and their expectations.
Unless there is explicit written permission from the client, avoid the following:
- Do not perform social engineering attacks that may be considered immoral, for example, using intelligence gathered about a target's family to entice them to click on a link
- Do not exfiltrate medical records or sensitive user data
- Do not capture screenshots of a user's machines
- Do not replay credentials to a user's personal emails, social media, or other accounts
Note
Some web attacks, such as SQLi or XML External Entity (XXE), may lead to data leaks, in which case you should inform the client of the vulnerability as soon as possible and securely destroy anything already downloaded.
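As a hedged illustration of the kind of leak this note refers to, the sketch below parses a classic XXE payload with lxml deliberately configured to resolve external entities (an assumption for demonstration; real targets and parsers vary). The entity pulls a local file into the document, which is exactly the sort of data that should be reported and then securely destroyed.

```python
from lxml import etree

# Illustration only: a crafted DOCTYPE defines an external entity that points at a
# local file, and a parser that resolves entities will inline its contents.
xxe_payload = b"""<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY leak SYSTEM "file:///etc/hostname">
]>
<data>&leak;</data>"""

# Entity resolution is enabled explicitly here; a hardened parser would refuse this.
parser = etree.XMLParser(resolve_entities=True)
root = etree.fromstring(xxe_payload, parser=parser)

print(root.text)  # the file contents, if the parser expanded the entity
```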
While most tests are done under non-disclosure agreements (NDAs), handling sensitive data should be avoided where possible. There is little reason to hold onto medical records or credit card information after an engagement. In fact, hoarding this data could put the client in breach of regulatory compliance and could also be illegal. This type of data does not usually provide any kind of leverage when attempting to exploit additional applications. When including proof in the final report, take extra care to ensure that the evidence is sanitized and contains only enough context to prove the finding.
"Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy."
- Bruce Schneier
The preceding quote is generally aimed at companies with questionable practices when it comes to private user data, but it applies to testers as well. We often come across sensitive data in our adventures.
A successful penetration test or application assessment will undoubtedly leave many traces of the activity behind. Log entries could show how the intrusion was possible and a shell history file can provide clues as to how the attacker moved laterally. There is a benefit in leaving breadcrumbs behind, however. The defenders, also referred to as the blue team, can analyze the activity during or post-engagement and evaluate the efficacy of their defenses. Log entries provide valuable information on how the attacker was able to bypass the system defenses and execute code, exfiltrate data, or otherwise breach the network.
There are many tools to wipe logs post-exploitation, but unless the client has explicitly permitted these actions, this practice should be avoided. There are instances where the blue team may want to test the resilience of their security information and event management (SIEM) infrastructure (a centralized log collection and analysis system), so wiping logs may be in scope, but this should be explicitly allowed in the engagement documents.
That being said, there are certain artifacts that should almost always be completely removed from systems or application databases when the engagement has completed. The following artifacts can expose the client to unnecessary risk, even after they've patched the vulnerabilities:
- Web shells providing access to the operating system (OS)
- Malware droppers, reverse shells, and privilege escalation exploit payloads
- Malware in the form of Java applets deployed via Tomcat
- Modified or backdoored application or system components, for example:
  - Overwriting the passwd binary with a race condition root exploit and not restoring the backup before leaving the system
- Stored XSS payloads: these can be more of a nuisance to users on production systems
Not all malware introduced during the test can be removed by the tester; in those cases, cleanup requires reaching out to the client.
Tip
Make a note of all malicious files, paths, and payloads used in the assessment. At the end of the engagement, attempt to remove as much as possible. If anything is left behind, inform the primary contact, providing details and stressing the importance of removing the artifacts.
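One lightweight way to keep such a note is a running manifest that gets a new entry every time a file or payload lands on a target. The following is a minimal sketch; the file name, host, and path shown are hypothetical:

```python
import csv
import datetime
from pathlib import Path

# Hypothetical manifest file collecting every artifact dropped during the engagement.
MANIFEST = Path("engagement_artifacts.csv")

def record_artifact(host: str, remote_path: str, description: str) -> None:
    """Append one artifact entry (timestamp, host, path, description) to the manifest."""
    write_header = not MANIFEST.exists()
    with MANIFEST.open("a", newline="") as fh:
        writer = csv.writer(fh)
        if write_header:
            writer.writerow(["timestamp", "host", "path", "description"])
        writer.writerow([datetime.datetime.now().isoformat(), host, remote_path, description])

if __name__ == "__main__":
    # Example entry for an uploaded web shell that must be deleted during cleanup.
    record_artifact("10.0.0.5", "/var/www/html/uploads/shell.php", "web shell uploaded via file upload flaw")
```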
Tip
Tagging payloads with a unique keyword can help to identify bogus data during the cleanup effort, for example: "Please remove any database records that contain the keyword: 2017Q3TestXyZ123."
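A minimal sketch of this idea follows; the tag format and helper function are made up for illustration, and the tag would normally be shared with the client at the start of the engagement:

```python
import uuid

# Hypothetical per-engagement tag embedded in every test record so bogus data is
# easy to find (and for the client to delete) during cleanup.
ENGAGEMENT_TAG = f"PT-{uuid.uuid4().hex[:8]}"

def tag_payload(payload: str) -> str:
    """Append the engagement tag to a test payload or form submission."""
    return f"{payload} [{ENGAGEMENT_TAG}]"

if __name__ == "__main__":
    print(tag_payload("Test comment submitted during assessment"))
    print(f"Cleanup request: please remove any database records containing '{ENGAGEMENT_TAG}'")
```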
A follow-up email confirming that the client has removed any lingering malware or artifacts serves as a reminder and is always appreciated.