Now that we have looked at the main attack vectors and some of the ransomware variants and their attack patterns, I want to examine the most common attack vectors in more depth, starting with identity-based attacks.
Identity-based attacks are becoming increasingly common with the move to public cloud services such as Microsoft 365. SaaS services share a common property: they are reachable from the internet, which means that anyone can attempt to access them.
As mentioned earlier, one of the common attack vectors is credential stuffing, where an attacker tries to log in with lists of usernames and/or email addresses (and often matching passwords) taken from earlier breaches.
The following screenshot shows login attempts for one of our tenants, where we typically see numerous login attempts each day from multiple locations. The screenshot is a snippet from our Azure AD sign-in logs, parsed using Log Analytics (which I will cover in Chapter 7, Protecting Information Using Azure Information Protection and Data Protection):
Figure 1.7 – Overview of blocked authentication attempts to Office 365
Now, since this is Azure AD, Microsoft has built in various IP filters to stop login attempts coming from known malicious IP addresses, which means they are blocked before they can even try to authenticate. Still, this shows how much authentication traffic arrives in a short period.
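If you want to query these sign-in logs yourself outside the portal, one option is the Microsoft Graph PowerShell SDK. The following is a minimal sketch, assuming the Microsoft.Graph module is installed and your account can consent to the AuditLog.Read.All permission:

```powershell
# A minimal sketch of pulling failed sign-in attempts from Azure AD via the
# Microsoft Graph PowerShell SDK (assumes the Microsoft.Graph module is installed).
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Error code 50126 means "invalid username or password" - a common
# fingerprint of credential stuffing and password spray attempts
Get-MgAuditLogSignIn -Filter "status/errorCode eq 50126" -Top 25 |
    Select-Object UserPrincipalName, IPAddress,
        @{ Name = 'Country'; Expression = { $_.Location.CountryOrRegion } },
        CreatedDateTime
```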
So, where are they coming from? How did the attackers find the user account that they are trying to log in to?
In many cases, attackers have different scrapers and scripts that crawl through websites to collect all the email addresses they can find. This can also include email addresses that were collected from an existing data breach.
A good way to see where credentials have been stolen from is by checking the affected email address at https://haveibeenpwned.com. The following screenshot shows the result where the email address was not breached:
Figure 1.8 – Information showing that no user information was found
However, if the information is found in one of the data breaches that the service has access to, the following result will appear:
Figure 1.9 – Information showing that user information was found in a breach
In some cases, the service will show that it only has email information collected, not passwords. This is likely because the breached data source is not available to haveibeenpwned.com, or because the address was harvested by bots that scrape or crawl websites for information such as email addresses.
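You can also automate this kind of lookup. The haveibeenpwned.com service exposes a REST API (version 3) for breach searches, which requires an API key and a user agent header. Here is a minimal sketch; the API key is a placeholder:

```powershell
# A minimal sketch of checking an email address against the Have I Been Pwned
# v3 API. The API key is a placeholder - you need to request your own.
$apiKey  = "<your-hibp-api-key>"
$address = "user@example.com"

$headers = @{
    "hibp-api-key" = $apiKey
    "user-agent"   = "breach-lookup-script"   # the API rejects requests without a user agent
}

try {
    $breaches = Invoke-RestMethod -Uri "https://haveibeenpwned.com/api/v3/breachedaccount/$address" -Headers $headers
    Write-Host "$address was found in the following breach(es):" ($breaches.Name -join ", ")
} catch {
    # The API returns HTTP 404 when the address is not found in any breach
    Write-Host "$address was not found in any known breach."
}
```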
There are even free online services that can extract emails from a URL, such as Specrom Analytics (https://www.specrom.com/extract-emails-from-url-free-tool/), or you can use a simple script that does the same, as sketched below. We can then check whether the user accounts that are receiving multiple authentication attempts are easily discoverable on our public website.
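The following is a minimal sketch of such a script (the URL is a placeholder); the same technique also lets you audit your own website for exposed addresses:

```powershell
# A minimal sketch of extracting email addresses from a public web page -
# the same technique attackers use to build target lists. Placeholder URL.
$url  = "https://www.example.com/contact"
$page = Invoke-WebRequest -Uri $url -UseBasicParsing

# A simple regex that matches most common email address formats
$pattern = '[\w\.\-+]+@[\w\-]+\.[\w\.\-]+'
$emails  = [regex]::Matches($page.Content, $pattern) |
    ForEach-Object { $_.Value } |
    Sort-Object -Unique

Write-Host "Found $($emails.Count) unique address(es):"
$emails
```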
One way to reduce the volume of spam and brute-force attacks against users’ identities is to limit the amount of public information that is available.
For instance, if your corporate website is published behind a Web Application Firewall (WAF), you can block traffic based on user agents.
A user agent is a form of identity with which the software (the browser) identifies itself to the web server. Most common browsers today send a user agent string such as Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36 Edg/97.0.1072.76.
Important note
You can use the following website to determine what kind of known user agents are used to crawl websites and what is legitimate end user traffic: https://user-agents.net/my-user-agent.
User agents are easily forged, as the sketch below illustrates, and can even be changed using built-in mechanisms such as Google Chrome’s developer tools. However, most automated crawling scripts tend not to bother changing the user agent.
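To show how little effort forging takes, here is a minimal sketch that sends a request impersonating a regular Chrome browser (the URL is a placeholder):

```powershell
# A minimal sketch of forging a user agent. By default, Invoke-WebRequest
# identifies itself as PowerShell; a single parameter is enough to
# impersonate a regular Chrome browser instead. Placeholder URL.
$chromeAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"
Invoke-WebRequest -Uri "https://www.example.com" -UserAgent $chromeAgent -UseBasicParsing
```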
So, in a 4-hour window, I have a lot of traffic coming to my public website from something that identifies itself as python-requests/2.26.0, which is most likely an automated script crawling my website:
Figure 1.10 – Web scraping attempts against my website in 4 hours using data collected in Azure Log Analytics
Having firewall rules in place to block a specific user agent would reduce the amount of crawling and would also reduce spam/phishing targeting our organization. However, if the attackers make the extra effort to alter the user agent, then blocking only certain user agents will have little effect.
Here is a great write-up on how to block or at least make it more difficult for crawlers to scrape your website: https://github.com/JonasCz/How-To-Prevent-Scraping.
Sometimes, your end users’ email addresses might be available in other sources that you do not control. However, a quick Google search can reveal where an email address might have been sourced from.
Another way that access brokers or affiliates collect information is through phishing attacks. There are many examples of this. In one campaign we saw earlier this year, users were sent an email containing embedded links that took the victim to a phishing URL imitating the Office 365 login page, with the victim’s username prefilled for increased credibility.
When the user enters their username and password on the fake login page, scripts on the server collect the credentials and upload them to a central storage repository or store them on the server itself.
How are vulnerabilities utilized for attacks?
So, now that we have taken a closer look at some of the ways attackers collect information about our end users, whether through scraping, phishing, or credential stuffing, we are going to examine some of the vulnerabilities that different ransomware operators have been known to use in their attacks. Later in this section, I will go through how you can monitor vulnerabilities against your services.
Many of the vulnerabilities that we will go through are utilized for initial compromise, for gaining elevated access to a compromised machine, or for lateral movement. The goal is to give you some understanding of how easy it can be to compromise a machine or a service, and how short the window is between a high-severity vulnerability becoming known and ransomware operators starting to leverage it.
So, we are going to focus on the following vulnerabilities:
- PrintNightmare: CVE-2021-34527
- Zerologon: CVE-2020-1472
- ProxyShell: It consists of three different vulnerabilities that are used as part of a single attack chain: CVE-2021-34473, CVE-2021-34523, and CVE-2021-31207
- Citrix NetScaler ADC: CVE-2019-19781
PrintNightmare
Let’s start with PrintNightmare, a vulnerability that was published in July 2021. Using this vulnerability, an attacker could run arbitrary code with SYSTEM privileges on both remote and local systems, so long as the Print Spooler service was enabled. So, in theory, you could utilize this vulnerability to make domain controllers run arbitrary code if the Print Spooler service was running on them. This is because of functionality within a feature called Point and Print, which allows a client to automatically download printer drivers and configuration information directly from the print server.
All Microsoft Common Vulnerabilities and Exposures (CVEs) get published on MSRC with dedicated support articles, highlighting which systems are affected and recommendations in terms of workaround and other countermeasures, as seen here for PrintNightmare: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-34527.
In regard to PrintNightmare, the InfoSec community produced multiple scripts that could easily be used. As an example, here is a simple PowerShell payload that exploited the vulnerability, did not require administrator access rights, and came with a predefined DLL file that creates a local admin account on the machine: https://github.com/calebstewart/CVE-2021-1675.
Benjamin Delpy, the creator of the popular tool Mimikatz, also created a proof of concept by setting up a public print server that you could connect to from an endpoint, which would then automatically spawn a CMD window running in the local SYSTEM context.
It took Microsoft several weeks to provide complete patches and recommendations on how to fix this. By the middle of August, only a month later, there were already news articles about ransomware operators exploiting the PrintNightmare vulnerability to compromise organizations.
When the vulnerability became known, Microsoft’s recommendation was to disable the Print Spooler service until a security fix was available. It also made many administrators realize that the Print Spooler service does not need to run on servers that are not end user facing, such as domain controllers, in contrast to end-user-facing servers such as Citrix/RDS hosts.
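If you want to apply that workaround yourself, a minimal sketch in PowerShell (run with administrative rights) looks like the following; the Point and Print registry check at the end follows Microsoft’s mitigation guidance:

```powershell
# A minimal sketch of the recommended workaround: stop and disable the
# Print Spooler service on servers that do not need it (run as administrator).
Stop-Service -Name Spooler -Force
Set-Service  -Name Spooler -StartupType Disabled

# Verify the result
Get-Service -Name Spooler | Select-Object Name, Status, StartType

# Per Microsoft's guidance, NoWarningNoElevationOnInstall = 1 under this
# Point and Print policy key weakens protection and should not be set
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Printers\PointAndPrint' -ErrorAction SilentlyContinue
```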
Important note
A general best practice is to ensure that only required services are running on a server – for example, the Print Spooler service should not be running on a domain controller. This guidance document from Microsoft provides a list of the different services and recommendations for each of them: https://docs.microsoft.com/en-us/windows-server/security/windows-services/security-guidelines-for-disabling-system-services-in-windows-server.
Zerologon
Next, we have Zerologon, another high-severity vulnerability, this time in the Netlogon process in Active Directory, which allows an attacker to impersonate any computer, including a domain controller.
To leverage this vulnerability, the attacker needed to be able to communicate with the domain controllers, for example, from a Windows client joined to the Active Directory domain.
The attackers would then spoof another domain controller in the infrastructure and use the MS-NRPC protocol to change the password for the machine account in Active Directory, which is as simple as sending a single TCP frame with the new password:
Figure 1.11 – Zerologon attack process
Once the new password had been accepted, the attackers could use that machine account to start new processes in a domain controller context, which was then used to compromise the remaining infrastructure. Zerologon has been used in many ransomware attacks to compromise Active Directory through lateral movement and gain access to the domain controllers.
This vulnerability was fixed in a patch from Microsoft in August 2020. In September 2020, the security researchers from Secura who discovered the vulnerability published their research, and within a week, several proofs of concept showing how to leverage the exploit had already been published. You can find the initial whitepaper on the vulnerability here: https://www.secura.com/uploads/whitepapers/Zerologon.pdf.
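Once the August 2020 patch is installed, domain controllers start logging attempts to establish vulnerable Netlogon secure channel connections, which gives you a way to hunt for exploitation attempts. Here is a minimal sketch, run on a domain controller:

```powershell
# A minimal sketch for hunting Zerologon-style activity on a patched domain
# controller. Per KB4557222, event ID 5829 in the System log records a
# vulnerable Netlogon secure channel connection that was allowed.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 5829 } -MaxEvents 50 -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Message

# This registry value (1 = enforcement mode) enforces secure RPC for all
# machine accounts, closing the gap the exploit relies on
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' -Name FullSecureChannelProtection -ErrorAction SilentlyContinue
```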
In the months after, many organizations were hit by Ruyk, where they used the Zerologon vulnerability. On average, most security researchers state that it takes between 60 and 150 days (about 5 months) for an average organization to install a patch once it has been released by the vendor.
ProxyShell
Then, we have ProxyShell, a vulnerability chain consisting of three different CVEs that affected Microsoft Exchange 2013/2016/2019 and allowed attackers to perform pre-authenticated remote code execution (RCE).
The main vulnerabilities lie in the Client Access Service (CAS) server component in Exchange, which is exposed to the internet by default to allow end users to access email services externally.
In short, the ProxyShell exploit does the following:
- Sends an Autodiscover request to leak the user's LegacyDN information with a known email address.
- Sends a MAPI request to the CAS servers to leak the user’s SID using the LegacyDN.
- Constructs a valid authentication token from the CAS service using the SID and email address.
- Authenticates to the PowerShell endpoint and executes code using the authentication token.
Horizon3.ai released a Python script to showcase how easy it is to exploit this vulnerability (https://github.com/horizon3ai/proxyshell), where you just need to run the script and point it to an Exchange CAS server.
All these vulnerabilities were patched in April 2021, but the information was published publicly in June 2021.
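If you run Exchange on-premises, a quick way to verify where you stand is to list the build versions of your servers and compare them against Microsoft’s published list of patched builds. A minimal sketch, assuming you run it from the Exchange Management Shell:

```powershell
# A minimal sketch for listing Exchange server build versions (run from the
# Exchange Management Shell); compare the output against Microsoft's list
# of patched builds to confirm the ProxyShell fixes are installed.
Get-ExchangeServer | Select-Object Name, AdminDisplayVersion
```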
In February 2022, it was discovered that a significant number of organizations had failed to update their Exchange services, even though it was urgently required. More precisely, 4.3% of all publicly accessible Microsoft Exchange services were still unpatched for the ProxyShell vulnerability. Of those that did apply the ProxyShell patch, 16% of organizations did not install the subsequent patches released from July 2021 onward, which left them open to attacks. As a result, many organizations had still not fully eliminated the vulnerability, even after six months had passed. As seen in the following Shodan screenshot from February 2022, there was still quite a high number of public-facing Exchange servers with the vulnerability present:
Figure 1.12 – Shodan search for vulnerable ProxyShell Exchange servers
Using a free account on Shodan.io, you can search for different applications and/or services and get an overview map of vulnerabilities. In this case, I used the http.component:"outlook web app" search tag.
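The same search can also be scripted against Shodan’s REST API, which is useful if you want to track your exposure over time. A minimal sketch follows; the API key is a placeholder, and search queries may require a paid plan:

```powershell
# A minimal sketch of running the same Shodan search through its REST API.
# The API key is a placeholder; search queries may require a paid plan.
$apiKey = "<your-shodan-api-key>"
$query  = [uri]::EscapeDataString('http.component:"outlook web app"')

$result = Invoke-RestMethod -Uri "https://api.shodan.io/shodan/host/search?key=$apiKey&query=$query"
Write-Host "Total matching hosts:" $result.total
```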
Citrix ADC (CVE-2019-19781)
Lastly, we have the vulnerability in the Citrix ADC (CVE-2019-19781), another high-severity vulnerability, which allowed unauthenticated attackers to write a file to a location on disk. It turned out that by using this vulnerability, you could achieve RCE on the ADC appliance.
This had multiple implications since an ADC is often a core component in the network to provide load balancing and reverse proxy services for different services. Therefore, it most likely had many network interfaces with access to different zones, and in many cases, had access to usernames/passwords and SSL certificates.
The vulnerability itself was a directory traversal bug that allowed calling a Perl script used to write XML files to the appliance. These files were then processed by the underlying operating system, which in turn allowed for RCE.
This caused a lot of turmoil, with close to 60,000 vulnerable Citrix ADC servers affected, because the vulnerability was public and Citrix did not have a patch ready. The vulnerability became public at the end of 2019, while Citrix expected patches to be available at the end of January 2020. The vulnerability also affected four major versions of the ADC platform, which meant that the patch needed to be backported to earlier versions, affecting the timeline of when it could be ready.
While Citrix provided a workaround to mitigate the vulnerability, it did not work for all software editions because of licensing issues, as some of the required features were not available in every edition.
Eventually, the patch was released and the vulnerability was closed, but many ADC instances were compromised. Many got infected with simple bitcoin mining scripts and others were used to deploy web shells.
One group, later referred to as Iran Network Team, created a web shell on each of the ADC appliances that they compromised. The group was quite successful in deploying this backdoor to a significant number of ADC appliances. Many of these appliances were later patched but remained vulnerable due to the password-less backdoor the attackers had left open on the devices. This web shell could easily be accessed using a simple HTTP POST request.
In addition, another threat actor created a new backdoor named NOTROBIN. Instead of deploying a web shell or a bitcoin miner, they would add their own shell with a predefined infection key. They would also attempt to identify and remove any existing backdoors, as well as block further exploitation of the affected appliances, by deleting new XML files or scripts that did not contain a per-infection secret key. This meant that a compromised ADC appliance was only accessible through the backdoor with the infection key.
Looking back at these vulnerabilities that I’ve covered, many of them were used as part of a ransomware attack. It is important to note the following:
- The time between when a vulnerability is discovered and an attacker starts exploiting it is becoming shorter and shorter.
- You should always apply security patches as soon as possible because in many cases, you might not realize the impact of a vulnerability until it is too late.
- After a vulnerability is known, if it takes too much time to install the patch to remediate it, chances are that someone might have already exploited the vulnerability.
- Also, in many cases, an attacker might have already been able to exploit the vulnerability to create a backdoor that might still be utilized even after the patch is installed.
- Many vulnerabilities evolve after the initial publication. This means that after a vulnerability becomes known, many security researchers or attackers can find new ways to use the vulnerability or find vulnerabilities within the same product/feature/service, as was the case with PrintNightmare.
- The number of CVEs is increasing year by year: https://www.cvedetails.com/browse-by-date.php.
- High-severity vulnerabilities are not limited to Windows. This also affects other core components, including firewalls and virtualization platforms such as VMware.
- Vulnerabilities from a ransomware perspective can be used for both initial access and lateral movement, depending on what kinds of services are affected by the vulnerability.
Now that we have taken a closer look at some of the different attack vectors, such as identity-based attacks, and also looked at some of the vulnerabilities that have been utilized for ransomware attacks, such as PrintNightmare and Zerologon, let’s take a closer look at how to monitor for vulnerabilities.
Monitoring vulnerabilities
There will always be vulnerabilities and bugs, so it is important to pay attention to updates that might impact your environment.
An average business today might have somewhere between 20 and 100 different pieces of software installed within their environment, potentially from just as many vendors. Consider the following software stack that a small company running an on-premises environment might be using:
- VMware: Virtualization
- Fortinet: Firewall
- HP: Hardware and server infrastructure
- Citrix: Virtual apps and desktop
- Microsoft Exchange: Email
- Microsoft SQL: Database
- Windows: Clients and servers
- Chrome: Browser
- Microsoft Office: Productivity
- Cisco: Core networking, wireless
- Adobe: PDF viewer/creator
- Apache HTTP: Web server
In addition to this, end users have their own applications that they need and there may be other line-of-business applications that you might need as part of your organization. Here, we have already listed over 10 different vendors and many applications/products that need to be patched. How do we maintain control and monitor for vulnerabilities?
This falls into a category called vulnerability management, which is the practice of identifying and remediating software vulnerabilities, either through configuration changes or, in most cases, by applying software patches from the vendors. We will go into using tooling to patch infrastructure and services in Chapter 10, Best Practices for Protecting Windows from Ransomware Attacks, but one thing I want to cover here is how to monitor vulnerabilities.
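A practical first step is knowing exactly what is installed. The following is a minimal sketch that inventories installed software on a Windows machine from the registry uninstall keys, giving you a starting point for mapping the vendors and products you need to monitor:

```powershell
# A minimal sketch of inventorying installed software from the registry
# uninstall keys (covers both 64-bit and 32-bit installations).
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'

Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, Publisher |
    Sort-Object DisplayName
```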
While many commercial products can be used, I also tend to use the online data sources listed in the following sections, and many sources on social media have been extremely useful as well.
For example, you can use a centralized RSS reader to monitor security advisories from different vendors. This is the most common method I use to monitor vulnerabilities from vendors: most vendor websites have an RSS feed that I can collect into an RSS reader such as Feedly.
In addition to the different software vendors, I also follow the centralized RSS feed from NIST. However, this feed is not vendor-specific, so I often use it to correlate vendor-specific information with NIST’s data.
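If you prefer scripting over a reader, pulling and parsing a feed is straightforward. The following is a minimal sketch using the NVD’s RSS feed of recently published CVEs (the feed URL was valid at the time of writing; swap in the vendor feeds you follow):

```powershell
# A minimal sketch of pulling a security RSS feed and listing the latest
# entries. The NVD feed URL is an example; substitute your vendors' feeds.
$feedUrl = "https://nvd.nist.gov/feeds/xml/cve/misc/nvd-rss.xml"
[xml]$feed = (Invoke-WebRequest -Uri $feedUrl -UseBasicParsing).Content

# Select <item> elements regardless of the feed's XML namespace
$feed.SelectNodes("//*[local-name()='item']") |
    Select-Object -First 10 |
    ForEach-Object {
        [pscustomobject]@{
            Title = $_.title
            Link  = $_.link
        }
    }
```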
It should be noted that, depending on the vendors you use, monitoring all these RSS feeds can be a time-consuming and repetitive process. In many cases, you should keep the number of RSS feeds to a minimum. Some vendors also have good filtering capabilities, so you do not get information about vulnerabilities in products you do not have. Going through the information from these feeds should be turned into a routine, and in larger IT teams, the task should be rotated between multiple people – for instance, someone could be responsible for going through the feeds and presenting the relevant findings on Monday mornings.
While RSS feeds are one way to get this information, I also use some other online sources to monitor the current landscape:
- Vulmon: This provides an automated way to get alerts and notifications related to vulnerabilities and can be mapped to products. You can get a view of the latest vulnerabilities here: https://vulmon.com/searchpage?q=*&sortby=bydate. In addition, you can use Vulmon as a search engine to find related vulnerabilities and more information.
- Social media: Twitter can be an extremely useful service for monitoring current threats/vulnerabilities. As an active Twitter user myself, I have some main sources that I follow to stay up to date on current threats/vulnerabilities:
- vFeed Inc. Vulnerability Intelligence As A Service (@vFeed_IO)
- Threat Intel Center (@threatintelctr)
There are also third-party products that can automate this by scanning the environment for current vulnerabilities, such as services from Qualys and Rapid7, which can be good tools to have in your toolbox when you are mixing a lot of third-party services in a large environment. It should be noted that these products do not have 100% coverage of all products/vendors, so it is still important to maintain a mapping of your current application vendors and the services/applications they provide, and to ensure that you are monitoring the status of each application.