Threat intelligence sharing
Security teams can find themselves in situations where they want to share CTI they possess with security teams at other organizations, or vice versa. This happens in many different scenarios. For example, a parent company may want the security teams at all its subsidiaries to share CTI with each other. Another example is an industry-specific Information Sharing and Analysis Center (ISAC) that facilitates CTI sharing among its member organizations. Sharing CTI across organizations in the same industry can make it more challenging for attackers to victimize individual members, because every member gains visibility into the TTPs that threat actors use when targeting that industry. For this reason, the financial services and healthcare industries, among others, both have ISACs.
NIST published Special Publication 800-150, Guide to Cyber Threat Information Sharing, which provides some guidelines for sharing CTI, as well as a good list of scenarios where sharing CTI can be helpful.
The benefits of sharing CTI that the authors cite are numerous, including shared situational awareness, improved security posture, knowledge maturation, and greater defensive agility (Badger et al., NIST Special Publication 800-150, October 2016).
However, sharing CTI can be more complicated than it sounds, and it is not without risk. Sensitive information, like Personally Identifiable Information (PII), can be swept up as part of an investigation into an intrusion. If its context and sensitivity are lost and the CTI is shared without the proper safeguards, it could be used as evidence that the organization failed to meet its regulatory compliance obligations under standards like PCI DSS, SOX, GDPR, and a host of others. For public sector organizations that possess classified information requiring security clearances to access, information sharing programs can be fraught with challenges that make sharing information hard or impossible. Because of all these sensitivities and potential land mines, many organizations that decide to share CTI do so anonymously. However, CTI that isn’t attributed to a credible source might not inspire the requisite confidence in its quality among the security teams that receive it, and may go unused as a result.
If your security team is considering sharing CTI with other organizations, I suggest they leverage NIST Special Publication 800-150 to inform their deliberations.
CTI sharing protocols
I can’t discuss sharing CTI without at least mentioning some of the protocols for doing so. Recall that protocols are used to set rules for effective communication. Some protocols are optimized for human-to-human communication, while others are optimized for machine-to-machine (automated) communication, machine-to-human communication, and so on. The three protocols I’ll discuss in this section include Traffic Light Protocol (TLP), Structured Threat Information eXpression (STIX), and Trusted Automated eXchange of Indicator Information (TAXII).
Traffic Light Protocol
The Traffic Light Protocol (TLP) has become a popular protocol for sharing CTI and other types of information. TLP can help communicate the expected treatment of CTI shared between people. I don’t think it is especially optimized for automated CTI sharing between systems – it’s really a protocol for humans to use when sharing potentially sensitive information with each other. For example, if a CTI team decides to share some CTI with another CTI team or a CIRT via email or in a document, they could use TLP.
TLP helps set expectations between the sender of the information and the receiver of the information on how the information should be handled. The sender is responsible for communicating these expectations to the receiver. The receiver could choose to ignore the sender’s instructions. Therefore, trust between sharing parties is very important. The receiver is trusted by the sender to honor the sender’s specified information sharing boundaries. If the sender doesn’t trust the receiver to honor their expectations, they shouldn’t share the CTI with the receiver.
As its name suggests, TLP uses a traffic light analogy to make it easy for people to understand information senders’ expectations and their intended information sharing boundaries. The “traffic light” analogy in this case has four colors: red, amber, green, and clear (FIRST, n.d.). The colors are used to communicate different information sharing boundaries, as specified by the sender. The rule the protocol sets is that the color be specified as follows, when the CTI is being communicated in writing (in an email or document): TLP:COLOR. “TLP:” is followed by one of the color names in caps – for example, TLP:AMBER.
TLP:RED specifies that the shared information is “not for disclosure, restricted to participants only” (FIRST, n.d.). Red tells the receiver that the sender’s expectation is that the information shared is not to be shared with other people. The information is limited to only the people the sender shared it with directly and is typically communicated verbally as a further step to limit how the information can be shared, and to make it harder to attribute the information to a particular sender, thus protecting their privacy. Senders use this color when they want to limit the potential impact on their reputation or privacy and when other parties cannot effectively act on the information shared.
TLP:AMBER specifies “limited disclosure, restricted to participants’ organizations” (FIRST, n.d.). Receivers are only permitted to share TLP:AMBER information within their own organization and with customers with a need to know. The sender can also specify more restrictions and limitations that it expects the receivers to honor.
TLP:GREEN permits “limited disclosure, restricted to the community” (FIRST, n.d.). Senders that specify TLP:GREEN are allowing receivers to share the information with organizations within their community or industry, but not by using channels that are open to the general public. Senders do not want the information shared outside of the receiver’s industry or community. This is used when information can be used to protect the broader community or industry.
Lastly, using TLP:CLEAR means the “disclosure is not limited” (FIRST, n.d.). In other words, there are no sharing restrictions on information that is disclosed using TLP:CLEAR. Receivers are free to share this information as broadly as they like. This designation is meant to be used when sharing the information carries minimal risk.
The TLP designation should be used when sharing CTI via email or documents. Convention dictates that emails should have the TLP designation in the subject line and at the top of the email, while the designation should appear in the page headers and footers in documents (CISA, n.d.). This makes it clear to the receiver what the sender’s expectations are before they read the CTI. Again, the sender trusts the receiver to honor the TLP designation and any sharing boundaries they have specified.
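To make the marking convention concrete, here is a minimal Python sketch, not part of any official TLP tooling, that recognizes the four TLP 2.0 labels in an email subject line. The `extract_tlp` helper and the sample subject line are purely illustrative:

```python
import re

# The four TLP 2.0 labels and the sharing boundary each communicates (per FIRST).
TLP_LABELS = {
    "TLP:RED": "Not for disclosure; restricted to participants only.",
    "TLP:AMBER": "Limited disclosure; restricted to participants' organizations.",
    "TLP:GREEN": "Limited disclosure; restricted to the community.",
    "TLP:CLEAR": "Disclosure is not limited.",
}

def extract_tlp(subject):
    """Return the TLP label found in a subject line, or None if unmarked."""
    match = re.search(r"TLP:(RED|AMBER|GREEN|CLEAR)", subject)
    return match.group(0) if match else None

# Hypothetical email subject line, marked per the convention described above.
subject = "TLP:AMBER - Indicators for campaign targeting retail"
label = extract_tlp(subject)
```

A receiving team's mail tooling could use a check like this to flag unmarked CTI or to route TLP:RED material away from shared mailboxes, though honoring the label still depends on the people involved.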
If you are doing research on the internet on threats, you’ll likely come across documents marked with TLP:CLEAR. For example, both the Federal Bureau of Investigation (FBI) and Cybersecurity and Infrastructure Security Agency (CISA) publish threat reports for public consumption labeled TLP:CLEAR. If you weren’t aware of TLP before, these markings will make more sense to you now.
STIX and TAXII
Now that we’ve covered a protocol for use among humans, let’s look at two complementary protocols that enable automated CTI sharing, Structured Threat Information eXpression (STIX) and Trusted Automated eXchange of Indicator Information (TAXII). Employing protocols that are optimized to be processed by machines can help dramatically accelerate the dissemination of CTI to organizations that can benefit from it and operationalize it, as well as across different types of technologies that know how to consume it.
“OASIS,” “STIX,” “Structured Threat Information eXpression,” “TAXII,” and “Trusted Automated eXchange of Indicator Information” are trademarks of OASIS, the open standards consortium where the “STIX,” “Structured Threat Information eXpression,” “TAXII,” and “Trusted Automated eXchange of Indicator Information” specifications are owned and developed. “STIX,” “Structured Threat Information eXpression,” “TAXII,” and “Trusted Automated eXchange of Indicator Information” are copyrighted © works of OASIS Open. All rights reserved.
STIX is a structured language, or schema, that helps describe threats in a standard way. The schema defined by STIX includes core objects and meta-objects that are used to describe threats. The specification for STIX version 2.1 is 313 pages (STIX-v2.1). Needless to say, it’s very comprehensive and can be used to describe a broad range of threats. To give you an idea of what STIX looks like, below you’ll find an example of a campaign described using STIX.
All the data in this example is random and fictional – it’s provided so you can see an example of the format.
{
"type": "campaign",
"spec_version": "2.1",
"id": "campaign--3a3b4a4b-16a3-0fea-543e-10fa55c3cc2c",
"created_by_ref": "identity--e552e362-722c-33f1-bb4a-7c4455ace3ef",
"created": "2022-07-09T15:02:00.000Z",
"modified": "2022-07-09T15:02:00.000Z",
"name": "Attacker1 Attacks on Retail Industry",
"description": "Campaign by Attacker1 on the retail industry."
}
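To show how such an object might be assembled programmatically, here is a hedged Python sketch that builds a similar campaign object as a plain dictionary. The `make_campaign` helper is purely illustrative and is not from any STIX library; in practice you would more likely use a dedicated package such as the OASIS `stix2` Python library. Note that STIX identifiers take the form `<object-type>--<UUID>`, with a double hyphen separating the type from the UUID:

```python
import json
import uuid
from datetime import datetime, timezone

def make_campaign(name, description, created_by_ref):
    """Build a minimal STIX 2.1 campaign object as a plain dict (illustrative)."""
    # STIX 2.1 timestamps are UTC in RFC 3339 format with a trailing 'Z'.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "campaign",
        "spec_version": "2.1",
        "id": f"campaign--{uuid.uuid4()}",   # '<type>--<UUID>' identifier form
        "created_by_ref": created_by_ref,
        "created": now,
        "modified": now,
        "name": name,
        "description": description,
    }

# Fictional data, matching the example above in spirit.
campaign = make_campaign(
    "Attacker1 Attacks on Retail Industry",
    "Campaign by Attacker1 on the retail industry.",
    f"identity--{uuid.uuid4()}",  # fictional producer identity
)
print(json.dumps(campaign, indent=2))
```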
While STIX is used to describe threats in a standard way, TAXII is an application layer protocol used to communicate that information between systems that can consume it. TAXII standardizes how computers share CTI with each other. Stated another way, TAXII is a protocol designed to exchange CTI between the sender and receiver(s) and enables automated machine-to-machine sharing of CTI over HTTPS. TAXII supports various sharing models, including hub and spoke, source and subscriber, and peer-to-peer. To do this, TAXII specifies two mechanisms: collections and channels. These enable CTI producers to support both push and pull communications models. Collections are sets of CTI data that CTI producers can provide to their customers when requested to do so. Channels enable CTI producers to push data to their customers – whether it’s a single customer or many customers. This same mechanism also enables customers to receive data from many producers (TAXII-v2.1). The TAXII version 2.1 specification is 79 pages and contains all the details needed to implement client and server participants in the CTI sharing models I mentioned earlier.
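To make the pull side of this model concrete, here is an illustrative Python sketch that constructs the URL and headers a client would use to request the objects in a TAXII 2.1 collection from a server's API root. The server address and collection ID are fictional, and `build_collection_request` is a hypothetical helper, not part of any TAXII library:

```python
# TAXII 2.1 clients and servers exchange this media type over HTTPS.
TAXII_MEDIA_TYPE = "application/taxii+json;version=2.1"

def build_collection_request(api_root, collection_id, added_after=None):
    """Return the URL and headers for polling a TAXII 2.1 collection's objects."""
    url = f"{api_root}/collections/{collection_id}/objects/"
    if added_after:
        # Optional filter: only objects added to the collection after this time.
        url += f"?added_after={added_after}"
    headers = {"Accept": TAXII_MEDIA_TYPE}
    return url, headers

# Fictional TAXII server API root and collection ID.
url, headers = build_collection_request(
    "https://taxii.example.com/api1",
    "9f8c3d21-4b6e-4c89-9e2a-1a2b3c4d5e6f",
)
```

An HTTP GET to that URL with those headers would return a bundle of STIX objects, which is the source-and-subscriber pull model in its simplest form; the push (channel) side works in the opposite direction.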
Threats described using STIX are not required to be shared via TAXII – any protocol can be used to do this as long as the sender and receiver both understand and support it.
A key benefit of using STIX and TAXII is standardization. When CTI producers publish CTI using a standardized schema like STIX, it makes it easier for organizations to consume it, even when they are using technologies from different vendors. If everyone uses the same standard way to describe threats versus proprietary protocols, CTI consumers get the benefit regardless of the vendors they procure cybersecurity capabilities from. In other words, cybersecurity vendors and teams can focus on innovation using CTI, instead of spending time devising ways to model and share it. These protocols help scale CTI sharing to organizations and technologies around the world.
Reasons not to share CTI
Many of the security teams I have talked to opt not to share CTI with other organizations, even when they have good relationships with them. This might seem counterintuitive. Why wouldn’t a security team want to help other organizations detect threats they have already discovered in their own IT environment?
There are at least a couple of good reasons for this behavior. First, depending on the exposure, disclosing CTI could be interpreted as an admission, or even an announcement, that the organization has suffered a data breach. Keeping such matters close to the chest minimizes potential legal and PR risks, or at least gives the organization some time to complete their investigation if one is ongoing. If the organization has suffered a breach, they’ll want to manage it on their own terms and on their own timeline if possible. In such scenarios, many organizations simply won’t share CTI because it could end up disrupting their incident response processes and crisis communication plans, potentially leading to litigation, including class action lawsuits.
A second reason some security teams opt not to share CTI is that they don’t want to signal to attackers that they know their IT environment is compromised. For example, when such a team finds a file suspected of being malware on one of their systems, instead of uploading a copy of it to VirusTotal or their anti-malware vendor for analysis, they prefer to do their own analysis behind closed doors so as not to tip off the attackers. Their reasoning is that once they upload the malware sample to an anti-malware vendor, that vendor will develop signatures to detect, block, and clean that malware, distribute those signatures to their customers, and share samples of the malware with other anti-malware vendors. The malware will also appear in anti-malware vendors’ online threat encyclopedias.
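Teams that analyze malware in-house still need a way to catalog samples and check for their presence on other hosts. A common low-risk first step is computing a cryptographic hash of the suspect file locally, which can be recorded in an internal case file and compared across systems without the sample ever leaving the organization. A brief sketch, where the file path is hypothetical:

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a file in chunks, without loading it all at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical suspect file; the resulting hash can be searched for on other
# hosts internally without submitting the sample to any outside service.
# sample_hash = file_sha256("/cases/2024-001/suspect.bin")
```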
A “best practice” that many malware purveyors use is to scan the malware they develop offline with multiple anti-malware products to ensure their new malware is not detected by any of them. This gives them a measure of confidence that they are still undetected in their victims’ IT environments. However, if at some point they see that their malware is being detected by the anti-malware products they test, they will know that one or more of their victims has found their malware, submitted it to an anti-malware vendor, and are likely investigating further to determine the extent of the intrusion. This is a signal to attackers that their victims can now detect one of the tools they have been using (the malware) and might be on the hunt for them in the compromised environment. As the detection signatures for the malware are distributed to more and more systems around the world, the chances of detection increase dramatically.
Consequently, many security teams do their own in-house malware reverse engineering and will not share CTI with other organizations, even the security vendors they procure products and services from, until they believe there is no opportunity cost to doing so. This approach gives them the best chance to find and exorcise the attackers before the attackers perform actions on objectives, such as deploying ransomware or destructive wiper malware.