Principles of computer conflict
Fundamentally, I view computer security conflict as a human-on-human conflict, albeit with the aid of technical tools. Automated defenses and static security applications are ultimately breached by intelligent attackers, which is why the strategy of defense in depth developed. Defense in depth involves layering security controls so that, in the event a single control is breached, the offensive efforts can still be prevented, detected, and responded to by further layers of controls[8]. This means defensive controls are placed throughout the network to detect attacks wherever they may be in their life cycle. This defensive strategy developed after years of relying on a hardened external perimeter, which continually led to undetected breaches. Now, as the offense develops their strategy to pivot through this infrastructure, the defense will similarly develop a strategy to detect abuse of, and enforce, the controls throughout their network. These models of opposing offensive and defensive strategies are popularly known as kill chains. Cyber kill chains are a Lockheed Martin evolution of classic military kill chains, showing the steps an attacker needs to carry out to achieve their goals and the best places to respond from a defensive standpoint[9]. While many parts of this kill chain can be automated, ultimately it is up to humans to pivot, respond to, and control any event that may arise. Kill chains are effectively a model to help visualize attack paths and formulate defensive strategies. We will also use an analogous model throughout the book known as attack trees. Attack trees are simply conceptual flow diagrams for how a target may be attacked. Attack trees will be useful for exploring decision options in a reaction correspondence and for seeing what either side may choose to do as a result[10].
Using kill chains for high-level strategic planning and attack trees for working out technical decision-making will give us models for analyzing our strategies moving forward[11]. Figure 1.1 shows an example of attack trees mapped to a kill chain from the original paper in which this combination was proposed[11], A Combined Attack-Tree and Kill-Chain Approach to Designing Attack-Detection Strategies for Malicious Insiders in Cloud Computing. In this example, the authors show an attacker installing a network tap to exfiltrate data:
Figure 1.1: Attack trees mapped to kill chains from A Combined Attack-Tree and Kill-Chain Approach to Designing Attack-Detection Strategies for Malicious Insiders in Cloud Computing
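While Figure 1.1 presents attack trees graphically, the same model is easy to capture in code. The following is a minimal sketch, assuming a simple AND/OR gate model; the `AttackNode` class and the node names are purely illustrative and are not taken from the referenced paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    """A node in an attack tree: a goal plus the sub-steps needed to reach it."""
    name: str
    gate: str = "OR"          # "OR": any child suffices; "AND": all children required
    achieved: bool = False    # leaf nodes are marked achieved directly
    children: List["AttackNode"] = field(default_factory=list)

    def feasible(self) -> bool:
        """Return True if this goal is reachable given the achieved leaves."""
        if not self.children:
            return self.achieved
        results = [child.feasible() for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Hypothetical tree for the network-tap example: the attacker must gain
# physical access AND install the tap; physical access can come from
# tailgating OR a stolen badge.
tap = AttackNode("exfiltrate via network tap", gate="AND", children=[
    AttackNode("gain physical access", gate="OR", children=[
        AttackNode("tailgate into facility", achieved=True),
        AttackNode("use stolen badge"),
    ]),
    AttackNode("install tap on switch span port", achieved=True),
])

print(tap.feasible())  # → True: one OR branch and its AND sibling are achieved
```

Modeling a tree this way makes it easy to play out a reaction correspondence: toggle a leaf (say, the defense revokes the stolen badge or hardens the span port) and re-evaluate which attacker goals remain feasible.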
While many principles of conflict will remain the same, ultimately this conflict takes place in a new, digital domain, which means that different laws and axioms apply to it, and often knowing these mechanisms better will give either side an advantage. This digital landscape is still evolving every day but is also built on a rich history of technology.
While it was once difficult to find cheap, dynamically scalable hosting and IP addresses on the internet, now multiple vendors offer these services and many more in what is known as the cloud. The cloud is just various virtually hosted and dynamically scaled Linux technologies. This shifting and evolving digital landscape has many rules and laws of its own, many of which will be considered crucial background knowledge for this text. It is expected that readers have a basic understanding of operating systems, executable files, TCP/IP, DNS infrastructure, and even some reverse engineering. One of the beautiful aspects of computer security is that it is a great confluence of so many different disciplines, from human psychology to criminology and forensics, to a deep technical understanding of computer systems. A solid grasp of the underlying concepts is important for computer conflict strategy at the higher levels. You need to know what can go wrong with the system to be able to verify everything is running properly.
Many of the strategies I cover in this book will be considered advanced in the sense that there are basic operational techniques that will be assumed, such as generally knowing how to perform network reconnaissance[12] or having a basic understanding of command and control infrastructure[13]. When I cover an assumed technique, I will try my best to link to a resource that conveys what I am assuming. I will also show many examples in the Python, Ruby, and Go programming languages, yet I will not explain the basics of these languages. It is assumed the reader is generally familiar with these languages; introductory resources for Python[14] and Go[15] can be found in the References section at the end of the chapter.
I won't be using advanced programming techniques in any of the languages, but readers are encouraged to look up basic operators to help understand the programs better. I will also reference many attacker techniques but will often not have the space to describe every technique in great detail. To help further define attacker techniques, I will refer to the MITRE ATT&CK matrix when referencing attacks[16]. This text will also reference as many open-source examples of techniques as possible. In those situations, I will often refer to their GitHub projects, and credit should go to all of those who have worked on those projects. All this is to say, if I mention a technology you are unfamiliar with and do not describe it in enough detail, please Google it on your own as it will help with the overall understanding of the theory or technique I am describing. One reason we study the offense so deeply in computer security is that knowing the attacker's available technical options helps the defender optimally strategize.
Offense versus defense
The game of computer security is fundamentally asymmetric because different technologies, skills, and strategies are optimal for the opposing sides. While we will see that various tools, skills, and strategies exist at the base of both sides, ultimately each side leverages specialized technology that should be specifically accounted for. In the military sphere, the arena is often described as computer network operations (CNO), with two distinct sides, computer network attack (CNA) and computer network defense (CND). We will refer to these sides throughout the book as offense and defense, and we will define their roles and tools on the network in much more detail as we go. While we can draw parallels between their strategies, they are often fundamentally different in how they go about achieving their goals. As a very quick example, the defense will set up multiple forms of monitoring and auditing, using technologies such as OSQuery, Logstash, ELK, or Splunk. The offense will often invest in completely different infrastructure stacks for scanning and pivoting, using technologies such as Nmap, OpenVAS, Metasploit, or proxychains as basic examples. It's important to remember that while many of the operating systems and technologies involved can be similar, each side will employ very different strategies and techniques to accomplish its objectives. This is also not a zero-sum game: objectives can be accomplished to varying degrees of success, and each side can succeed or fail in different senses within the same conflict. For example, the offense can get some of the data they were searching for before the defense expels them, while the defense can still be successful in defending their primary goal, such as uptime or protecting specific data.
Just because data is stolen (loss of confidentiality) does not mean the original owner loses access to it (loss of availability); confidentiality and availability are two different CIAAAN attributes (confidentiality, integrity, availability, authentication, authorization, and non-repudiation).
This means that if a defender cares most about uptime or business continuity, they could be breached, have their data stolen, expel the attacker, and still consider it a partial win from a defensive perspective. Throughout this book, we will examine how different strategies target different CIAAAN attributes to achieve their end goals, from both of the unique perspectives of offense and defense.
The defensive team is defined by the role of protecting the data, network, and computing available to the organization. It is most often referred to as the blue team, the incident response team, the detection team, or even just the security team. Their main method of determining nefarious activity on their network is often through setting up elaborate systems of centralized monitoring and logging tools throughout their computing environment. Typically, they have some level of management interface to their environment or fleet, such as SCCM on Windows, or Puppet or Chef more generally. This level of host management will allow them to install and orchestrate more tools to set up their monitoring. Next, they may install or utilize tools that generate richer, more security-relevant logs, such as OSQuery, AuditD, Suricata, Zeek, or any number of EDR solutions. Next, they would install log aggregation tools, such as Filebeat, Loggly, Fluentd, or Sumo Logic, which collect logs from around the network for centralized correlation and analysis. Finally, the blue team is ready to detect nefarious actions on their network, or at least understand when things may be going wrong. In an incident response situation where external consultants need to come in, the timeline will often be more aggressive and shorter than that already described. External incident responders will come in with ready-made scripts to deploy their tools and simply begin collecting forensic logs from all the hosts they can. In-house defenders have more time to set up richer monitoring, and we will see that this gives them an advantage in the conflict. One advantage that external consultants often have is that they may bring unique intelligence or tools from responding to many similar incidents.
As is often the case, these battle-hardened consultants may be better equipped with both tools and expertise than the in-house team, and it makes a huge difference. Regardless of the source of the defense, their mission is often the same: protect operations and expel any potential threat or offense.
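To make the log collection step more concrete, here is a minimal sketch of the transform a shipper performs before forwarding records for centralized correlation. The `to_event` and `ship` functions are hypothetical stand-ins for what tools like Filebeat or Fluentd do at much greater scale and reliability:

```python
import json
import socket
from datetime import datetime, timezone

def to_event(line: str, source: str) -> str:
    """Wrap one raw log line in a JSON envelope, the way shippers such as
    Filebeat or Fluentd tag records before forwarding them for correlation."""
    return json.dumps({
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "host": socket.gethostname(),   # which endpoint produced the log
        "source": source,               # which log file or tool it came from
        "message": line.rstrip("\n"),
    })

def ship(lines, source, send):
    """Forward each wrapped event via a caller-supplied transport (in a real
    shipper, a TCP socket or HTTPS POST to the aggregator; stubbed out here)."""
    for line in lines:
        send(to_event(line, source))

# Usage: collect events locally instead of sending them over the network.
events = []
ship(["Accepted publickey for root from 203.0.113.7"], "/var/log/auth.log", events.append)
print(json.loads(events[0])["message"])  # → Accepted publickey for root from 203.0.113.7
```

The value of this envelope step is that once every endpoint tags its records with host, source, and timestamp, the central analysis tier can correlate activity across the whole fleet rather than one machine at a time.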
Offense, on the other hand, is defined as the aggressor in this situation, the group attacking the computer systems. They can be a red team, a team competing in a competition, or even a real adversary, really any group or person attacking a computer network. However, this book is not for your typical red team or pentest team. What will unite the offense throughout this book is their use of guile and deception to gain the advantage. The tools used in these types of attack and defense competitions are not always the typical penetration testing tools. Just as not all vulnerability scans are pentests, and not all pentests are red teams, not all red teams are well equipped or have the right set of skills for this out of the gate.
We will be using a number of tools to obfuscate, persist, and even mess with the blue team, not something your average red team does. Even some adversary emulation tools are not up to par, as they will have some type of tell or work in a restricted manner. One of the conference talks that best embodies the spirit of this book or the red teams I imagine in this book is Raphael Mudge's Dirty Red Team Tricks[17]. A lot of the techniques he covers in that talk are from the National CCDC Red Team, so we will see a lot more content like that throughout this book. It is also important to keep in mind that this is not necessarily purple teaming. Purple teaming is when a red team and a blue team work together in tandem to improve a blue team's ability to detect various techniques. In purple teaming exercises, both teams are essentially working together to generate more high-fidelity alerting. In a purple team, the red team's goal is to paint a target by emulating a threat and help the blue team hit that mark by detecting the emulation. Here, we will discuss ways for the offense to gain the upper hand or for the defense to recognize and counter their opponent's plan of action. The strategies we will discuss in this book are to give one side or the other the advantage in a conflict. It is an important distinction to keep in mind as you read. This will also allow us to explore dirty, underhanded, or deceptive tricks that would likely be off-limits in a purple team exercise. I do think purple teams can learn a lot from reading this text and studying the strategies we discuss on both sides, but it is important to note that this is fundamentally not purple teaming.
There are many different strategies in the field of cybersecurity, both on offense and defense. Each of these strategies often comes with a tradeoff, even if that tradeoff is the complexity of the technique. While advanced strategies can excel in a particular scenario, for example, using process injection when there are no EDR or memory scanning technologies at play, they sometimes come with drawbacks against a further reaction correspondence. Process injection is a great example of a technique that excels at disappearing from traditional forensic log sources, but when a defender is looking specifically for process injection with capable tools, it tends to stand out from other programs. We will take a deeper look at the reaction correspondence around process injection in Chapter 3, Invisible is Best (Operating in Memory).
To take another example on the defensive side, there is a prevailing notion of endpoint security, or moving the bulk of the detection logging activity to the host. This could help detect endpoint compromise and recon from the endpoint, as well as detect memory injection and overall privilege escalation techniques. This reaction correspondence could then make using the technique of process injection less desirable because the attacker would lose some confidentiality and the defender can gain non-repudiation in the new scenario. We will cover this reaction correspondence specifically later in the book. This also goes against an older defensive strategy that was very popular a few years ago: network-based defense.
Endpoint-focused defensive controls can help you in a large decentralized network, such as the modern working-from-home environment, whereas a network-based strategy was designed to help you uncover and detect new endpoint compromise possibilities you may not know about, or endpoints that are not managed in your environment. Both defensive strategies involve tradeoffs, each possessing unique opportunities and blind spots. We will cover both strategies throughout the book, showing where each excels and falls short in a particular scenario. A network-based defense can help normalize traffic and provide additional controls like deep packet inspection, whereas endpoint-based defense can provide on-the-fly memory analysis. Each one offers different benefits and comes with different performance tradeoffs. Throughout this text, we will explore how different strategies exemplify various principles or can remove the core elements of security from the opponent, giving certain defender strategies a clear benefit versus popular attacker strategies.
Similarly, from the offensive perspective, two exceedingly popular strategies exist for lateral movement: moving low and slow around the network, or aggressively compromising and dominating the network. While being highly aggressive can work in a short-lived engagement cycle, such as an attack and defense competition or a ransomware operation, it is generally not a good long-term strategy as it will alert the defender to your presence. We will examine some scenarios where an aggressive offensive strategy can be successful, but generally, we will see the defender dominate these scenarios in the long run, as they have physical access and thus complete control over the availability and integrity of the infected hosts. The average pentest team tends to fit the same profile as a highly aggressive threat actor, as they simply don't have the time to budget for stealthy threat emulation and defense evasion. We will also examine a few short-lived scenarios where the attacker can dominate for a short period of time or buy themselves access for a little longer, such as in attack and defense competitions. In some of those short-lived scenarios, the attackers may even induce havoc on the network to cause disruption, but make no mistake, these are planned routines; they are not flailing around or trying random attacks. After those examples, a lot of this book will focus on various low and slow offensive strategies, showing attackers how to hide and deceive their opponent into thinking they are not compromised. These threat profiles better fit internal red teams and real threat actors, as they can budget the time and expense to go through the reaction correspondence with the defensive team. We will explore several advanced low and slow strategies that focus on deceiving and hiding from the opponent.
By subverting the defender's controls, the offense will be able to operate for longer and with more freedom, knowing they are not being detected with routine hunting operations. Similarly, the defenders should learn how to recognize these signs of deception and sow their own seeds of disruption. From a defensive perspective, it is far better to hypothesize attacks, model these scenarios, and play-test response plans to identify your own blind spots before the attackers arrive.
A lot of my experience here is drawn from over eight years of attack and defense competitions. I have played in up to four of these competitions in a single year, along with numerous other capture the flag (CTF) competitions and red team operations for my day job. These real-time attack and defense competitions have been a major part of my last decade and are quite different from a traditional cyber CTF. Attack and defense competitions can be thought of as a real cyber wargame, where one group defends a computer network and another group attacks that network. Each competition normally has a different implementation of these core rules, but the game is generally a group of people on each side trying to defend or attack certain data on a given computer network. These events can be extremely competitive, where sides can sometimes game the game, but often there is a complex series of rules and escalating strategies, played out in real time. The tools are often vastly different from traditional red teaming or CTF tools, focusing instead on command and control, persistence tricks, and even trolling. This experience is vital because it offers a time-boxed conflict where sides can explore various offensive or defensive strategies in a zero-consequences environment. They can then iterate on these experiences and develop their strategies in quicker loops than real engagements allow. This means sides can be creative and try different strategies to explore the given tradeoffs in a real conflict scenario. My experience is also drawn from real-life incident response investigations, where attackers have been deceived into making mistakes or revealing their identities, resulting in them being expelled from the environment and, in some cases, brought to justice. These real conflict scenarios generally take much longer to incorporate the lessons learned, often several months to a year to see feedback, in contrast to short-lived weekend competitions.
While I have extensive red team and purple team experience as well, I think that is less directly applicable, as any advantage achieved there is usually limited for the benefit of the customer. While many of the common skills and tools for identifying and exploiting vulnerabilities are the same, these are just a means to our end goals on the offense in an attack and defense competition. Our true goal on the offense is to persist undetected in the environment while accessing our target resources for as long as possible, for which we often use tools that are not used in a traditional pentest. While these tools can be part of threat emulation frameworks, operators need to be intimately familiar with the techniques on their own. I mean to say that penetration testing is often fundamentally different from these attack and defense competitions because the motivations and outcomes are not always the same as in direct competition. I suppose it depends on how competitive the red teaming can get, but I would generally bucket red teaming and purple teaming into different categories, as I typically do not see them going to such extreme measures as we will explore in this book. That said, real cyber conflict experience is invaluable in this arena.
I am most familiar with CCDC, or the Collegiate Cyber Defense Competition, where I have been on the national level red team for eight years, competed in over a dozen CCDC events, and led the red team in the Virtual Region for three years now. In CCDC, college students play as dedicated network defenders, and our team of volunteers plays the offense in an attack and defense competition[18]. The network environment is often unknown to both teams, and both teams start at the same time. This gives an advantage to the attackers as they can scan the infrastructure and pivot to exploiting vulnerabilities quicker than the defensive teams can access, understand, and secure each individual machine on their newly inherited network. Still, the defending team often gains the upper hand over the following 48 hours and evicts the attackers by the end of the competition. The national-level red team for CCDC consists of some of the best offensive security engineers from around the United States, each bringing signature techniques and tools that have been honed over years of playing in this event. On the national red team, I write and support a few tools, including a global Windows malware dropper we used for a number of years. This Windows-based implant has gone through several evolutions, from using script-based languages such as PowerShell to using custom loaders, individually encrypted modules, and loading further implants into memory. We have also drastically expanded on our covert command and control channels that we use in our backdoor infrastructure. This CCDC competition was part of the inspiration behind the tool Armitage, which became the popular post-exploitation framework Cobalt Strike, written by Raphael Mudge on the CCDC red team in 2010[19]. We will look at this evolution of tools and show how some strategies routinely outperform others even when under the direct scrutiny of the blue team.
In this book, we will discuss a number of strategies both sides can use in such a conflict, many of which I have seen used firsthand.
I am also familiar with both sides of the spectrum from playing in Pros V Joes (PvJ), a popular attack and defense competition run at various American BSides security conferences[20]. I've played in PvJ for over five years, three years as a Joe competitor on a team and two years leading a team as a Pro. PvJ is unique in that each team has a similar network to defend but can also attack the other teams. The scoring in the game is based on your own team's uptime, so it is far more important to play defense than offense. There are typically four teams and about eight services each team needs to support throughout the competition. Each team has roughly the same network, ten players, and two days of game time, with points going to your team for uptime and solving business injects, and points being taken away when another team scores an active compromise against you. Within a team, the defensive and offensive roles involve very different tasks and are largely independent of one another. For this reason and a few others, I like to have my team focus on defense first, and offense when we have spare cycles.
Generally, I will split my team 80/20, with the bulk of the team working defense in terms of expertise and preparation time (which we will spend a lot of time discussing in the next chapter). There are a few reasons for this. Mostly, it is harder to work back a compromise than it is to delay attacking; the vulnerabilities will still be there to find later in the game. That means if we focus on defense upfront, we can shift more people to offense later if we think we are reasonably secure. In PvJ, the team wants to be reasonably assured it is operating from a position of security, otherwise our own attacks and offensive operations can be easily thwarted if we lack confidentiality in our actions and infrastructure. Playing in PvJ, or any attack and defense competition for that matter, can be very stressful, as you are simultaneously trying to secure your environment while responding to live attackers on your systems. This oversaturation of tasks and lack of resources to do it all is one of the core tenets of these attack and defense competitions, and a major reason we will focus on strategies that put your opponent (the offense) on the back foot even when they compromise a server. Any time you can buy between when a server gets compromised and when the attacker pivots to their goals is critical time you can use to detect and respond before they make a major impact.
Finally, aside from running many red team operations throughout my career, I have been involved in numerous incident response engagements with real threat actors. I tend to view real incident response conflicts as more closely related to real offense versus defense than traditional red team exercises (the gloves are normally still on when running legitimate red team operations). Real incident response is often a no-holds-barred type of competition in which the stakes are very real: getting data or assets stolen as the blue team and getting expelled or facing legal repercussions for the red team. Real incident response operations often involve highly competitive tricks to get an advantage over the attackers and bring them to justice, which we will explore throughout this text. Such gloves-off techniques may involve using honey pots to catch the attackers, reverse engineering the attacker's tools to find errors or vulnerabilities in them, and even hacking back the attacker's infrastructure to gain more intelligence on their operations. These gloves-off operations will be the majority of what we explore in this book. This means getting the advantage over your opponent, sometimes in an unfair way, and leveraging this advantage to win the game. For example, the defense or blue team would likely not backdoor their own code base for a red team exercise, but they might if someone was routinely stealing their code and they wanted to covertly discover where the code was being compiled or run. Such techniques do not really fit in an exercise where the end goal is to harden the environment or increase the organization's overall security insights. However, these techniques can shine in a real conflict and sadly are often overlooked in the industry as available options. Many red teams don't focus on the malware or the same tricks of the trade that real attackers focus on.
This text will focus on the more devious tricks that both offense and defense can use but often don't unless in a real conflict.
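As a rough illustration of the code-base instrumentation idea above, where a defender covertly discovers where their stolen code is being compiled or run, the sketch below gathers minimal environment details and hands them to a caller-supplied transport. The function names are hypothetical, and a real operation would disguise the callback far more carefully:

```python
import getpass
import platform
import socket

def canary_payload() -> dict:
    """Collect minimal details identifying where the code is running; a real
    canary would send this to a collector the defender controls."""
    return {
        "host": socket.gethostname(),
        "user": getpass.getuser(),
        "os": platform.system(),
    }

def fire_canary(report):
    """Invoke the caller-supplied transport; in a planted code base this might
    be an innocuous-looking request buried deep in the build scripts."""
    report(canary_payload())

# Usage: capture the payload locally rather than sending it anywhere.
seen = []
fire_canary(seen.append)
print(sorted(seen[0].keys()))  # → ['host', 'os', 'user']
```

The point is not the mechanism, which is trivial, but the deception: the thief runs what looks like ordinary build tooling and, in doing so, reveals exactly where the stolen code ended up.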
Next, I am going to touch on several principles or themes that I will reference throughout this book. The Oxford English Dictionary defines a principle as "a fundamental truth or proposition that serves as the foundation for a system of belief or behavior or for a chain of reasoning." These principles exist on both sides of the game; they can lend an advantage when leveraged effectively and limit your opponent's available options at any given time. None of them are required to carry out operations; however, adhering to them in some way is likely to give you an advantage. While these themes are not foolproof, they can be used to trick or overpower your opponent in a conflict. These principles of computer conflict will help us analyze our strategies and lead us toward dominant positions in a network conflict. I'm sure people can find exceptions to these principles; they are not laws. The digital environment is simply too complex and there are too many variables at play, but I encourage you to think about the principles critically and consider how they can be applied to your operations.
Principle of deception
Like in all conflict, the ability to trick or deceive your opponent can give you a great strategic advantage. As Sun Tzu famously wrote, "all warfare is based on deception"[21]. This principle applies to conflict as a whole, not just computer conflict, but we will see many of the techniques highlighted in this book exemplify it. Many books have been written on the principle of deception in conflict, both throughout history and from different cultural perspectives. Routinely, where guile is used, civilizations triumph in their battles. The element of surprise, and the ability to make sure you are not being deceived, are crucial to all forms of conflict or competition. It should come as no surprise that deception is used in computer conflict as well. In this book, we will explore some specific technologies and examples that champion the concept of deceiving your opponent, especially in such an asymmetric game. The examples of computer deception we will see range from common and non-technical to highly technical and low-level; techniques you may not traditionally think of as deception at first, yet that exemplify the concepts I've highlighted here. Throughout this text, we will borrow a few of Sun Tzu's philosophies, but make no mistake: the landscapes have changed so much that most of The Art of War does not really apply to the digital domain. Nonetheless, we will still borrow concepts such as avoiding the opponent's strengths and targeting their weaknesses, in terms of the areas we choose to operate in as an attacker.
While it's not always the most glamorous approach, if you can avoid the opponent's strongest tools or operations, you can win the battle by forcing your opponent into territory you are more comfortable with. On the offensive side, this could look like avoiding a host, or using different techniques on it, if it has an EDR agent installed. On the defensive side, this could mean limiting all outbound traffic or sending it through a proxy to hinder egress connections out of a certain area. These examples are less about actively deceiving or manipulating the enemy's perception of the conflict, but they are still ways to force the opponent to meet you on your terms while avoiding environments or situations where they have the advantage.
Barton Whaley, who studied the element of surprise and deception throughout his career, defined deception as "any information (conveyed by statement, action, or object) intended to manipulate the behavior of others by inducing them to accept a false or distorted perception of reality — their physical, social, or political environment"[22]. This book will aim to add the digital, or cyber, environment to that list. We will show prime examples where either side can manipulate the opponent's perception of digital reality, either by tricking them into not seeing the operations or by having them operate in a fake and highly monitored environment. The book Deception: Counterdeception and Counterintelligence by Robert M. Clark and Dr. William L. Mitchell elaborates on these definitions: "Deception is a process intended to advantageously impose the fake on a target's perception of reality"[23]. Clark and Mitchell also highlight why and when to use deception, writing the following: "One does not conduct deception for the sake of deception itself. It is always conducted as a part of a conflict or in a competitive context, intended to support some overarching plan or objectives of a participant"[24]. Clark and Mitchell go on to specify how deception is critical to cyber operations, later writing, "Cyber deception is like all other deceptions; it works by hiding the real and showing the false"[25]. Clark and Mitchell give the example of a honey pot, where the defender has created fake infrastructure with the goal of luring the attacker in and getting them to reveal themselves. We will cover this example specifically, and many more where each side in this conflict can leverage deception to gain a substantial advantage. In Barton Whaley's Toward a General Theory of Deception, he covers these same two categories: showing the false and hiding the real. This theme of both hiding the real and showing the false is prevalent throughout deception literature.
We will see examples of both highlighted throughout the book, such as defensive obfuscation as an example of hiding the real, where we add layers of unnecessary computation to protect our tools from analysis. This simple form of deception will be used heavily throughout the text, considering obfuscation as a best practice in most of our operations. We will see showing the false when we look at purposely vulnerable infrastructure that is designed to detect or lock an attacker out.
We will also see hiding the real, where rootkits are used to hide the real operations from the rest of the operating system. In Kevin Mitnick's The Art of Deception, he tells multiple stories (albeit slightly fictionalized) where companies get hacked through little more than social engineering, deceptive tactics, and something we will see later in this chapter known as the principle of humanity or abusing human access to computer systems[26]. As we can see, the use of deception in conflict has a proven benefit throughout history. The principle of deception is essential to conflict. Furthermore, deception has a documented use in computer hacking, albeit not a well-examined one. We will examine this relationship more closely in terms of cyber strategy; when harnessed correctly, deception at a strategic and technical level can give a tremendous advantage over an unsuspecting opponent.
We will discuss the use of obfuscation to aid deception throughout this book, but it should not be misunderstood in a security context. Obfuscation is not a replacement for encryption or solid security foundations that protect the elements of confidentiality, integrity, availability, authentication, authorization, and non-repudiation (CIAAAN). When we use obfuscation, we will use it as an additional layer to actively protect our tools and operations, while still making sure we have the basic controls in place, such as encrypting our communications. We will use obfuscation on both sides as a form of camouflage to hide our security operations from general scanning or exploration. While obfuscation should not be relied on for security, we will be leveraging it wherever possible to make our tools harder to analyze. From both a defensive and an offensive perspective, making tools hard to analyze and reverse engineer makes them harder to exploit and disable. In normal security discourse, we regularly hear that obfuscation is not equivalent to security, and while this remains true, we will be layering obfuscation on top of all the techniques we can. The use of obfuscation is part of our principle of deception in that we are trying to hide what is real, but we will also use it as a general defensive technique in that we are using it to harden all our tools.
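As a minimal illustration of obfuscation layered on top of (never instead of) real security controls, consider storing a tool's telltale strings in an encoded form so they do not appear verbatim to casual static analysis. The key and the indicator string below are purely hypothetical, and a single-byte XOR is a sketch of the idea, not a recommended scheme:

```python
# Minimal sketch: XOR-based string obfuscation as an extra layer of
# camouflage. This hinders casual static analysis (e.g., running
# `strings` against the artifact) but is NOT a substitute for encryption.
KEY = 0x5A  # hypothetical single-byte key

def obfuscate(plaintext: str) -> bytes:
    """Encode a string so it does not appear verbatim in the tool."""
    return bytes(b ^ KEY for b in plaintext.encode())

def deobfuscate(blob: bytes) -> str:
    """Recover the original string at runtime, just before it is used."""
    return bytes(b ^ KEY for b in blob).decode()

# Ship only the obfuscated form; decode just-in-time at runtime.
C2_HOST = obfuscate("c2.example.com")  # hypothetical indicator to hide
```

At runtime the tool would call `deobfuscate(C2_HOST)` immediately before use, so the cleartext indicator exists only briefly in memory rather than sitting in the binary.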
Principle of physical access
Physical access is a vital principle to remember in terms of computer security. Generally speaking, whoever can get physical access to a device can achieve an ultimate level of root control by booting the device into single user mode, forensically accessing the disk, or even powering off or destroying the device. This level of physical access can certainly be countered to a degree, for example by using full disk encryption, or by locking your servers in a server case. Ultimately, as an attacker, you must remember that no matter how deeply you compromise a device, the defender has some level of ultimate root control if they physically own the device.
This means the defender can forensically analyze the device, pull it off the network, or even shut it down and reimage it. The principle of physical access is a key theme to remember and reminds us that physical security often trumps digital security in terms of the hierarchy of needs. You can also extend this principle to management interfaces. Root access to a machine is great, but what if it turns out it is a virtual machine? Root or administrative-like access to a management interface like AWS[27] or ESXi[28] can be just as powerful as physical access. Physical access to those cloud servers would still trump access to the management interfaces, for example by allowing raw process memory to be dumped. This is the principle of physical access: ultimate root control follows physical ownership of the computing devices.
The attacker can still get to a sweet spot here by compromising a user and obtaining root access on their machine. Often, the user will not be security savvy, and the offense will have an advantage in this situation, masquerading as the user and gathering more information on the network. So long as they do not draw the attention of the incident response (IR) team, they are in a dominant position and can manipulate the user and any automated security notifications on the endpoint. If the IR team responds and has kernel-level control, the ability to segregate the host on the network, or even the ability to do dead disk forensics, they will often regain the upper hand and a dominant position, having removed the attacker's ability to exert control over the device. The attacker can block the defender's network access with a series of tricks we will explore, but this will only buy the attacker limited time until the defender can physically respond to the device. Once the defender or one of their agents can physically reach the device, they can power it off, pull it off the network, or perform any form of forensics. Live forensics, where the defender responds to the machine before powering it off[29], can be chosen in an attempt to capture running artifacts, logs, and application memory; dead disk forensics can be used to ensure the attacker has lost all control. While live forensics can be done in a number of ways, sometimes with the attacker still in control of the host, modern EDR frameworks can both quarantine the host so only the defender can access it and respond to the machine live. Often a combination of live forensics and dead disk forensics is done after the device is removed from the network, or quarantined, ensuring the offense has lost all forms of remote command execution. The group with physical access can always supersede the opponent by removing access via the network or the power supply.
Granted, they may not always be able to access the data they want after removing access; it may be encrypted or perhaps was only available temporarily in RAM.
The corollary to this principle is that physical security almost always trumps digital security except in terms of reach. No hacker is above kinetic response, legal response, or even foreign response. Likewise, if a server exists somewhere physically, it can be seized. Data housing locations should be physically secure, preferably in a data center.
Likewise, operating locations should be physically secure and, if needed, anonymized to prevent data gathering. It is highly unlikely an actor would resort to a physical escalation, but preparing for the event will also help curb crimes of opportunity. Ideally, rules and/or scenarios would keep the digital conflict in the digital space, but this is an obvious escalation you cannot overlook. Data encryption is one of the strongest tools available for physically protecting data at rest. Because physical security is such a trump card, all hard drives should be encrypted at rest using industry standards such as LUKS, FileVault, or BitLocker. Furthermore, it helps to password protect or encrypt any cryptographic keys and passwords your group needs to store.
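One common way to password protect stored keys, as suggested above, is to derive the actual encryption key from a passphrase with a key derivation function, so nothing usable sits on disk in the clear. A minimal sketch using PBKDF2 from Python's standard library follows; the iteration count and salt size here are illustrative choices, not a prescription:

```python
import hashlib
import os

# Sketch: derive a 256-bit key from a passphrase with PBKDF2-HMAC-SHA256
# (Python stdlib). The derived key can then encrypt the secrets your
# group stores, so possession of the disk alone is not enough.
# Iteration count and salt length are illustrative assumptions.
def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode(), salt, iterations, dklen=32
    )

salt = os.urandom(16)  # random salt, stored alongside the encrypted secret
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32  # 256-bit key, e.g., suitable for AES-256
```

The salt is not secret; it only ensures identical passphrases do not yield identical keys, while the high iteration count slows offline guessing.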
Principle of humanity
Each side has computer systems that are generally in use by other humans. As Matthew Monte puts it in Network Attacks and Exploitation: A Framework, "CNE is grounded in human nature. The attacker is a person or a group of people. The attacker may be a lone actor, a well-ordered hierarchy, or a loose conglomeration of thousands, but regardless the attacker is human"[30]. This means those humans are susceptible to being tricked, deceived, and compromised and are prone to error. I tend to combine this principle with Monte's Principle of Access as well, where he states that "because data or systems must be accessed by humans, the attacker knows there is always a viable path to their target data"[31]. To me, that is the corollary to all computers and data being tools of humans and thus accessible through human means as well as technical means. This will show itself often in two ways throughout this text: human error (catching the other side making mistakes and exploiting those mistakes) and compromising the human element of the computer systems or mimicking normal computer usage.
We will routinely see capitalization on human error throughout this text through active deception, via hiding amongst the existing complexity, through dangling juicy lures, and even through critically analyzing the tools of our opponents. I will make a point of highlighting where human mistakes can be taken advantage of, from simple things such as configuration errors, typos, and password reuse to larger organizational mistakes such as leaving management interfaces or operational infrastructure exposed. An example of leaving management or operational infrastructure exposed could be exploiting a team's testing infrastructure before it is secured or in the event that they forget to update it. When properly exploited, leveraging your opponent's mistakes can give you a huge advantage. We see this played out when we exploit the principle of planning. On one hand, planning further enables the team to create repeatable operations and playbooks, while on the other hand, if these wikis aren't secured properly it can lead to grave information disclosure and tip off their opponents to the tools and tactics at play.
Keeping these plans confidential is one of the core tenets of information security, as is making sure their integrity can be verified and the plans are generally available when the team needs them. To help counter human compromise or administrative compromise, we will also explore alternative means of access, alternative means of communication, methods for out-of-band verification, and multiple ways to authenticate members and actions. These contingencies can help alleviate the burden of the principle of humanity, allowing a compromised team to shift and re-establish a secure operating space quickly and effectively.
I have heard people say red teaming is as much about physical security and social interactions as it is about the technology and digital security at play. Like any secret agent movie, if you can sneak into the operating room or get the password from the employee at the bar, then you may not need to hack the server. Chris Nickerson of Lares used to have a slide in a presentation that described red teaming as a blend of physical, social, and technical expertise to create threat emulation[32]. If the physical aspects are covered on the principle of physical access, then the principle of humanity covers the social aspects of the threat. While all of the techniques we explore will fundamentally be about hacking some technology, we must not forget this principle or the human aspect of these computer systems. Abusing the principle of humanity is akin to taking the front door and having the organization or application accept you for another user. After the compromise or exploitation, the victim may have lost some attributes of authentication and/or authorization.
Principle of economy
It is obvious but worth stating that both sides have limited resources; there is only so much money, talent, consulting, or effort a group can afford before it no longer makes sense. Under this premise, all security operations and defensive operations are a long game of survival. Simply put, you cannot throw an unlimited budget at everything; each side will have to plan out and budget their resources. One of the most limiting resources on both sides is often time. The attacker has a limited time they can remain hidden and every moment adds increased operational risk for them. The defender has a limited amount of time to set up their defenses. No defense is ever perfect when the attacks happen, but as Donald Rumsfeld said, "You go to war with the army you have, not the army you might want or wish to have at a later time." The defender also has a limited amount of analyst hours they can devote to reviewing logs or alerts, standing up new infrastructure, or conducting incident response operations. These limitations are constraints both sides must operate within, choosing where to devote resources based on their current strategies.
That said, large organizations often have the benefit of investing more money for more resources, both in technology and talent. That is, you can do alchemy of a sort, buying time with money or hiring more people to get more man hours. Granted, man hours do not scale perfectly horizontally. That is the lesson of The Mythical Man-Month on software engineering: adding more people to a technical project simply does not make it go faster; in some situations, it can even slow it down[33]. Scaling in tech must be done strategically. A lesson I will repeat throughout this book is that quality over quantity, in terms of technical expertise, will pay off exponentially downstream. That is to say, paying a more expensive and highly skilled engineer is sometimes a better strategy than hiring a few less skilled engineers. We will see that this principle relates heavily to the principle of planning as well. For example, having plans for how to scale the operation as well as how to operate at a tactical level will keep operations growing and running organically.
Expertise is also a seriously limiting factor in the cybersecurity landscape. The importance of the ability to develop new capabilities or operational expertise cannot be overstated. Talent and expertise are often defining factors and will act as a force multiplier for all subsequent operations. You will want to capture your team's collective expertise in codified platforms, runbooks, and operational procedures. There are many different types of expertise, so you cannot bucket them all into a single category, and they all provide unique value for your team. For example, software development expertise is vastly different from exploit development expertise, reverse engineering expertise, or even general incident response expertise. You should strive to have experts in each area you are operating in, with a focus on quality over quantity if you can influence it. Ultimately, this means team building, prioritizing, and training resources should be paramount concerns. A certain level of expertise and talent should be required of the team for each endeavor it undertakes; a single weak link or poorly performing aspect of the program could bring the entire team down in a conflict. When looking at how to equip your team with expertise, remember that cross-functional expertise can be incredibly important, and sometimes acts as a force multiplier. Another core capability in terms of resources here becomes project management: making sure projects are on schedule, meet budget, are sufficiently staffed, and are not over-resourced.
Principle of planning
The word strategy means a plan to achieve a higher-order objective. Whether experts want to believe it or not, they often have many plans already constructed mentally, just not written out. I urge readers to write these plans down and practice them with their teammates; this will work out any kinks or blind spots in the plan. Sun Tzu talks about planning throughout his book, saying "Plan for what is difficult while it is easy, do what is great while it is small."
Writing the plan out is one step closer to pseudo-code, which is one step closer to coding out the plan, and finally automating the actions as a tool. This practice of automating your team's techniques will not only level up the entire team but will also solidify the group's operations, making them more consistent. Also, the existence of code does not mean projects can forgo documentation. Complexity is the enemy of operations, so documents should be straightforward and designed to help operators or developers rapidly find more information. Planning should also incorporate high-level strategy. This will help avoid analysis or development paralysis. If plans are clearly laid out from a strategic point of view, and runbooks exist to help the team accomplish any technical tasks, they should be free to operate organically. Ideally, from a computer science perspective, you would want to codify and automate as much of this as possible. From this perspective, having team members dedicated to tool or infrastructure improvements makes a lot of sense economically. Tool development roles may seem extraneous on an operational team, but their real value pays dividends by codifying and automating the team's methodologies. This is a great way to level up everyone on the team by programming or curating common tools for people to use. It is also not enough that the team just uses the same tools; they should know how the tools work at a basic or deeper level. Often tools will hit corner cases, or as we will see later, can be deceived, so it is important to understand how the tools fundamentally function, and how they can be subverted. Having subject matter experts for each of your tools or processes is a great way to diversify the team's expertise and responsibilities.
Planning and having runbooks will give operators on either side a definite advantage. The field of computer security is extremely complex, and cyber-attacks will move very quickly and subtly, as we will see in later chapters. Your team must be equipped to know exactly what to do and how to recognize or analyze certain signals to react properly. This often means creating lists or operational security guides to demonstrate and review techniques that can be referenced to minimize human error. US Army Field Manual 3-0 (FM 3-0) includes a principle of war called simplicity, which states that making simple, straightforward plans that any level can follow at any time will enable coordination and help reduce errors and confusion[34]. Canadian principles of war also explicitly mention Operational Art and Campaign Planning as one key principle, which shows the focus on and relationship between strategic planning and tactical execution. If the planning can permeate down to the tactical level, operations can be carried out with many edge cases or errors removed. This allows both scaling human operations and maintaining quality at an operational level[35]. These levels of planning also require training regular personnel on the planned strategies and routinely practicing the operations.
This level of planning and training will assuredly drive operational expertise. Additionally, the Canadian Forces Joint Publication on Operational Planning calls for contingency plans to be developed in each of the planned strategies. It is not enough that you have a plan to hunt or harden, but you need contingency plans on top of them, and you need to practice the response when the plan takes an unexpected turn. This level of planning should keep plans flexible. Plans are expected to go wrong or deviate; they exist as tools to help operators take action, but ultimately operators should be empowered to diverge and make decisions where they see fit. As Mike Tyson aptly put it, "Everyone has a plan until they get punched in the mouth." It is important to remember to keep plans simple and high level. Remember, these are tools to help guide experts to make sure they are not forgetting edge cases or making mistakes. By keeping plans simple, they can also remain flexible, and you can easily plug and play operations if they have similar processes and boundaries. If you keep the planning simple and atomic you should also be able to create more plans and iterate on them. It is not enough to create a plan once and forget about it; plans need to be maintained. The plans should be living documents, easy to edit by any team member, and reviewed once a week or once a month so everyone is familiar with the changes as the plans evolve.
Often, experts are turned off from the idea of codifying their operations or creating checklists. In The Checklist Manifesto, we can see how several high-performance and high-complexity fields were transformed by the simple act of using checklists, and the idea for runbooks follows exactly that[36]. By planning out actions and contingency reactions we can give operators of either offense or defense a significant advantage. This book aims to cover several scenarios and provide runbooks to gain the advantage or break a perceived advantage in the conflict. Before an operation starts, make sure you plan how it will or can end. As an attacker, it can help immensely to plan out your operation life cycles from the start. For example, on the CCDC red team, we program kill dates into our malware such that many of our implants will not work or kill themselves after a certain date. This is to both reduce the unintentional spread and to stop unintended post-game analysis. This can also be seen with the principle of time, but you have to assume your obfuscation or controls will be breached eventually, and plan accordingly for when that contingency happens. Questions about the end of a campaign, such as "do we have a way to decommission infrastructure or accounts that are no longer used?", are important to ask at the beginning. While this simple planning may seem like good hygiene, it can actually save you from critical mistakes down the road.
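The kill-date idea described above can be sketched in a few lines: the implant checks a hard-coded date on startup, and past that date it refuses to run and makes a best-effort attempt to remove itself. The specific date and the self-deletion behavior here are illustrative assumptions, not the CCDC team's actual implementation:

```python
import datetime
import os
import sys

# Sketch of a kill-date check: after the hard-coded date, the implant
# stops working and tries to delete itself, limiting unintentional
# spread and unintended post-game analysis. Date and self-delete
# behavior are hypothetical.
KILL_DATE = datetime.date(2020, 5, 1)  # hypothetical end of the engagement

def expired(today: datetime.date, kill_date: datetime.date = KILL_DATE) -> bool:
    """Return True once the kill date has passed."""
    return today > kill_date

def check_kill_date() -> None:
    """Call early in main(); never operate past the kill date."""
    if expired(datetime.date.today()):
        try:
            os.remove(sys.argv[0])  # best-effort self-deletion
        except OSError:
            pass  # e.g., read-only filesystem; still refuse to run
        sys.exit(0)
```

Calling `check_kill_date()` as the first action in the implant's entry point ensures the check cannot be skipped by later logic.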
Principle of innovation
Computer science is a staggering tower of complexity, innovations, and abstractions built on the shoulders of prior abstractions, several times over. Merriam-Webster defines innovation as "a new idea, method, or device; a novelty." The sheer complexity of computer security allows for lots of room in terms of innovation by simplifying, combining, or exploiting existing processes through automation or tools. This can be as simple as writing a tool on the offensive side or finding a new log source or forensic artifact on the defensive side. For whatever reason, it is often the offense that innovates faster, picking up and trying new things to see what will work. This is probably because more infrastructure and planning is required on the defensive side, so by its nature, it is slower to change and implement new strategies. Specifically, with the offense, we see lots of new exploits being released weekly, with varying degrees of advance disclosure for patching. These innovations of 0-days or even n-days, whether second-hand or sourced via one of the groups in the conflict, can add lots of uncertainty for the defense and have spawned strategies such as defense in depth[37]. These innovations can give either side a tremendous advantage by rapidly exploiting or changing the landscape, sometimes unbeknownst to the opponent. Innovation can come in many forms, as we will see, but often requires an upfront cost of talent and time in the form of research. Sometimes innovations can be crowdsourced or obtained via public research. For this reason, it is good to have current intelligence on the threat landscape. This innovation can also come with unforeseen drawbacks, such as bugs in the process or code. That said, simple technical innovation can also change the tempo or landscape of the conflict for either side. One example that comes to mind is FIN7 using shim database stubs for persistence[38].
While this technique initially gave them an undetected persistence method, it was later analyzed, parsed, and the evidence was exploited to the advantage of the defenders[39].
We will revisit this principle throughout the book, but especially during Chapter 7, The Research Advantage, which is largely about capitalizing on this principle to gain a dominant position over your opponent. Reverse engineering is an amazing expertise to have for this purpose. This skill can help you triage binaries, extract indicators from malware, or even find vulnerabilities in applications. Operationally, reverse engineers can help you learn more about your opponents in a conflict by analyzing their tools. During the planning phase, this skill set should not be overlooked, as we will see how crucial it will be throughout the book.
Analyzing the opponent's tools is critical for both sides. On the defensive side, it can help with clandestine analysis and/or attribution. It can also help find forensic artifacts the attacker's tools leave behind or vulnerabilities in the tools themselves. On the offensive side, the advantage is obvious for developing exploits for software in the target environment. The offense can take it further; by scrutinizing the tools the defense uses for detection, the offense can exploit, disable, or circumvent these tools (which we've done in the CCDC competition against some of the custom detection tools, such as The University of Virginia's BLUESPAWN[40]).
Experts in information security often say assume breach or assume compromise, which is the principle of innovation and the principle of time at play. The human ability to hack static technologies is so profound that industry experts often claim nothing is unhackable, which is to say the human mind is so great that it can eventually overcome any static defense or tool. In a way, these principles are what helped spawn defensive strategies such as defense in depth. Throughout this book, we will revisit these principles in our strategies to see how we can leverage innovation in a reaction correspondence to gain a dominant position. Innovations can be simple or highly technical; what matters is changing the tempo of the conflict or gaining a surreptitious advantage over the opponent.
Principle of time
The renowned samurai Miyamoto Musashi once wrote, "There is timing in everything. Timing in strategy cannot be mastered without a great deal of practice"[41]. This principle of timing has a lot to do with the principles of deception, planning, humanity, and innovation. While this principle could just be mentioned as a footnote along with the others, I think it is important enough to warrant its own section, especially in the context of attack and defense competitions. I think the highly limited context of a competition's time frame can be a really important factor to consider in terms of avoiding initial waves of compromise, being able to focus on an adversary you know is there, or being able to cause just enough havoc until the time runs out; luxuries that don't exist in the real world. Still, I think there are several corollaries we can draw that will give an advantage in real-world conflict when we consider the principle of time.
All computer conflict is based on time in several ways. Encryption security is often thought of as a function of time in terms of the amount of time until a certain key can be brute-forced. The very idea of computational security is security as a function of time, really representing how long a certain thing can be thought of as secure until a reasonable adversary could have broken it[42].
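The idea of security as a function of time can be made concrete with a back-of-envelope calculation: how long would an exhaustive key search take at a given guess rate? The guess rate below is an arbitrary assumption chosen only to illustrate the scaling:

```python
# Back-of-envelope sketch of computational security as a function of
# time: expected years to brute-force a key by exhaustive search.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_brute_force(key_bits: int, guesses_per_second: float) -> float:
    """Expected years to find a key (on average, half the keyspace is tried)."""
    expected_guesses = 2 ** (key_bits - 1)
    return expected_guesses / guesses_per_second / SECONDS_PER_YEAR

# At an assumed billion guesses per second, a 56-bit key (DES-sized)
# falls in about a year, while a 128-bit key takes ~5e21 years, far
# beyond any reasonable adversary's horizon.
print(years_to_brute_force(56, 1e9))   # roughly 1.1 years
print(years_to_brute_force(128, 1e9))  # roughly 5.4e21 years
```

Each additional key bit doubles the work, which is why "secure" is always relative to the adversary's resources and the time window that matters.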
As an attacker, when you see old software or machines that have not been patched, you probably assume there are vulnerabilities or weaknesses there if you search hard enough. When you abstract this principle to nature, when things remain static for long periods of time, they tend to decay or develop weaknesses. This principle shows us many things. One is that systems become deprecated over time and require resources to maintain them securely. Another is that timing can be key to the strategies of both offense and defense. Ultimately, our principle of time states that with enough time anything can be hacked, and any defense can be overcome. Thus, security exists within the bounds that it is supported and well resourced, and that is only ever for a limited time.
Sometimes, as a defender, you may want to wait and watch the attacker, letting them take action before you contain or evict them, to help understand their motives and targets. Once you identify an adversary on your system, their time is limited. At this point you can hunt the attacker down and evict them; however, using the element of surprise or deception to your advantage can make the eviction much cleaner. Unless the attacker can hide again, or there is some form of persistence or a compromised machine the defense has not yet identified, the attacker is essentially on a shot clock: a limited amount of time before their game is over. The defender must be very careful about the timing of when they let the attacker know they are aware of them; the defender must have fully understood the scope and depth of the compromise at this point to properly evict the attacker. We will go into this in more detail in Chapter 8, Clearing the Field. Monitoring your opponent allows you to reverse engineer their implants or exploit their sloppy work, potentially gaining attribution or insight into the full scope of the compromise. Rather than playing whack-a-mole with compromises around your network, it can be better to evict an experienced adversary all at once. This delicate balance of timing with eviction is a strategy we will cover from a defensive perspective. It can also help to have advance intelligence in these situations; understanding the motives and timeline of your threat can help you determine whether you have the luxury of monitoring the threat or, as with ransomware, need to respond immediately.
The principle of time also relates to the principle of humanity when you start talking about employee schedules. You can likely see the defending teams come online at a certain time of day and perform the bulk of their analysis during this time. Similarly, the attackers will likely have regular hours of operation, something that has been used to geolocate and attribute hacking groups in the past. APT28, otherwise known as Fancy Bear or Sofacy, was famously attributed in part because of the times at which they operated and compiled their software. In several reports, we can see that compilation times fall specifically within normal business hours in Russian time zones[43]. We can see how the principle of time relates to the principle of economy when we look at the cost of an incident. The longer an incident goes on, the more costly it is for both the attacker and the defender.
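The compile-timestamp evidence behind that attribution comes straight out of the Windows PE file format: the COFF header carries a TimeDateStamp field recording when the binary was built. A minimal parsing sketch follows; the offsets come from the PE format itself, while the business-hours heuristic is an illustrative assumption of how an analyst might bucket the results:

```python
import datetime
import struct

# Sketch: pull the compile timestamp from a PE file's COFF header.
# Per the PE format, e_lfanew at offset 0x3C points to the "PE\0\0"
# signature, and TimeDateStamp sits 8 bytes past that signature.
def pe_compile_time(data: bytes) -> datetime.datetime:
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    (timestamp,) = struct.unpack_from("<I", data, e_lfanew + 8)
    return datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc)

def in_business_hours(ts: datetime.datetime, utc_offset_hours: int) -> bool:
    """Illustrative heuristic: does the build fall in 9-to-6 local time?"""
    local = ts + datetime.timedelta(hours=utc_offset_hours)
    return 9 <= local.hour < 18
```

Run across a malware family's samples, a clustering of timestamps inside one timezone's working hours is the kind of circumstantial signal cited in those reports, though timestamps can of course be forged.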
The attacker needs to turn operations around in a reasonable time frame, normally a month to several months, to show progress and make a profit. The defender's costs can rack up very quickly if they bring in external consultants, which is done very often. If external consultants are brought into the defender's environment, that will essentially put the response on a very expensive and probably short timeline to remediate. Often, if an attacker can outlive this remediation, the incident response effort may be called off due to the resources already spent. Here, we can see the principle of time and the principle of economy at play together.
Other times, you will probably want things automated. The blinding speed of automation will best any hacker in terms of executing commands on a computer. For example, if you need to kick someone off the same computer you are on, it can help to have some of the processes automated, such as lockout and account deactivation scripts. This applies to both sides, but if you find your team constantly doing the same manual operations, you should consider automating them into a tool. Tool development is an upfront cost in time, but you will reap the benefits of automating the technique in execution quality, speed, and accuracy moving forward. This theme of upfront costs on tool development to save on operational speed is something we will revisit throughout this text as well. Dmitry Alperovitch, formerly of CrowdStrike, famously spoke of a 1/10/60 rule in terms of the time in which a defensive team should be able to detect, respond to, and contain a threat, respectively. To reach such best-in-class speeds, teams will certainly need automated logging, operations, and response capabilities. Dmitry also spoke of breakout time, or the average time it takes for a threat to move laterally to another machine after compromising their initial computer system[44]. In CrowdStrike's 2019 Global Threat Report, they went on to release several average breakout times for large adversary categories, such as 18 minutes and 29 seconds for bears, or Russian actors, and up to 9 hours and 42 minutes for spiders, or organized criminal groups[45]. On the national CCDC red team, we had an average breakout time of less than two minutes for the 2020 season. I attribute this lightning-fast breakout time to our planning and automation around our chosen strategy or kill chain. This speed bolsters our efficiency in carrying out our goal of persisting early, ubiquitously, and hopefully undetected.
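A lockout script of the kind mentioned above can be sketched as a short wrapper around standard Linux account-management commands. The particular sequence (lock the password, expire the account, kill live sessions) is one reasonable flow under the assumption of a Linux host with root access, not a prescribed playbook:

```python
import subprocess

# Sketch: automated account lockout for a Linux host, the kind of
# pre-built response script that acts faster than hands on a keyboard.
# Assumes root privileges; the step order is an illustrative choice.
def build_lockout_commands(user: str) -> list[list[str]]:
    return [
        ["usermod", "--lock", user],             # disable password auth
        ["usermod", "--expiredate", "1", user],  # expire the account entirely
        ["pkill", "-KILL", "-u", user],          # terminate any live sessions
    ]

def lock_out(user: str, dry_run: bool = True) -> list[list[str]]:
    """Run (or with dry_run=True, just return) the lockout sequence."""
    cmds = build_lockout_commands(user)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=False)  # best-effort; log failures in practice
    return cmds
```

Keeping the command list in a pure function makes the script easy to review and test before the moment it is needed, which is exactly when you do not want to be debugging it.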