Other best practices to remember are as follows:
Knowing the system is perhaps the most critical factor when attempting to defend it. It makes no difference whether you’re protecting a castle or a Linux server if you don’t understand the intricacies of what you’re defending.
Knowledge of what software is running on your systems is an excellent illustration of this in information security. What daemons do you have running? What kind of exposure do they create? A decent self-test for someone in a small- to medium-sized environment is to choose an IP address at random from a list of your systems and see whether you can recall the precise list of ports that are open on that machine.
A skilled administrator should be able to say, “It’s a web server, so the only things listening are ports 80 and 443, plus 22 for remote management; that’s it,” and so on for each type of server in the ecosystem. When reviewing port scan results, there should be no surprises.
In this kind of test, you don’t want to hear, “Wow, what’s that port?” Having to ask that question indicates that the system administrator is not entirely aware of everything operating on the computer in question, which is exactly what we want to prevent.
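If you want to automate that self-test, a few lines of Python are enough. The sketch below assumes a hypothetical host (a TEST-NET placeholder address) and an expected port set of 22, 80, and 443; it probes the well-known ports and reports anything that doesn't match what you expected to find.

```python
#!/usr/bin/env python3
"""Minimal sketch of the self-test described above: compare the ports that
actually answer on a host against the list you *expect* to be open.
The host and expected ports below are placeholders for illustration."""

import socket

HOST = "192.0.2.10"              # hypothetical web server (TEST-NET address)
EXPECTED_OPEN = {22, 80, 443}    # what a "no surprises" admin expects
PORTS_TO_PROBE = range(1, 1025)  # well-known ports only, to keep the scan short

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    actually_open = {p for p in PORTS_TO_PROBE if is_open(HOST, p)}
    surprises = actually_open - EXPECTED_OPEN
    missing = EXPECTED_OPEN - actually_open
    print(f"Open ports found: {sorted(actually_open)}")
    if surprises:
        print(f"Unexpected ports (investigate!): {sorted(surprises)}")
    if missing:
        print(f"Expected but not listening: {sorted(missing)}")
    if not surprises and not missing:
        print("No surprises: the host matches what you expected.")
```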
The next crucial principle is least privilege, which simply states that people and processes should be able to do only what they need in order to perform their tasks. Backups are a classic example: administrators frequently configure automated processes that must be able to perform specific activities. What usually happens is that the administrator adds the account performing the backup to the domain admins group, even though they could get it to work another way. Why? Because it is less difficult.
Ultimately, this is a principle that runs directly against human nature, namely laziness. It is always more difficult to grant granular access that allows only specific tasks than it is to grant a higher level of access that covers everything that needs to be done.
This rule of least privilege just reminds us not to succumb to this temptation. Don’t back down. Take the time to make all access as granular and as minimal as feasible.
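As a concrete illustration, here is a minimal Python sketch of the kind of audit that keeps this temptation in check on a Linux host. The service account names ("backup-svc", "deploy-svc") and the list of privileged groups are assumptions for the example; the point is simply to flag any automation account that has been dropped into a broad administrative group instead of being given granular rights.

```python
"""A sketch of a least-privilege audit on a Linux host, assuming hypothetical
service accounts such as 'backup-svc'. It flags any service account that is a
member of groups commonly used for broad administrative access."""

import grp
import pwd

SERVICE_ACCOUNTS = ["backup-svc", "deploy-svc"]   # hypothetical names
PRIVILEGED_GROUPS = {"sudo", "wheel", "root", "admin"}

def groups_of(user: str) -> set[str]:
    """Return all group names the user belongs to (primary + supplementary)."""
    primary = grp.getgrgid(pwd.getpwnam(user).pw_gid).gr_name
    supplementary = {g.gr_name for g in grp.getgrall() if user in g.gr_mem}
    return supplementary | {primary}

if __name__ == "__main__":
    for account in SERVICE_ACCOUNTS:
        try:
            excessive = groups_of(account) & PRIVILEGED_GROUPS
        except KeyError:
            print(f"{account}: account not found, skipping")
            continue
        if excessive:
            print(f"{account}: member of privileged group(s) {sorted(excessive)} "
                  f"- grant granular rights instead")
        else:
            print(f"{account}: no privileged group membership, good")
```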
Defense in depth is likely the least understood of the four concepts. Many people believe it is as simple as stacking three firewalls instead of one or running two antivirus applications instead of one. Those measures technically qualify, but they miss the fundamental nature of defense in depth.
The true concept is to build various types of protection between an attacker and an asset. These layers don’t even have to be products; they might be applications of other notions, such as least privilege.
Consider an attacker on the internet attempting to breach a web server in the Demilitarized Zone (DMZ; basically a physical or logical subnetwork that contains and exposes an entity’s external-facing services). Given a huge vulnerability, this may be quite simple, but with an infrastructure utilizing defense in depth, it may be substantially more difficult.
We need to consider activities such as hardening (routers, firewalls, IPSs/IDSs, and the target hosts themselves) and the widespread deployment of antivirus and antimalware; any of these measures can prevent an attack from being totally or partially successful. The idea is that instead of thinking about what must be put in place to stop an attack, we should think about everything that must happen for it to succeed. Perhaps the attack has to pass through the network infrastructure to reach the host, execute there, establish an outbound connection to an external host, download a payload, run it, and so on.
What if any of those steps were to fail? The key to defense in depth is to place barriers at as many points as possible: make it hard for intruders to get into the network in the first place, make it hard for hostile code to run on your systems, run your daemons and services as the least-privileged user you can, and so forth.
The advantage is straightforward: you have more chances to stop an attack from succeeding. Someone may get all the way in, all the way to the box in question, and be stopped because the malicious code will not run on the host. Once that code is modified so that it can run, it may still be caught by an updated IPS or blocked by a more stringent firewall ACL. The goal is to secure whatever you can at every stage. Secure everything: file permissions, stack protection, ACLs, host-based IPSs, limited admin access, services running as restricted users; the list is endless.
The core premise is also straightforward: don't rely on a single solution to protect your assets. Treat each layer of your defense as though it were the only one. When you follow this approach, you have a better chance of stopping attacks before they reach their target.
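The following toy sketch is not a security product; it only models the reasoning above. A hypothetical attack chain is broken into steps, and several independent layers (a firewall ACL, a host IPS, least privilege) each get a chance to block a step; the attack succeeds only if every step passes every layer. All layer behavior and step names are invented for illustration.

```python
"""A conceptual sketch of defense in depth: an attack is modeled as a chain of
steps, and each independent layer gets a chance to break the chain."""

from typing import Callable

# Each layer inspects one step of the hypothetical attack chain and returns
# True if it lets the step through, False if it blocks it.
def firewall_acl(step: str) -> bool:
    return step != "outbound connection to unknown host"

def host_ips(step: str) -> bool:
    return step != "execute downloaded payload"

def least_privilege(step: str) -> bool:
    return step != "write to system directory"

LAYERS: list[Callable[[str], bool]] = [firewall_acl, host_ips, least_privilege]

ATTACK_CHAIN = [
    "reach host through the network",
    "exploit the service",
    "outbound connection to unknown host",
    "execute downloaded payload",
    "write to system directory",
]

def attack_succeeds(chain: list[str]) -> bool:
    """The attack only succeeds if *every* step passes *every* layer."""
    for step in chain:
        for layer in LAYERS:
            if not layer(step):
                print(f"Blocked at step '{step}' by {layer.__name__}")
                return False
    return True

if __name__ == "__main__":
    print("Attack succeeded!" if attack_succeeds(ATTACK_CHAIN) else "Attack stopped.")
```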
In IT security (and in ISO 27001 itself, a framework that is continuously evolving), new concepts arise almost every year, and one of the most interesting of them is zero trust.
Conventional security models are based on perimeter security: in practice, the protection of the corporate ecosystem implicitly trusts all traffic and activity flowing within the perimeter.
The zero trust approach, on the other hand, is designed to address the so-called lateral threats that move within networks. How? Through microsegmentation and the definition of granular perimeters based on users, data, and their location.
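To make the idea concrete, here is a minimal sketch of zero-trust-style policy evaluation in Python: every request is evaluated against granular rules combining user, resource (microsegment), and location, and anything not explicitly allowed is denied, even if it originates "inside" the network. The users, segments, and rules are invented for illustration.

```python
"""A minimal sketch of zero-trust-style policy evaluation: default deny, with
granular allow rules keyed on user, resource, and location."""

from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    resource: str      # e.g. a microsegment such as "hr-database"
    location: str      # e.g. "office-lan", "vpn", "unknown"

# Granular allow-list: (user, resource, location) tuples that are permitted.
ALLOW_RULES = {
    ("alice", "hr-database", "office-lan"),
    ("alice", "hr-database", "vpn"),
    ("bob",   "web-frontend", "office-lan"),
}

def is_allowed(req: Request) -> bool:
    """Default deny: only explicitly listed combinations pass."""
    return (req.user, req.resource, req.location) in ALLOW_RULES

if __name__ == "__main__":
    requests = [
        Request("alice", "hr-database", "vpn"),        # allowed
        Request("bob",   "hr-database", "office-lan"), # lateral move: denied
    ]
    for r in requests:
        verdict = "ALLOW" if is_allowed(r) else "DENY"
        print(f"{verdict}: {r.user} -> {r.resource} from {r.location}")
```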
- Prevention is preferable, but detection is required.
This is a basic concept, yet it is incredibly significant. The idea is that while it is preferable to stop an attack before it succeeds, if it does succeed, it is critical that you know it occurred. For example, you may have safeguards in place to prevent code from being executed on your system, but if code does execute and something is done, it is vital that you are notified and can act immediately.
The difference between learning about a successful attack within 5 or 10 minutes and learning about it weeks later is enormous. Having the knowledge early enough can often result in the attack not being successful at all; for example, the attacker may get on your box and add a user account, but you get to the machine and take it offline before they can do anything with it.
Regardless of the situation, detection is critical because there is no guarantee that your prevention actions will be effective.
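As a small example of the detection side, the sketch below watches /etc/passwd for newly created accounts and raises an alert as soon as one appears, echoing the scenario above of an attacker adding a user. Printing to the console is a stand-in for whatever alerting channel (SIEM, e-mail, pager) you actually use, and the polling interval is an arbitrary choice.

```python
"""A small detection sketch: watch /etc/passwd for newly added accounts and
raise an alert as soon as one appears."""

import time
from pathlib import Path

PASSWD = Path("/etc/passwd")
POLL_SECONDS = 10

def current_users() -> set[str]:
    """Return the set of usernames currently defined in /etc/passwd."""
    return {line.split(":", 1)[0] for line in PASSWD.read_text().splitlines() if line}

if __name__ == "__main__":
    known = current_users()
    print(f"Baseline: {len(known)} accounts")
    while True:
        time.sleep(POLL_SECONDS)
        now = current_users()
        added = now - known
        if added:
            # Stand-in for a real alert (SIEM event, e-mail, pager...)
            print(f"ALERT: new account(s) created: {sorted(added)}")
        known = now
```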
Other noteworthy best practices are as follows:
- Protection and utility must be balanced.
Computers in a workplace could be entirely safeguarded if all networks were removed and all users were locked out, but then the machines would be of no use to anyone.
- Determine your vulnerabilities and make a plan.
Not all of your resources are equally valuable. Some data is more vital than the rest, such as a database holding all of your clients' accounting information: bank IDs, social security numbers, addresses, and other personal details (we'll talk about privacy later). Identifying which data is more sensitive and/or significant will help you determine the level of protection required to safeguard it and design your security tactics accordingly.
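A rough sketch of that triage might look like the following, where each asset is assigned a sensitivity tier and each tier maps to a minimum set of controls. The assets, tiers, and controls listed are purely illustrative assumptions; the point is to make the prioritization explicit rather than implicit.

```python
"""A sketch of the 'identify what matters most' step: rank assets by the
sensitivity of the data they hold and map each tier to minimum controls."""

ASSETS = {
    "client-accounting-db": "high",    # bank IDs, SSNs, addresses
    "internal-wiki":        "medium",
    "public-website":       "low",
}

CONTROLS_BY_TIER = {
    "high":   ["encryption at rest", "MFA", "strict ACLs", "audited access"],
    "medium": ["MFA", "ACLs"],
    "low":    ["baseline hardening"],
}

if __name__ == "__main__":
    for asset, tier in ASSETS.items():
        controls = ", ".join(CONTROLS_BY_TIER[tier])
        print(f"{asset} [{tier}]: requires {controls}")
```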
- Use uncorrelated defenses.
Using a single strong protection mechanism, such as an authentication protocol, is only effective until it is breached. When numerous layers of separate defenses are used, an attacker must apply a variety of tactics to get past them.
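One everyday instance of uncorrelated defenses is multi-factor authentication, sketched below: a login must pass two independent checks, a password hash comparison and a time-based one-time code (TOTP, per RFC 6238), so breaching either mechanism alone is not enough. The password, the shared secret, and the use of a bare SHA-256 hash are simplifications for the example, not a production recipe.

```python
"""A sketch of 'uncorrelated defenses' applied to authentication: two
independent checks must both pass before access is granted."""

import base64
import hashlib
import hmac
import struct
import time

# Factor 1: something the user knows (stored as a salted hash in real systems;
# a plain SHA-256 is used here only to keep the sketch short).
PASSWORD_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()

# Factor 2: something the user has (shared TOTP secret, base32 placeholder).
TOTP_SECRET = "JBSWY3DPEHPK3PXP"

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Standard TOTP (RFC 6238) over HMAC-SHA1."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def login(password: str, otp: str) -> bool:
    """Both independent checks must pass; failing either one denies access."""
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), PASSWORD_HASH)
    otp_ok = hmac.compare_digest(otp, totp(TOTP_SECRET))
    return password_ok and otp_ok

if __name__ == "__main__":
    print("access granted" if login("correct horse battery staple",
                                    totp(TOTP_SECRET)) else "access denied")
```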
Because the causes of breaches aren’t always obvious after the fact, it’s critical to have data to track backward. Even if it doesn’t make sense at first, data from breaches will eventually help to improve the system and prevent future attacks.
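A minimal way to start collecting that kind of data is an append-only, timestamped, structured event log, as in the sketch below. The event types, fields, and file path are assumptions for illustration; the important property is that each record is written as one parseable line that can be correlated after the fact.

```python
"""A sketch of keeping data you can 'track backward' with after a breach:
append-only, timestamped, structured records of security-relevant events."""

import json
import time
from pathlib import Path

AUDIT_LOG = Path("/var/log/security-audit.jsonl")  # hypothetical location

def record_event(event_type: str, **details) -> None:
    """Append one JSON line per event so it can be parsed later by any tool."""
    entry = {"ts": time.time(), "type": event_type, **details}
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record_event("login_failure", user="alice", source_ip="203.0.113.7")
    record_event("acl_change", rule="allow 22/tcp", changed_by="bob")
```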
Hackers are constantly honing their skills, so information security must evolve to keep up. IT professionals should run tests, conduct risk assessments, reread the disaster recovery plan, double-check the business continuity plan in the event of an attack, and then repeat the process.
IT security is a difficult job that requires both attention to detail and a high level of awareness. However, like many seemingly complex tasks, IT security can be broken down into basic steps that can simplify the process. That’s not to say it’s easy, but it keeps IT professionals on their toes.
So, while we have seen some ways to improve your security posture, I am afraid to tell you that we have only seen the tip of a huge iceberg. Although hardening is a very important topic and everyone dealing with security should at least understand these basic concepts, an entity is a bit more than a document with a plethora of how-tos.
An entity is the sum of many elements: core values, community, respect, and many more, but also ethics, risk management, compliance, and administration, which together form the governance of a company.