Troubleshooting is a process that is both rigid and flexible. It is rigid in that there are basic steps to follow; in this way, I like to equate the troubleshooting process to the scientific method, which also has a specific list of steps that must be followed.
The flexibility of the troubleshooting process is that these steps can be followed in whatever order makes sense. Unlike the scientific method, the troubleshooting process often has the goal of resolving the issue quickly. Sometimes, in order to resolve an issue quickly, you might need to skip a step or execute steps out of order. For example, with the troubleshooting process, you might need to resolve the immediate issue first and identify the root cause of that issue afterwards.
The following list has five steps that make up the troubleshooting process. Each of these steps could also include several sub-tasks, which may or may not be relevant to the issue. It is important to take these steps with a grain of salt, as not every issue fits into the same bucket. They are meant to be a best practice but, as with all things, should be adapted to the issue at hand:
- Understanding the problem statement.
- Establishing a hypothesis.
- Trial and error.
- Getting help.
- Documentation.
Understanding the problem statement
With the scientific method, the first step is to establish a problem statement, which is another way of saying: to identify and understand the goal of the experiment. With the troubleshooting process, the first step is to understand the problem being reported. The better we understand an issue, the easier it is to resolve the issue.
There are a number of tasks we can perform that will help us understand issues better. This first step is where a Data Collector's personality stands out. Data Collectors, by nature, will gather as much data as they can before moving on to the next step, whereas Educated Guessers generally tend to run through this step quickly and move on, which can sometimes cause critical pieces of information to be missed.
Adaptors tend to understand which data collecting steps are necessary and which ones are not. This allows them to collect data as a Data Collector would, but without spending time gathering data that does not add value to the issue at hand.
The first sub-task in this troubleshooting step is asking the right questions.
Whether the issue is reported by a person directly or through an automated process such as a ticketing system, the reporter of the issue is often a great source of information.
When they receive a ticket, the Educated Guesser personality will often read the heading of the ticket, make an assumption about the issue, and move to the next stage of understanding the issue. The Data Collector personality will generally open the ticket and read the full details.
While it depends on the ticketing and monitoring system, there is generally useful information within a ticket. Unless the issue is a common one and the header alone tells you everything you need to know, it is a good idea to read the full ticket description. Even small amounts of information might help with particularly tricky issues.
Gathering additional information from humans, however, can be inconsistent. This varies greatly depending on the environment being supported. In some environments, the person reporting an issue can provide all of the details required to resolve the issue. In other environments, they might not understand the issue and simply explain the symptoms.
No matter which troubleshooting style fits your personality best, being able to get important information from the person reporting the issue is an essential skill. Intuitive problem solvers such as the Educated Guesser or Adaptor tend to find this process easier than Data Collectors do, not because they are necessarily better at obtaining details from people, but because they are able to identify patterns with less information. Data Collectors, however, can get the information they need from those reporting the issue as long as they are prepared to ask the right troubleshooting questions.
Note
Don't be afraid to ask obvious questions
My first technical job was in a webhosting technical support call center. There I often received calls from users who did not want to perform the basic troubleshooting steps and simply wanted the issue escalated. These users felt that they had performed all of the troubleshooting steps themselves and had found an issue beyond first level support.
While sometimes this was true, more often the issue was something basic that they had overlooked. In that role, I quickly learned that even if the user is reluctant to answer basic or obvious questions, at the end of the day, they simply want their issue resolved. If that meant going through repetitive steps, that was OK, as long as the issue was resolved.
Even today, as I am now the escalation point for senior engineers, I find that many times engineers (even those with years of troubleshooting experience under their belt) overlook simple, basic steps.
Asking questions that might seem basic can be a great time saver, so don't be afraid to ask them.
Attempting to duplicate the issue
One of the best ways to gather information and understand an issue is to experience it. When an issue is reported, it is best to try to duplicate it.
While users can be a source of a lot of information, they are not always the most reliable; oftentimes a user might experience an error and overlook it or simply forget to relay the error when reporting the issue.
Often, one of the first questions I will ask a user is how to recreate the issue. If the user is able to provide this information, I will be able to see any errors and often identify the resolution of the issue faster.
Note
Sometimes duplicating the issue is not possible
While it is always best to duplicate the issue, it is not always possible. Every day, I work with many teams; sometimes, those teams are within the company but many times they are external vendors. Every so often during a critical issue, I will see someone make a blanket statement such as "If we can't duplicate it, we cannot troubleshoot it."
While it is true that duplicating an issue is sometimes the only way to find the root cause, I often hear this statement abused. Duplicating an issue should be viewed like a tool; it is simply one of many tools in your troubleshooting tool belt. If it is not available, then you simply have to make do with another tool.
There is a significant difference between not being able to find a resolution and not attempting to find a resolution due to the inability to duplicate an issue. The latter is not only unhelpful, but also unprofessional.
Running investigatory commands
Most likely, you are reading this book to learn techniques and commands to troubleshoot Red Hat Enterprise Linux systems. The third sub-task in understanding the problem statement is just that: running investigatory commands to identify the cause of the issue. Before executing investigatory commands, however, it is important to recognize that the previous steps were listed in a logical order.
It is a best practice to first ask the user reporting an issue some basic details of the issue, then after obtaining enough information, duplicate the issue. Once the issue has been duplicated, the next logical step is to run the necessary commands to troubleshoot and investigate the cause of the issue.
It is very common to find yourself returning to previous steps during the troubleshooting process. After you have identified some key errors, you might find that you must ask the original reporter for additional information. When troubleshooting, do not be afraid to take a few steps backwards in order to gain clarity of the issue at hand.
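Which commands are necessary depends entirely on the issue at hand; however, a first pass on a typical Red Hat Enterprise Linux system often looks something like the following (an illustrative sketch, not a required checklist):

```
# df -h                          # check filesystem usage
# free -m                        # check memory usage
# uptime                         # check load average and how long the system has been up
# tail -n 50 /var/log/messages   # look for recent errors in the system log
```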
Establishing a hypothesis
With the scientific method, once a problem statement has been formulated, it is time to establish a hypothesis. With the troubleshooting process, after you have identified the issue and gathered information about it, such as errors, the system's current state, and so on, it is likewise time to establish what you believe caused or is causing the issue.
Some issues, however, might not require much of a hypothesis. It is common for errors in log files or the system's current state to answer why the issue occurred. In such scenarios, you can simply resolve the issue and move on to the Documentation step.
For issues that are not so cut and dried, you will need to put together a hypothesis of the root cause. This is necessary because the next step after forming a hypothesis is attempting to resolve the issue, and it is difficult to resolve an issue if you do not have at least a theory of its root cause.
Here are a few techniques that can be used to help form a hypothesis.
Putting together patterns
While performing data collection during the previous steps, you might start to see patterns. Patterns can be something as simple as similar log entries across multiple services, the type of failure that occurred (such as multiple services going offline), or even a reoccurring spike in system resource utilization.
These patterns can be used to formulate a theory of the issue. To drive the point home, let's go through a real-world scenario.
You are managing a server that both runs a web application and receives e-mails. You have a monitoring system that detected an error with the web service and created a ticket. While investigating the ticket, you also receive a call from an e-mail user stating they are getting e-mail bounce backs.
When you ask the user to read the error to you, they mention "No space left on device".
Let's break down this scenario:
- A ticket from our monitoring solution has told us Apache is down
- We have also received reports from e-mail users with errors indicative of a file system being full
Could all of this mean that Apache is down because the file system is full? Possibly. Should we investigate it? Absolutely!
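A quick way to test this theory, assuming a RHEL 7 style system where Apache runs as the httpd service, would be something like the following:

```
# df -h                    # is any filesystem at 100% utilization?
# df -i                    # inode exhaustion also produces "No space left on device"
# systemctl status httpd   # confirm the current state of the Apache service
```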
Is this something that I've encountered before?
The preceding breakdown leads into the next technique for forming a hypothesis. It might sound simple, but it is often forgotten: "Have I seen something like this before?"
With the previous scenario, the error reported from the e-mail bounce back was one that generally indicated that a file system was full. How do we know this? Well, simple, we have seen it before. Maybe we have seen that same error with e-mail bounce backs or maybe we have seen the error from other services. The point is, the error is familiar and the error generally means one thing.
Remembering common errors can be extremely useful for intuitive types such as the Educated Guesser and Adaptor; this is something they tend to do naturally. For the Data Collector, a useful trick is to keep a reference table of common errors handy.
Tip
From my experience, most Data Collectors tend to keep a set of notes that contains things such as common commands or steps for procedures. Adding common errors and the meaning behind those errors is a great way for systematic thinkers such as Data Collectors to establish a hypothesis faster.
Overall, establishing a hypothesis is important for all types of troubleshooters. This is the area where the intuitive thinkers such as Educated Guessers and Adaptors excel. Generally, those types of troubleshooters will form a hypothesis sooner, even if sometimes those hypotheses are not always correct.
Trial and error
In the scientific method, once a hypothesis is formed, the next stage is experimentation. With troubleshooting, this equates to attempting to resolve the issue.
Some issues are simple and can be resolved using a standard procedure or steps from experience. Other issues, however, are not as simple. Sometimes, the hypothesis turns out to be wrong or the issue ends up being more complicated than initially thought.
In such cases, it might take multiple attempts to resolve the issue. I personally like to think of this as similar to trial and error. In general, you might have an idea of what is wrong (the hypothesis) and an idea on how to resolve it. You attempt to resolve it (trial), and if that doesn't work (error), you move on to the next possible solution.
Start by creating a backup
To those taking up a new role as a Linux Systems Administrator, if there were only one piece of advice I could give, it would be one that most have learned the hard way: back everything up before making changes.
Many times as systems administrators we find ourselves needing to change a configuration file or delete a few unneeded files in order to solve an issue. Unfortunately, we might think we know what needs to be removed or changed but are not always correct.
If a backup was taken, the file can simply be restored to its previous state; without a backup, however, reverting changes is not as easy.
A backup can consist of many things: it can be a full system backup using something like rdiff-backup, a VM snapshot, or something as simple as creating a copy of a file.
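For configuration file changes, something as simple as the following is often enough (the file shown here is only an example):

```
# cp -p /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.$(date +%Y%m%d)
```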
Tip
For those interested in seeing the extent of this tip in practice, simply run the following command on any server that has more than four systems administrators and has been around for several years:
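(The exact command is only an illustration; any search for leftover copies of configuration files makes the point.)

```
# find /etc \( -name "*.bak" -o -name "*.old" -o -name "*.orig" \) -ls
```

Chances are the results will include more than a few forgotten backup copies of configuration files.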
Getting help
In many cases, at this point the issue is resolved; but, much like each step in the troubleshooting process, it depends on the issue at hand. While getting help is not exactly a troubleshooting step, it is often the next logical step if you cannot solve the issue on your own.
When looking for help, there are generally six resources available:
- Books
- Team Wikis or Runbooks
- Google
- Man pages
- Red Hat kernel docs
- People
Books (such as this one) are good for referencing commands or troubleshooting steps for particular types of issues. Other books, such as those that specialize in a specific technology, are good for referencing how that technology works. In previous years, it was not uncommon to see a senior admin with a bookshelf full of technical books at his or her disposal.
In today's world, as books are more frequently seen in a digital format, they are even easier to use as references. The digital format makes them searchable and allows readers to find specific sections faster than a traditional printed version.
Before Team Wikis became common, many operations groups had physical books called Runbooks. These books are a list of processes and procedures used daily by the operations team to keep the production environments operating normally. Sometimes, these Runbooks would contain information for provisioning new servers and sometimes they would be dedicated to troubleshooting.
In today's world, these Runbooks have mostly been replaced by Team Wikis; these Wikis often have the same content but are online. They also tend to be searchable and easier to keep up to date, which means they are frequently more relevant than a traditional printed Runbook.
The benefit of Team Wikis and Runbooks is that they can address, and document how to resolve, issues that are specific to your environment. There are many ways to configure services such as Apache, and there are even more ways that external systems create dependencies on those services.
In some environments, you might be able to simply restart Apache whenever there is an issue, but in others, you might actually have to go through several prerequisite steps. If there is a specific process that needs to be followed before restarting a service, it is a best practice to document the process in either a Team Wiki or Runbook.
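For example, a Wiki or Runbook entry for restarting Apache in such an environment might boil down to a handful of documented commands; the prerequisite steps below are purely illustrative:

```
# apachectl configtest          # 1. verify the configuration is valid before restarting
# systemctl stop httpd          # 2. stop the service
# systemctl start httpd         # 3. start the service
# curl -I http://localhost/     # 4. confirm the application responds after the restart
```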
Google is such a common tool for systems administrators that at one point there were specific search portals available at google.com/linux, google.com/microsoft, google.com/mac, and google.com/bsd.
Google has deprecated these search portals, but that doesn't mean systems administrators use Google or other search engines for troubleshooting any less.
In fact, in today's world, it is not uncommon to hear the words "I would Google it" in technical interviews.
A few tips for those new to using Google for systems administration tasks are:
- If you copy and paste a full error message (removing the server specific text) you will likely find more relevant results:
For example, searching for kdumpctl: No memory reserved for crash kernel returns 600 results, whereas searching for memory reserved for crash kernel returns 449,000 results.
- You can find an online version of any man page by searching for man followed by a command name, such as man netstat.
- You can wrap an error in double quotes to refine search results to those that contain the same error.
- Asking what you're looking for in the form of a question usually results in tutorials. For example, How do you restart Apache on RHEL 7?
While Google can be a great resource, the results should always be taken with a grain of salt. Often while searching for an error on Google, you might find a suggested command that offers little explanation but simply says "run this and it will fix it". Be very cautious about running these commands; any command you execute on a system should be one you are familiar with, and you should always know what a command does before executing it.
When Google is not available, or even sometimes when it is, the best source of information on commands or Linux in general is the man pages. The man pages are the core Linux manual documents, accessible via the man command.
To look up documentation for the netstat command, for example, simply run the following:
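```
$ man netstat
```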
As you can see, this command outputs not only information on what the netstat command is, but also a quick synopsis of usage information, such as the following:
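The following is an abbreviated excerpt; the exact wording depends on the version of the net-tools package installed:

```
NAME
       netstat - Print network connections, routing tables, interface
       statistics, masquerade connections, and multicast memberships

SYNOPSIS
       netstat [address_family_options] [--tcp|-t] [--udp|-u] [--raw|-w]
               [--listening|-l] [--all|-a] [--numeric|-n] [--program|-p] ...
```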
Also, it gives detailed descriptions of each flag and what it does:
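For example (again abbreviated):

```
   -r, --route
          Display the kernel routing tables.

   -i, --interfaces
          Display a table of all network interfaces.

   -a, --all
          Show both listening and non-listening sockets.
```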
In general, the base manual pages for the core system and libraries are distributed with the man-pages package. The man pages for specific commands such as top, netstat, or ps are distributed as part of that command's installation package. The reason for this is that the documentation of individual commands and components is left to the package maintainers.
This can mean that some commands are not documented to the level of others. In general, however, the man pages are extremely useful sources of information and can answer most day-to-day questions.
In the previous example, we can see that the man page for netstat includes a few sections of information. In general, man pages have a consistent layout, with some common sections that can be found within most man pages. The following is a simple list of some of these common sections:
- Name
- Synopsis
- Description
- Examples
The Name section generally contains the name of the command and a very brief description of the command. The following is the name section from the ps command's man page:
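```
NAME
       ps - report a snapshot of the current processes.
```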
The Synopsis section of a command's man page will generally list the command followed by the possible command flags or options. A very good example of this section can be seen in the netstat command's synopsis:
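The excerpt below is abbreviated; the full synopsis lists several more forms and options:

```
SYNOPSIS
       netstat [address_family_options] [--tcp|-t] [--udp|-u] [--raw|-w]
               [--listening|-l] [--all|-a] [--numeric|-n] [--program|-p]
               [--verbose|-v] [--continuous|-c]

       netstat {--route|-r} [address_family_options] [--extend|-e]
               [--verbose|-v] [--numeric|-n] [--continuous|-c]
```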
This section can be very useful as a quick reference for command syntax.
The Description section will often contain a longer description of the command, as well as a list and explanation of the various command options. The following snippet is from the cat command's man page:
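The following excerpt is abbreviated; wording varies slightly between coreutils versions:

```
DESCRIPTION
       Concatenate FILE(s), or standard input, to standard output.

       -A, --show-all
              equivalent to -vET

       -b, --number-nonblank
              number nonempty output lines, overrides -n
```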
The description section is very useful, since it goes beyond simply looking up options. This section is often where you will find documentation about the nuances of commands.
Often man pages will also include examples of using the command:
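(Excerpt; wording may vary slightly by coreutils version.)

```
EXAMPLES
       cat f - g
              Output f's contents, then standard input, then g's contents.

       cat    Copy standard input to standard output.
```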
The preceding is a snippet from the cat command's man page. We can see, in this example, how to use cat to read from files and standard input in one command.
This section is often where I find new ways of using commands that I've used many times before.
In addition to the previous sections, you might also see sections such as See Also, Files, Author, and History. These sections can also contain useful information; however, not every man page will have them.
Along with man pages, Linux systems generally also include info documentation, which is designed to contain additional documentation that goes beyond what is found in the man pages. Much like man pages, the info documentation for a command is included with that command's package, and the quality and quantity of the documentation can vary by package.
The method to invoke the info documentation is similar to man pages; simply execute the info command followed by the subject you wish to view:
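For example (ls is only an illustration; not every command ships info documentation):

```
$ info ls
```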
Referencing more than commands
In addition to using man pages and info documentation to look up commands, these tools can also be used to view documentation for other items, such as system calls or configuration files.
As an example, if you were to use man to search for the term signal, you would see the following:
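One way to do this is with the -k (apropos) flag; the output below is truncated and will vary by system:

```
$ man -k signal
signal (2)           - ANSI C signal handling
sigaction (2)        - examine and change a signal action
signal (7)           - overview of signals
...
```

From there, running man 7 signal displays the overview man page itself.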
Signal is a very important system call and a core concept of Linux. Knowing that it is possible to use the man and info commands to look up core Linux concepts and behaviors can be very useful during troubleshooting.
Red Hat Enterprise Linux based distributions generally include the man-pages package; if your system does not have the man-pages package installed, you can install it with the yum command:
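```
# yum install man-pages
```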
In addition to man pages, the Red Hat distribution also has a package called kernel-doc. This package contains quite a bit of information on how the internals of the system work.
The kernel documentation is a set of text files that are placed into /usr/share/doc/kernel-doc-<kernel-version>/ and are categorized by the topic they cover. This resource is quite useful for deeper troubleshooting, such as adjusting kernel tunables or understanding how ext4 filesystems utilize the journal.
By default, the kernel-doc package is not installed; however, it can be easily installed using the yum command:
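```
# yum install kernel-doc
```

Once installed, the available topics can be browsed directly; for example (the version directory will match your installed kernel):

```
# ls /usr/share/doc/kernel-doc-*/
```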
Whether it is a friend or a team leader, there is a certain etiquette to asking others for help. The following is a list of things that people tend to expect when asked to help solve an issue. When I am asked for help, I would expect you to:
- Try to resolve it yourself: When escalating an issue, it is always best to at least try to follow the Understanding the problem statement and Establishing a hypothesis steps of the troubleshooting process.
- Document what you've tried: Documentation is key to escalating issues or getting help. The better you document the steps tried and errors found, the faster it will be for others to identify and resolve the issue.
- Explain what you think the issue is and what was reported: When you escalate the issue, one of the first things to point out is your hypothesis. Often this can help expedite resolution by leading the next person to a possible solution without having to perform data collection activities.
- Mention whether anything else has happened to this system recently: Issues often come in pairs; it is important to highlight all factors of what is happening on the system or systems affected.
The preceding list, while not exhaustive, is important, as each of these key pieces of information can help the next person troubleshoot the issue effectively.
When escalating issues, it is always best to follow up with the other person to find out what they did and how they did it. This is important, as it shows the person you asked that you are willing to learn more, which many times will lead to them taking the time to explain how they identified and resolved the issue.
Interactions like these will give you more knowledge and help build your systems administration skills and experience.
Documentation
Documentation is a critical step in the troubleshooting process. At every step during the process, it is key to take notes and document the actions being performed. Why is it important to document? Mainly for three reasons:
- When escalating the issue, the more information you have written down the more you can pass on to another
- If the issue is a reoccurring issue, the documentation can be used to update a Team Wiki or Runbook
- If, in your environment, you perform Root Cause Analysis (RCA), all of this information will be required for the RCA
Depending on the environment, documentation can be anything from simple notes saved in a text file on a local system to notes required in a ticketing system. Each work environment is different, but a general rule is that there is no such thing as too much documentation.
For Data Collectors, this step is fairly natural, as most Data Collector personalities will generally keep quite a few notes for their own personal use anyway. For Educated Guessers, this step might seem unnecessary; however, for any issue that is reoccurring or needs to be escalated, documentation is critical.
What kind of information should be documented? The following list is a good starting point but as with most things in troubleshooting, it depends on the environment and the issue:
- The problem statement, as you understand it
- The hypothesis of what is causing the issue
- Data collected during the information gathering steps:
- Specific errors found
- Relevant system metrics (for example, CPU, Memory, and Disk utilization)
- Commands executed during the information gathering steps (within reason, it is not required to include every cd or ls command executed)
- Steps taken during attempts to resolve the issue, including specific commands executed
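As an illustration only, a hypothetical set of notes for the earlier web server scenario might look something like this:

```
Problem:    Monitoring reported Apache (httpd) down; an e-mail user reported
            bounce backs containing "No space left on device"
Hypothesis: A full filesystem is preventing both services from writing to disk
Data:       df -h output, relevant log excerpts, monitoring alert details
Actions:    Commands run to free space, service restart commands, verification
            that the monitoring alert cleared
```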
With the preceding items well documented, if the issue reoccurs, it is relatively simple to take the documentation and move it to a Team Wiki. The benefit to this is that a Wiki article can be used by other team members who need to resolve the same issue during reoccurrences.
One of the three reasons listed previously for documentation is to use the documentation during Root Cause Analysis, which leads to our next topic—Establishing a Root Cause Analysis.