Troubleshooting CentOS

Troubleshooting CentOS: A practical guide to troubleshooting the CentOS 7 community-based enterprise server

Jonathan Hobson

Chapter 1. Basics of Troubleshooting CentOS

CentOS, the Community Enterprise Operating System, is known to be a robust, stable, and trouble-free platform that is particularly well suited to the role of a server. Used by organizations of all sizes, CentOS can be found in many mission-critical environments the world over. However, as servers are expected to work on demand and without interruption, there will be times when a calm but firm hand is required to restore a service or to make some final adjustments to an existing application in order to ensure that a "working state" can be resumed as quickly as possible:

"The server has gone down and all hell is about to break loose."

In a less than perfect world, things can (and inevitably do) go wrong, but it is your overall understanding of CentOS 7 and the confidence it provides that will form the basis of your troubleshooting skills. Remember, troubleshooting is a process of investigation that ultimately leads to a diagnosis. All systems are different and every approach to the same situation can vary depending on the purpose of that system. So, with this in mind, it is important to realize that the premise of this book is not recipe-driven, but more about the tools that are used and the resources you will be expected to encounter and interact with.

In this chapter, we will:

  • Learn how to install some basic tools on CentOS
  • Discover how to gather hardware-based system information using lscpu and lspci
  • Learn more about the importance of dmesg and how it interacts with the kernel
  • Learn about the more common log files and how to affect their output
  • Learn how to manipulate files of any description using grep, tail, cat, less, truncate, and many more command-line tools

Installing some basic tools

During the course of this book, it is assumed that you will already have access to the basic tools associated with troubleshooting your server. Some of the more obscure tools will be mentioned and instructions will be given; however, for those who do not yet have access to the basic toolbox, as the root user you may want to begin by running the following command:

# yum groupinstall "Base" "Development Libraries" "Development Tools"

This action, if and when confirmed, will begin to download and install the common development tools, libraries, and base components of a CentOS server system. These groups also include the relevant utilities required by RPM, additional text editors, and the packages needed to compile custom software.

Note

The practice of installing these packages at the outset is optional and all of these packages can be installed individually (as and when required). However, in an environment where disaster recovery planning plays a vital role, it is worth ensuring that a server has everything in place before any issues arise.
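If you do prefer to keep the installation lean, the same idea can be applied piecemeal. As a small sketch only (the package names shown are common CentOS 7 examples, so confirm them against your own repositories), you can list the available groups and then install individual utilities as needed:

# yum grouplist
# yum install vim-enhanced net-tools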

So, having prepared the system with the necessary tools and utilities, we shall begin in earnest by taking a closer look at the hardware. To do this, it is recommended that you continue with root access to the system in question.

Gathering hardware information

As a matter of principle, most people tend to suggest that all system information can be categorized as either hardware- or software-based. This approach certainly serves to simplify things, but throughout the course of this chapter I will go some way to show that there are instances in which the interplay of both hardware and software can be the reason for the issues at hand.

So, before you begin troubleshooting a system, always consider that gathering information about that system is the recommended approach to gaining additional insight and familiarity. Look at it this way: the practice of gathering hardware information is not strictly required, but an investigation of this type may assist you in the search for an eventual diagnosis.

To begin, we will start by running a simple CPU-based hardware report with the following command:

# cat /proc/cpuinfo

As you will see, the purpose of this command is to output all information related to the CPU model, family, architecture, the cache, and much more. Reading /proc directly is a long-standing tradition, but the following command is generally considered better practice and is far easier to use:

# lscpu

This command will query the system and output all relevant information associated with the CPU in the following manner:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
...

On the other hand, rather than querying absolutely everything, you can specify criteria by using grep (a subject that we will return to a little later in this chapter) in order to obtain any pertinent information, like this:

# lscpu | grep op-mode

So, having done this and recorded the results for future reference, we will now continue our investigation by running a simple hardware report with the lspci command in the following way:

# lspci

The result of this command may output something similar to the following information:

00:00.0 Host bridge: Intel Corporation 82P965/G965 Memory Controller Hub (rev 02)
00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 02)
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
00:0a.0 PCI bridge: Digital Equipment Corporation DECchip 21150
00:0e.0 RAM memory: Red Hat, Inc Virtio memory balloon
00:1d.0 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #1 (rev 02)
00:1d.7 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev f2)
00:1f.0 ISA bridge: Intel Corporation 82801HB/HR (ICH8/R) LPC Interface Controller (rev 02)

The lspci command provides all the relevant information concerning the PCI devices of your server, which in turn, can be expanded by employing either the -v option or the alternative -vv / -vvv option(s), depending on the level of detail you require:

# lspci -v
# lspci -vv
# lspci -vvv

By default, the above commands will provide all the information you need to confirm whether or not a device is supported by any of the modules currently installed on your system. You should only need to do this when hardware upgrades have been implemented, when the system has just been installed, or if you are attempting to familiarize yourself with a new environment. However, in order to simplify this exercise even further, you will be glad to know that a "tree view" mode is also available. The purpose of this facility is to output the associated device IDs and show how these values relate to the relevant bus.

To do this, type the following command:

# lspci -t

As a troubleshooter, you will be aware that every device must maintain a unique identifier as CentOS, like all other operating systems, will use that identifier to bind a driver to that device. The lspci command works by scanning the /sys tree for all connected devices, which can also include the connection port, the device type, and class, to name but a few. Having done this, the lspci command will then consult /usr/share/hwdata/pci.ids to provide the human-readable entries it displays.
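If you would like to see those numeric vendor and device IDs alongside the human-readable names taken from pci.ids, the -nn option prints both together. As a hedged example, the following narrows the output to the Ethernet controller:

# lspci -nn | grep -i ethernet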

For example, you can display the kernel drivers/modules by typing the following lspci command with the -k option like this:

# lspci -k

Naturally, during any hardware-based troubleshooting investigation you will want to review the system logs for additional clues, but as we have seen, both the lscpu and lspci commands are particularly useful when attempting to discover more about the necessary hardware information present on your system.

You can learn more about these commands by reviewing the respective on-board manuals at any time:

$ man lscpu
$ man lspci

Meanwhile, if you want to practice more, a simple test would be to insert a USB thumb drive and to analyze the findings yourself by paying close attention to the enumeration found within /var/log/messages.

Note

Remember, if you do try this, you are looking at how the system reacted once the USB drive was inserted; you are not necessarily looking at the USB drive itself; the information about which can be obtained with lsusb.
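A simple way to run that exercise is to follow the system log in one terminal while you insert the drive, and then query the device itself afterwards. The following sketch should work on a default installation (note that lsusb is provided by the usbutils package, which you may need to install first):

# tail -f /var/log/messages
# lsusb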

On the other hand, in the same way that we can use grep with lscpu, if you are already feeling comfortable with this type of investigation, then you may like to know that you can also use grep with the lspci command to discover more about your RAID controller in the following way:

# lspci | grep -i raid

Now, I am sure you will not be surprised to learn that there are many more commands associated with obtaining hardware information. This includes (but is not limited to) lsmod, dmidecode, hdparm, df -h, or even lsblk and the many others that will be mentioned throughout the course of this book. All of them are useful, but for those who do not want to commit them to memory, a significant amount of information can be found by simply reading the files found within the /proc and /sys directories like this:

# find /proc | less
# find /sys | less
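To give a flavour of the commands mentioned above, here are a few quick, hedged examples of hardware checks (the exact output will naturally depend on your system, and dmidecode may report little on a minimal virtual machine):

# lsmod | less
# dmidecode -t memory
# lsblk
# df -h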

Consequently, and before we move on, you should now be aware that when you are dealing with hardware analysis, perfecting your skills is about practice and exposure to a server over the long term. My reason for stating this is based on the notion that a simple installation procedure can serve to identify these problems almost immediately, but without that luxury, and as time goes by, it is possible that the hardware will need replacing or servicing. RAID battery packs will fail, memory modules will fail, and, on some occasions, it could be that a particular driver has not fully loaded during the most recent reboot. In this situation, you may find that the kernel is flooding the system with random messages to such an extent that it suggests an entirely different issue is causing the problem. So yes, hardware troubleshooting requires a good measure of patience and observation, and it is for this reason that a quick review of both the lscpu and lspci commands has formed our introduction to troubleshooting CentOS 7.

Understanding dmesg

Before we dive into the subject of log files, I would like to begin by spending a few moments to discuss the importance of the dmesg command.

The dmesg command is used to display messages from the kernel that are specifically related to the process of hardware detection and configuration. I will not go into too much technical detail at this point, but it is important to realize that these messages are derived from the kernel ring buffer. This can not only prove to be of great assistance because it relates back to the subject of hardware troubleshooting, but it also provides evidence as to why an understanding of the system hardware can feed into a possible software diagnosis and vice versa.

The dmesg file is located in the /var/log/ directory, but unlike other files that reside in that directory, the basic syntax to view the contents of the dmesg file is as follows:

# dmesg | less

You can page through the results in the usual way, but if you would like to make the timestamp a little easier to read, you may want to invoke the -T option like this:

# dmesg -T | less

These commands will now provide us with information related to all the hardware drivers loaded into the kernel during the boot sequence. This information will include their status (success or failure), and if a failure is recorded, it will even provide an error message describing why a failure took place. However, as this file can be quite overwhelming, you should use grep to query dmesg in order to streamline this information and simplify the output.

To do this, simply customize the following syntax to suit your needs:

# dmesg -T | grep -i memory

This command will now display all relevant information regarding the total memory available and shared memory details associated with the server. Of course, similar approaches can be made to read the specific information for USB devices, direct memory access (DMA), or even tty.

For example, you can query dmesg to display hardware information related to any Ethernet ports in the following way:

# dmesg -T | grep -i eth0

Depending on your system configuration, the output will look similar to this:

[Sun Apr 19 04:56:57 2015] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready

To extend this approach, you can then modify the previous command in order to discover whether the kernel has detected a specific hard disk. To do this, type:

# dmesg -T | grep sda

Alternatively, you can then use the -i option to ignore the effects of case sensitivity when searching for tty references:

# dmesg | grep -i tty

As you will see, the output of dmesg is verbose and the information contained within it can be used to troubleshoot almost anything from network cards to storage issues. The dmesg output may not give you the answer you are looking for straightaway, but it does provide you with another piece of the puzzle when it is used in combination with the information found in some of the more common log files associated with the CentOS operating system.

Understanding log files

By default, all CentOS system log files can be found in /var/log and a full inventory on your current server can be obtained by typing the following command:

# find /var/log

With that said, every system is different, and for overall simplicity, you will find that some of the more common log files (associated with a minimal installation of CentOS 7) will include:

  • /var/log/messages: This file contains information related to the many native services used by CentOS. This includes (but is not limited to) the kernel logger, the network manager, boot process, mail services, cron jobs, and many other services that do not have their own log files. In many respects, this record can be considered to be a global log file of sorts, and out of habit, it will probably become your first port of call in any troubleshooting process.
  • /var/log/boot.log: This file contains information that is reported when the system boots.
  • /var/log/maillog: This file contains information that is reported by the default mail server used by the system.
  • /var/log/secure: This file contains information that is related to the associated authentication and authorization privileges.
  • /var/log/wtmp: This file contains information related to user login records.
  • /var/log/btmp: This file contains information related to failed login attempts.
  • /var/log/cron: This file contains information related to cron (and anacron).
  • /var/log/lastlog: This is a binary record of the most recent login for each user; like wtmp and btmp, it is read with a dedicated command (see the examples after this list).
  • /var/log/yum.log: This file contains information related to Yum and reports any activity related to the server's package management tools.
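Because wtmp, btmp, and lastlog are binary records rather than plain text, they are not viewed with cat or less. Instead, the following commands will interpret each of them respectively:

# last
# lastb
# lastlog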

Now, before we continue, I want to draw your attention towards the importance of these files, as it is often a good idea to store /var/log on a separate partition from / (root).

A perfect system would maintain a separate partition for /tmp, /usr, and others, but yes, there may be situations where storing your log files on the same partition as / (root) is unavoidable. So remember, if and when the opportunity does arise, you may want to consider storing these directories on a separate filesystem and a separate physical volume (if possible), as this is considered to be good practice with regard to maintaining the overall security, integrity, and performance of the system in question.

However, and having said that, it is also important to recognize that many other packages will create and store logs in other locations. You may even be required to specify these locations yourself, and for this reason, it should be remembered that not all logs are located in /var/log.

For example, if the server in question is hosting one or more websites and storing all the relevant Apache VirtualHost information in a specific /home directory, then the associated log files may be found in a location like this:

/path/to/virtualhost/domain1/log/access_log
/path/to/virtualhost/domain1/log/error_log

The same can be said of many other packages, and this issue arises because the packages may not have the required privileges to write to that directory, while others are designed to maintain all logging activity within their own installation directory. Therefore, and depending on the nature of your system, you may need to spend a few moments analyzing your server's installation structure in order to locate the appropriate log file(s).
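If you are unsure where a particular package writes its logs, a quick search of the filesystem can save a lot of guesswork. As a simple sketch only (adjust the path and pattern to suit your own layout), you could look for anything resembling a log file beneath /home:

# find /home -type f -name "*log*" 2>/dev/null | less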

Reading log files and affecting the output

Viewing or reading a log file is very easy and depending on your personal preferences, the basic syntax to view any of these files can be expressed in any of the following formats:

# less /var/log/filename
# more /var/log/filename
# cat /var/log/filename
# cat /var/log/filename | less
# cat /var/log/filename | more

Note

Remember, depending on the system configuration, you may need root privileges to view a specific log file. The same can be said when you are attempting to make changes to any system files, and for this reason, we will continue as the root user. However, those who use sudo or su (switch user) should change the instructions accordingly.

Log files can vary between applications and services, but the general purpose of these files is to record the time and date of an event and the security level, and to provide a message or general description. Most messages will be general notices or warnings of one type or another, but on certain occasions, errors will also be trapped.

For example, you may see something like this:

Dec  4 12:49:05 localhost postfix/postfix-script[1909]: starting the Postfix mail system

Messages like this are quite ordinary and merely explain what is happening and when it happened. Yes, you can safely ignore them, but given the sheer number of messages you see, some may feel that the system is being a little oversensitive, to the point where a log file is flooded with low-level information. This information may serve no real purpose to many, but in other circumstances you may decide that the information supplied isn't sensitive enough and that more detail is needed. In the end, only you can decide what best suits your needs. So, to take a case in point, let's increase the log sensitivity for the purpose of troubleshooting the system.

To do this, we will begin by running the following command:

# cat /proc/sys/kernel/printk

The output of the preceding command enables you to view the current settings for the kernel, which, on a typical system, will look like this:

4       4       1       7

There is a relationship at work here, and it is important that you understand that printk maintains four numeric values that control a number of settings related to the logging of error messages, while every error message in turn maintains its very own log level in order to define the importance of that message.

The log level values can be summarized in the following way:

  • 0: Kernel emergency
  • 1: Kernel alert; action must be taken immediately
  • 2: Condition of the kernel is considered critical
  • 3: General kernel, error condition
  • 4: General kernel, warning condition
  • 5: Kernel notice of a normal but significant condition
  • 6: Kernel informational message
  • 7: Kernel debug-level messages

So, based on the above information, the log level values of 4, 4, 1, and 7 tell us that the following is now apparent:

  • The first value (4) is called the console log level. This numeric value defines the lowest priority of any message printed to the console, thereby implying that the lower the priority, the higher the log level number.
  • The second value (4) determines the default log level for all messages that do not maintain an exclusive log level.
  • The third value (1) determines the lowest possible log level configuration for the overall console log level. The lower the priority, the higher the log level number.
  • The fourth and final value (7) determines the default value for the overall console log level. Again, the lower the priority, the higher the log level number.

Consequently, you are now in a position to consider making changes to the log level through a configuration file found at /etc/sysctl.conf. This file enables you to make fine adjustments to default settings, and it can be accessed with your favorite text editor in the following manner:

# nano /etc/sysctl.conf

To make the required change use the following syntax:

kernel.printk = X X X X

Here, the actual value of X is a log level setting taken from the options described earlier.

For example, you can change the number of messages by adding the following line:

kernel.printk = 5 4 1 7

Of course, changes made in /etc/sysctl.conf are applied as the system boots, so a reboot is the simplest way to make such a modification take effect. Having done this, you will find that the output of running cat /proc/sys/kernel/printk should now reflect the new values. However, and as a supplementary note of caution, having considered doing this (and yes, you can easily reverse any changes made), it is important to realize that opinions differ on the value of changing these settings. Look at it this way: it may not help you at all, so you should always read around the subject before making these changes in order to confirm that the alteration will suit your general purposes.
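That said, if you would prefer to test a new value immediately (or avoid a reboot altogether), the same kernel parameter can usually be adjusted at runtime with the sysctl command; the first line below applies the value on the spot, while the second re-reads /etc/sysctl.conf so that any saved changes take effect:

# sysctl -w kernel.printk="5 4 1 7"
# sysctl -p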

To view the onboard manual, simply use the following command:

$ man sysctl

On the other hand, for the many other services and applications on your server, you will have additional avenues of investigation to consider and these are generally set by the service or application in question.

A common example of this is Apache. So, if you are debugging a web-based issue related to this service, you may be inclined to open the httpd configuration file like this:

# nano /etc/httpd/conf/httpd.conf

Look or search for the following line:

LogLevel warn

Then, replace the instruction with a more appropriate setting (before saving the file and restarting the service). In this case, you can use:

LogLevel debug
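Having saved the file, the new LogLevel will only apply once Apache has re-read its configuration. On CentOS 7 the service is managed by systemd, so a typical sequence (assuming the httpd package is installed and the default log location is in use) might look like this:

# apachectl configtest
# systemctl restart httpd
# tail -f /var/log/httpd/error_log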

Fortunately, most services and applications do support a form of debugging mode for an improved log output. This will make the log file much more descriptive and easier to work with when troubleshooting the server, but just before we leave this subject, here comes the small print…

When you are working with log files, you should be aware that the information contained within those log files will not always be enough to help you diagnose the issue at hand or discover the cause of a problem. Log files may not only lack the required information, but they can also contain unknown errors and misleading messages. After all, log files only contain a series of (mainly) predefined messages or break points in a package, and these messages have been designed by programmers to make a remark concerning a known event that could have, or has taken place.

Note

Remember…

When affecting the output of a log file, a verbose and detailed output may raise performance or security issues, while detailed logging can also place an undue burden on the CPU or disk I/O operations.

Based on these circumstances, there are no hard and fast rules because we also know that log files have limitations. So, in the end you will rely on a keen eye for detail and a great deal of patience, and for these reasons alone, you must always learn to "listen to the server" as a whole.

Let's put it this way: the answer is there, but it may not be in the log files. Perseverance and a calm (but firm) hand will win the day, and it is this point of contention that will be echoed throughout the pages of this book.

Using tail to monitor log files

So, armed with the previous information and knowing that log files tend to describe events by specifying the time of occurrence, a level of severity, and a preordained message, the key to success in any troubleshooting scenario is based on an ability to work with these records and manipulate them in such a way that they provide us with the information we require to get the job done.

For the purpose of troubleshooting, one of the most useful commands you will use is known as tail. A command-line expression that can be used to read the last lines of a log file is as follows:

# tail -n 100 /var/log/maillog

Similarly, tail can also be used to obtain the most recently added lines like this:

# tail -f /var/log/maillog

Using this command not only gives you the most recent view of the log file in question, but also ensures that all updates are displayed immediately, which provides an instant way to read log files in a live environment. This approach can be described as the perfect way to troubleshoot Apache, Postfix, Nginx, MySQL, and the many other applications or services your server may be using.

For example, you can view the Apache access_log like this:

# tail -f /var/log/httpd/access_log

To take this feature one step further, let's assume that you wanted to get the last 3,000 lines from a log file knowing that it will not fit within your shell window. To account for this requirement, you can pipe the results with the less command like this:

# tail -n 3000 /var/log/messages | less

In this situation, you can now page the results as required, but having used this technique a few times, I think you would agree that this is far more flexible than using the generic cat command; unless of course, you wanted to do something very specific.

Using cat, less, and more

The cat command has been with us for a long time and, returning to our previous discussion relating to hardware and the contents of the /proc directory, you can use the cat command to view detailed information about your server's CPU:

# cat /proc/cpuinfo

If you wish to know more about the server's memory, you can use:

# cat /proc/meminfo

Then, there is always the chance to learn more about your devices by typing:

# cat /proc/devices

As useful as cat is, it is also known for providing a dump of the entire content on the screen, a condition that can seem a little unwieldy if the file is greater than 1,000 lines long. So, in these circumstances, the other option is to use the less and more commands in order to page through specific (static) files in the following way:

# less /var/log/messages
# more /var/log/messages

However, because more is relatively old, most will argue that less is far superior. The less command is similar to more, but less will allow you to navigate back and forth between paged results. So yes, it's an old joke, but from now on, and wherever possible, always know that less really does mean more.

For example, less allows you to search for a particular string. To do this, simply open the following file using less like this:

# less /var/log/messages

Now, in the lower left portion of the screen, type /, followed by a string value like this:

/error

The output will now be adjusted to highlight the search results, and if you are looking for a larger selection of options, simply hit the H key while less is open.

Using grep

Now let's consider the need to search the server's log files for specific keywords.

In this situation, you would use the command known as grep, which also becomes a very helpful technique to learn when you would like to perform an advanced string-based search of almost any file on your server.

Let's say you wanted to search for a specific e-mail address in the mail log file. To do this, you would use grep in the following way:

# grep "user@domain.tld" /var/log/maillog

Taking this one step further, grep can also be used to search recursively across one or more files at the same time.

For example, in order to search the log file directory for an IP address (XXX.XXX.XXX.XXX), you would use the grep command in combination with the -R option like this:

# grep -R "XXX.XXX.XXX.XXX" /var/log/

Similarly, you can add line numbers to the output with the -n option like this:

# grep -Rn "XXX.XXX.XXX.XXX" /var/log/

Moreover, you will also notice that, during a multi-file based search, the filename is made available for each search result, but by employing the -h option, this can be disabled in the following way:

# grep -Rh "XXX.XXX.XXX.XXX" /var/log/

You can ignore case with the -i option in the following way:

# grep -Ri "XXX.XXX.XXX.XXX" /var/log/

Moving beyond this, the output of grep can be sorted by simply piping it to the sort command. An alphabetical sort order (a to z) can be achieved by adding sort at the end of your original command like this:

# grep -R "XXX.XXX.XXX.XXX" /var/log/ | sort 

A reverse alphabetical sort order (z to a) can be achieved by adding the -r option like this:

# grep -R "XXX.XXX.XXX.XXX" /var/log/ | sort -

And finally, if you wish to search for more than one value, you can invoke the -E argument like this (but do not include unnecessary white spaces between the pipes):

# grep -E "term 1|term 2|term 3" /var/log/messages

Of course, grep can do so much more, but for the purpose of troubleshooting, I would now like to draw your attention to one final, but very useful command. Known as diff, this command can be very useful in determining the differences between two files.

Using diff

The diff command is not necessarily considered to be a tool that is associated with log files unless you are comparing backups for a specific purpose. However, the diff command is very useful when comparing changes across an application.

For example, diff will enable you to compare the differences between two Apache configuration files, but by using the -u option, you will be able to include additional information such as the time and date:

# diff -u /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.backup

Now, depending on the size of the files in question and the speed of your server, it may take a few seconds (or even minutes) to complete the task, and yes, I do realize we were digressing from the context of log files, but in time, I think that you will find this command will prove to be very useful.

For example, you may want to compare the contents of two folders using the -rq option to make it recursive like this:

# diff -rq /path/to/folder1 /path/to/folder2

To learn more about the diff command, simply review the manual by typing:

$ man diff

Using truncation

So, having been shown how easy it is to work with log files, we should always be mindful that records like this do grow in size, and for this precise reason, they can become difficult to work with as time passes. In fact, you should be aware that oversized log files can impact the system's performance. With this in mind, it is a good idea to monitor any log rotation process and adjust it (on a regular basis) according to need.
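On CentOS 7, this rotation is normally handled by logrotate, which reads /etc/logrotate.conf together with the drop-in files under /etc/logrotate.d/. Purely as an illustrative sketch (the filename and log path below are hypothetical), a drop-in entry for a custom application log might look like this:

# /etc/logrotate.d/myapp (hypothetical example)
/var/log/myapp/app.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}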

Moreover, where log rotation can be critical for a medium- to high-load environment, I would suggest that you manage this solution effectively. However, in situations where the effect of this process proves negligible, the following fail-safe technique will enable you to scrub a log file clean by typing either one of the following commands:

# cat /dev/null > /path/to/file

Or more appropriately, you can simply use the truncate command like this:

# truncate --size 0 /path/to/file

This process is known as truncation, and as mentioned, this should remain something of a last resort, as the preceding command will remove all the data contained within the file in question. So remember, if the file contains important information that you may need to review at some time in the future, back it up before you use truncate.
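For example, one quick way to follow that advice is to take a compressed copy first and only then empty the live file (the path below is simply a placeholder, as before):

# cp /path/to/file /path/to/file.bak
# gzip /path/to/file.bak
# truncate --size 0 /path/to/file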

Summary

This chapter was intended to provide an introduction to the subject of troubleshooting CentOS 7 without the intention of burdening you with yet another list of rules, instructions, or procedures that would ill-suit your circumstances or immediate needs. As we know, troubleshooting is a journey, and where the first chapter has served to introduce you to a selection of concepts and methods, every page that follows will ensure that you are one step closer to being at ease with the server you are about to diagnose and repair. So yes, the journey has just begun, and we will now approach the subject of troubleshooting active processes.



Who is this book for?

It is assumed that you will already have a server up and running, that you have a good working knowledge of CentOS, and that you are comfortable working with the services used by your server.

What you will learn

  • Consider the need to understand, manipulate, and make use of the relevant system log files
  • Analyze, review, and make decisions regarding how and what to do with troublesome active processes on a CentOS server
  • Discover how to approach issues regarding the network environment
  • Approach issues regarding package management and learn how to take the necessary steps to diagnose and fix the problems found in relation to YUM- and RPM-based needs
  • Diagnose and troubleshoot issues related to Samba, NFS, and various external storage methods
  • Diagnose and troubleshoot issues related to iptables, SELinux, some common firewalls, shell access, and SSH

Product Details

Publication date : Jun 24, 2015
Length: 190 pages
Edition : 1st
Language : English
ISBN-13 : 9781785289828


Table of Contents

11 Chapters
1. Basics of Troubleshooting CentOS
2. Troubleshooting Active Processes
3. Troubleshooting the Network Environment
4. Troubleshooting Package Management and System Upgrades
5. Troubleshooting Users, Directories, and Files
6. Troubleshooting Shared Resources
7. Troubleshooting Security Issues
8. Troubleshooting Database Services
9. Troubleshooting Web Services
10. Troubleshooting DNS Services
Index

Customer reviews

Rating distribution: 4 (3 Ratings)
5 star: 33.3%
4 star: 33.3%
3 star: 33.3%
2 star: 0%
1 star: 0%

Amazon Customer, Mar 12, 2018 (5 stars)
Awesome book!

Ed P, Jul 10, 2015 (4 stars)
Jonathan Hobson's book is a handy intro to troubleshooting a variety of issues that CentOS system admins may encounter. In addition he covers some basic information on installing and configuring some of the more popular Linux services, like MySQL/MariaDB, file shares and DNS services. While not exhaustive in any one area, it does provide a place to start when trying to resolve problems on the operating system.

Human, Jan 29, 2017 (3 stars)
This book mainly seems to be composed of various basic Linux commands with a few examples. I have a few months experience supporting small web servers (mainly VPS's) with no prior server experience and I don't feel that I learned a lot from this, though it probably would have been an OK way to get started when I was first trying to learn if this wasn't covered in the training I got. I need much more detail than this for work.
