Bash Cookbook

Crash Course in Bash

The primary purpose of this chapter is to give you enough knowledge about the Linux shell/Bash to get you up and running, so that the remainder of the book will fall into place.

In this chapter, we will cover the following topics:

  • Getting started with Bash and CLI fundamentals
  • Creating and using basic variables
  • Hidden Bash variables and reserved words
  • Conditional logic using if, else, and elif
  • Case/switch statements and loop constructs
  • Using functions and parameters
  • Including source files
  • Parsing program input parameters
  • Standard in, standard out, and standard error
  • Linking commands using pipes
  • Finding more information about the commands used within Bash
This chapter will set you up with the basic knowledge needed to complete the recipes in the remaining chapters of the book.

Getting started with Bash and CLI fundamentals

First, we need to open a Linux terminal or shell. Depending on your flavor (distribution) of Linux, this can be done in one of several ways, but in Ubuntu the easiest way is to navigate to the Applications menu and find the application labeled Terminal. The terminal or shell is the place where a user enters commands and where they are executed. Simply put, any results are displayed, and the terminal remains open, waiting for new commands to be entered. Once a shell has been opened, a prompt will appear, looking similar to the following:

rbrash@moon:~$

The prompt will be in the format of your username@YourComputersHostName followed by a delimiter. Throughout this cookbook, you will see commands with the user rbrash; this is short for the author's name (Ron Brash), and in your case, it will match your username.

It may also look similar to:

root@hostname #

The $ refers to a regular user and the # refers to root. In the Linux and Unix worlds, root refers to the root user, which is similar to the Windows Administrator user. It can perform any manner of task, so exercise caution when working with root privileges. For example, the root user can access all files on the OS and can delete any or all critical files used by the OS, which could render the system unusable or broken.

When a terminal or shell is run, the Bash shell is executed with a set of parameters and commands specific to the user's bash profile. This profile is often called the .bashrc and can be used to contain command aliases, shortcuts, environment variables, and other user enhancements, such as prompt colors. It is located at ~/.bashrc or ~/.bash_profile.

~ or ~/ is a shortcut for your user’s home directory. It is synonymous with /home/yourUserName/ , and for root, it is /root.
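
As an illustration only, the following are the kinds of entries such a profile might contain; the alias, editor, and prompt shown here are examples rather than defaults:

# Example ~/.bashrc additions (illustrative only)
alias ll='ls -la'          # a command alias: typing ll runs ls -la
export EDITOR=vim          # an environment variable honored by many programs
export PS1='\u@\h:\w\$ '   # a customized prompt: user@host:directory$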

Your user's Bash shell also contains a history of all of the commands run by the user (located in ~/.bash_history), which can be accessed using the history command, shown as follows:

rbrash@moon:~$ history
1002 ls
1003 cd ../
1004 pwd
1005 whoami
1006 history

For example, your first command might be ls, to determine the contents of the current directory. The cd ../ command changes the working directory to the parent directory (one level up). The pwd command returns the complete path of the working directory (that is, where the terminal is currently navigated to).

Another command you may execute on the shell might be the whoami command, which will return the user currently logged in to the shell:

rbrash@moon:/$ whoami
rbrash
rbrash@moon:/$

Using the concept of entering commands, we can put those (or any) commands into a shell script. In its most simplistic representation, a shell script looks like the following:

#!/bin/bash
# Pound or hash sign signifies a comment (a line that is not executed)
whoami #Command returning the current username
pwd #Command returning the current working directory on the filesystem
ls # Command returning the results (file listing) of the current working directory
echo "Echo one 1"; echo "Echo two 2" # Notice the semicolon used to delimit multiple commands in the same execution.

The first line contains the path to the interpreter and tells the shell which interpreter to use when running this script. The first line always contains the shebang (#!) followed immediately by the interpreter's path, with no space in between:

#!/bin/bash

A script cannot execute by itself; it needs to be executed by a user or called by another program, the system, or another script. To be executed directly, a script also requires executable permissions, which a user can grant with the chmod command.

To add or grant basic executable permissions, use the following command:

$ chmod a+x script.sh

To execute the script, one of the following methods can be used:

$ bash script.sh          # if the user is currently in the same directory as the script
$ bash /path/to/script.sh # Full path

If the correct permissions are applied, and the shebang and Bash interpreter path are correct, you may alternatively use the following two commands to execute script.sh:

$ ./script.sh # if the user is currently in the same directory as the script
$ /path/to/script.sh # Full path

From the preceding command snippets, you might notice a few things regarding paths. The path to a script, file, or executable can be referred to using a relative address or a full path. Relative addressing effectively tells the interpreter to execute whatever exists in the current directory, or to fall back on the directories listed in the user's global shell $PATH variable. For example, the system knows that executable binaries are stored in /usr/bin, /bin, and /sbin and will look there first. A full path is more concrete and hardcoded; the interpreter will try to use the complete path, for example, /bin/ls or /usr/local/bin/myBinary.

When you are looking to run a binary in the directory you are currently working in, you can use either ./script.sh, bash script.sh, or even the full path. Obviously, there are advantages and disadvantages to each approach.

Hardcoded or full paths can be useful when you know exactly where a binary may reside on a specific system and you cannot rely on $PATH variables for potential security or system configuration reasons.
Relative paths are useful when flexibility is required. For example, program ABC could be in location /usr/bin or in /bin, but it could be called simply with ABC instead of /pathTo/ABC.
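
If you are curious where a command would be resolved from, the type and which commands can show you; a quick check at the prompt might look like this (the exact output will differ between systems):

$ type ls
ls is aliased to `ls --color=auto'
$ which ls
/bin/ls
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin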

So far, we have covered what a basic Bash script looks like, and briefly introduced a few very basic but essential commands and paths. However, to create a script, you need an editor! In Ubuntu, you usually have a few editors available by default for creating a Bash script: vi/vim, nano, and gedit. There are a number of other text editors or integrated development environments (IDEs) available, but this is a personal choice, and it is up to the reader to find one they like. All of the examples and recipes in this book can be followed regardless of the text editor chosen.

Instead of a full-blown IDE such as the popular Eclipse, lighter editors such as Emacs or Geany can also be useful as flexible IDEs within resource-constrained environments, for example, on a Raspberry Pi.

Knowledge of vi/vim and nano is very handy when you want to create or modify a script remotely over SSH and on the console. Vi/vim may seem a bit archaic, but it saves the day when your favorite editor is not installed or cannot be accessed.

Your first Bash script with Vim

Let's start by creating a script using the improved version of vi, called Vim. If Vim (Vi IMproved) is not installed, it can be installed with sudo or as root using one of the following commands (-y is short for yes):

For Ubuntu or Debian-based distributions:
$ sudo apt-get -y install vim

For CentOS or RHEL:
$ sudo yum install -y vim

For Fedora:
$ sudo dnf install -y vim

Open a terminal and enter the following commands to first see where your terminal is currently navigated to, and to create the script using vim:

$ pwd
/home/yourUserName
$ vim my_first_script.sh

The terminal window will transform into the Vim application, and you will be just about ready to program your first script. Press the Esc key followed by the I key to enter Insert mode; an INSERT indicator will appear in the bottom left and the cursor block will begin to flash:

To navigate Vim, you may use any number of keyboard shortcuts, but the arrow keys are the simplest to move the cursor up, down, left, and right. Move the cursor to the beginning of the first line and type the following:

#!/bin/bash
# Echo this is my first comment
echo "Hello world! This is my first Bash script!"
echo -n "I am executing the script with user: "
whoami
echo -n "I am currently running in the directory: "
pwd
exit 0

We have already introduced the concept of a comment and a few basic commands, but we have yet to introduce the flexible echo command. The echo command can be used to print text to the console or into files, and the -n flag prints text without the trailing newline character (a newline has the same effect as pressing Enter on the keyboard); this allows the output from the whoami and pwd commands to appear on the same lines as the labels echoed before them.

The program also exits with a status of 0, which means that it exited with a normal status. This will be covered later as we move toward searching or checking command exit statuses for errors and other conditions.

When you've finished, press Esc to exit Insert mode and return to command mode; typing : will then allow you to enter the Vim command wq. In summary, type the following key sequence: Esc and then :wq. This will exit Vim by writing the file to disk (w) and quitting (q), and will return you to the console.

More information about Vim can be obtained by reviewing its documentation using the Linux manual pages or by referring to a sibling book available from Packt (https://www.packtpub.com/application-development/hacking-vim-72).

To execute your first script, enter the bash my_first_script.sh command and the console will return a similar output:

$ bash my_first_script.sh 
Hello world! This is my first Bash script!
I am executing the script with user: rbrash
I am currently running in the directory: /home/rbrash
$

Congratulations—you have created and executed your first Bash script. With these skills, you can begin creating more complex scripts to automate and simplify just about any daily CLI routines.

Creating and using basic variables

The best way to think of variables is as placeholders for values. They can be permanent (static) or transient (dynamic), and they will have a concept called scope (more on this later). To get ready to use variables, we need to think about the script you just wrote: my_first_script.sh. In the script, we could have easily used variables to contain values that are static (there every time) or dynamic ones created by running commands every time the script is run. For example, if we would like to use a value such as the value of PI (3.14), then we could use a variable like this short script snippet:

PI=3.14
echo "The value of PI is $PI"

If included in a full script, the script snippet would output:

The value of PI is 3.14

Notice that setting a value (3.14) to a variable is called assignment. We assigned the value 3.14 to a variable with the name PI. We also referred to the PI variable using $PI. This can be achieved in a number of ways:

echo "1. The value of PI is $PI"
echo "2. The value of PI is ${PI}"
echo "3. The value of PI is" $PI

This will output the following:

1. The value of PI is 3.14
2. The value of PI is 3.14
3. The value of PI is 3.14

While the output is identical, the mechanisms are slightly different. In version 1, we refer to the PI variable within double quotes, which indicate a string (an array of characters). We could also use single quotes, but that would make it a literal string and the variable would not be expanded. In version 2, we refer to the variable inside of { } (squiggly brackets); this is useful for protecting the variable name in cases where the surrounding text would otherwise break the expansion. The following is an example:

echo "1. The value of PI is $PIabc" # Since PIabc is not declared, it will be empty string
echo "2. The value of PI is ${PI}" # Still works because we correctly referred to PI

If any variable is not declared and then we try to use it, that variable will be initialized to an empty string.
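
To see what the braces protect against, compare appending text directly after the variable name with appending it after the brace-protected form; a short illustration:

PI=3.14
echo "Without braces: $PIabc"  # Bash expands the undeclared variable PIabc -> empty string
echo "With braces: ${PI}abc"   # Bash expands PI and then appends the literal text abc -> 3.14abc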

The following assignment stores the value as a string representation. In our example, $PI is still a variable containing a number, but we could have created the PI variable like this as well:

PI="3.14" # Notice the double quotes ""

This would contain within the variable a string and not a numeric value such as an integer or float.

The concept of data types is not explored to its fullest in this cookbook. It is best left as a topic for the reader to explore, as it is a fundamental concept of programming and computer usage.

Wait! You say there is a difference between a number and a string? Absolutely, because without conversion (or being set correctly in the first place), this may limit the things you can do with the value. For example, the string "3.14" is not the same as the number 3.14. The string "3.14" is made up of four characters: 3 + . + 1 + 4. If we wanted to perform multiplication on our PI value in string form, either the calculation/script would break or we would get a nonsensical answer.
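
As a small illustration of why the distinction matters, Bash's built-in arithmetic only handles integers, while a floating-point value such as PI needs an external tool such as bc (this example assumes bc is installed):

COUNT=3
echo $(( COUNT * 2 ))      # integer arithmetic -> 6
PI=3.14
echo "${PI} * 2" | bc -l   # floating-point arithmetic via bc -> 6.28
echo "${PI}${PI}"          # string concatenation, not addition -> 3.143.14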

We will talk more about conversion later, in Chapter 2, Acting like a Typewriter and File Explorer.

Let's say we want to assign one variable to another. We would do this like so:

VAR_A=10
VAR_B=$VAR_A
VAR_C=${VAR_B}

If the preceding snippet were within a functioning Bash script, we would get the value 10 for each variable.

Hands-on variable assignment

Open a new blank file and add the following to it:

#!/bin/bash

PI=3.14
VAR_A=10
VAR_B=$VAR_A
VAR_C=${VAR_B}

echo "Let's print 3 variables:"
echo $VAR_A
echo $VAR_B
echo $VAR_C

echo "We know this will break:"
echo "0. The value of PI is $PIabc" # since PIabc is not declared, it will be empty string

echo "And these will work:"
echo "1. The value of PI is $PI"
echo "2. The value of PI is ${PI}"
echo "3. The value of PI is" $PI

echo "And we can make a new string"
STR_A="Bob"
STR_B="Jane"
echo "${STR_A} + ${STR_B} equals Bob + Jane"
STR_C=${STR_A}" + "${STR_B}
echo "${STR_C} is the same as Bob + Jane too!"
echo "${STR_C} + ${PI}"

exit 0

Notice the nomenclature. It is great to use a standardized mechanism to name variables, but using STR_A and VAR_B is clearly not descriptive enough if they are used multiple times. In the future, we will use more descriptive names, such as VAL_PI to mean the value of PI or STR_BOBNAME to mean the string representing Bob's name. In Bash, capitalization is often used for variable names, as it adds clarity.

Save the file, exit to a terminal (open one if one isn't already open), and execute your script after applying the appropriate permissions. You should see the following output:

Let's print 3 variables:
10
10
10
We know this will break:
0. The value of PI is
And these will work:
1. The value of PI is 3.14
2. The value of PI is 3.14
3. The value of PI is 3.14
And we can make a new string
Bob + Jane equals Bob + Jane
Bob + Jane is the same as Bob + Jane too!
Bob + Jane + 3.14

First, we saw how we can use three variables, assign values to each of them, and print them. Secondly, we saw through a demonstration that referencing an undeclared variable ($PIabc) silently produces an empty string (let's keep this in mind). Thirdly, we printed out our PI variable and concatenated it to a string using echo. Finally, we performed a few more types of concatenation, including a final version, which appends a numeric value to a string.

Hidden Bash variables and reserved words

Wait—there are hidden variables and reserved words? Yes! There are words you can't use in your script unless properly contained in a construct such as a string. Global variables are available in a global context, which means that they are visible to all scripts in the current shell or open shell consoles. In a later chapter, we will explore global shell variables more, but just so you're aware, know that there are useful variables available for you to reuse, such as $USER, $PWD, $OLDPWD, and $PATH.
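
For example, a few of these can be used immediately at the prompt or in a script (the values shown are illustrative and will differ on your machine):

$ echo "I am ${USER} and I am working in ${PWD}"
I am rbrash and I am working in /home/rbrash
$ echo "I was previously in ${OLDPWD}"
I was previously in /home/rbrash/Documents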

To see a list of all shell environment variables, you can use the env command (the output has been cut short):

$ env
XDG_VTNR=7
XDG_SESSION_ID=c2
CLUTTER_IM_MODULE=xim
XDG_GREETER_DATA_DIR=/var/lib/lightdm-data/rbrash
SESSION=ubuntu
SHELL=/bin/bash
TERM=xterm-256color
XDG_MENU_PREFIX=gnome-
VTE_VERSION=4205
QT_LINUX_ACCESSIBILITY_ALWAYS_ON=1
WINDOWID=81788934
UPSTART_SESSION=unix:abstract=/com/ubuntu/upstart-session/1000/1598
GNOME_KEYRING_CONTROL=
GTK_MODULES=gail:atk-bridge:unity-gtk-module
USER=rbrash
....

Modifying the PATH environment variable can be very useful. It can also be frustrating if it is set incorrectly, because it contains the filesystem paths that the shell searches for binaries. For example, you have binaries in /bin, /sbin, or /usr/bin, but when you run a single command, the command runs without you having to specify the full path.
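
As a sketch, adding a personal script directory to PATH for the current shell session could look like the following; the directory name is only an example, and the change lasts until the shell is closed unless it is also added to ~/.bashrc:

$ mkdir -p ~/my_scripts
$ export PATH="${PATH}:${HOME}/my_scripts"
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/rbrash/my_scripts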

Alright, so we have acknowledged the existence of pre-existing variables and that there could be new global variables created by the user or other programs. When using variables that have a high probability of being similarly named, be careful to make them specific to your application.

In addition to hidden variables, there are also words that are reserved for use within a script or shell. For example, if and else are words that provide conditional logic to scripts. Imagine if you created a command, variable, or function (more on this later) with the same name as one that already exists. The script would likely break or perform an erroneous operation.

When trying to avoid any naming collisions (or namespace collisions), try to make your variables more likely to be used by your application by appending or prefixing an identifier that is likely to be unique.

The following list contains some of the more common reserved words that you will encounter. Some of them are likely to look very familiar, because they tell the Bash interpreter to interpret text in a specific way, redirect output, or run an application in the background, or because they are also used in other programming/scripting languages:

  • if, elif, else, fi
  • while, do, for, done, continue, break
  • case, select, time
  • function
  • &, |, >, <, !, =
  • #, $, (, ), ;, {, }, [, ], \

The last element in the list contains an array of specific characters that tell Bash to perform specific functionality; the pound sign, for example, signifies a comment. However, the backslash (\) is very special because it is an escape character. Escape characters are used to stop the interpreter from executing the special functionality it would otherwise associate with a particular character. For example:

$ echo # Comment

$ echo \# Comment
# Comment

Escaping characters will become very useful in Chapter 2, Acting like a Typewriter and File Explorer, when working with strings and single/double quotes.

The escape character prevents the interpreter from acting on the special meaning of the character that immediately follows the backslash. However, this is not necessarily consistent when working with newlines and carriage returns (\n, \r\n) and null bytes (\0).
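
For instance, the echo command prints sequences such as \n literally unless the -e flag is given to enable their interpretation; a quick comparison:

$ echo "Line1\nLine2"
Line1\nLine2
$ echo -e "Line1\nLine2"
Line1
Line2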

Conditional logic using if, else, and elif

The previous section introduced the concept that there are several reserved words and a number of characters that have an effect on the operation of Bash. The most basic, and probably most widely used conditional logic is with if and else statements. Let's use an example code snippet:

#!/bin/bash
AGE=17
if [ ${AGE} -lt 18 ]; then
echo "You must be 18 or older to see this movie"
fi

Notice the space after the opening square bracket and before the closing square bracket in the if statement. Bash is particularly picky about bracket syntax.

To evaluate whether the variable AGE is less than (<) 18, we use the -lt operator (Bash offers a number of syntactical constructs for evaluating variables) inside an if statement. In our if statement, if $AGE is less than 18, we echo the message You must be 18 or older to see this movie. Otherwise, the script will not execute the echo statement and will continue execution. Notice that the if statement ends with the reserved word fi. This is not a mistake and is required by Bash syntax.

Let's say we want to add a catchall using else. If the condition of the if statement is not satisfied, then the else command block will be executed:

#!/bin/bash
AGE=40
if [ ${AGE} -lt 18 ]
then
echo "You must be 18 or older to see this movie"
else
echo "You may see the movie!"
exit 1
fi

With AGE set to the integer value 40, the if condition is not satisfied, so the else command block will be executed.

Evaluating numbers

Let's say we want to introduce another if condition and use elif (short for else if):

#!/bin/bash
AGE=21
if [ ${AGE} -lt 18 ]; then
echo "You must be 18 or older to see this movie"
elif [ ${AGE} -eq 21 ]; then
echo "You may see the movie and get popcorn"
else
echo "You may see the movie!"
exit 1
fi

echo "This line might not get executed"

If AGE is set and equals 21, then the snippet will echo:

You may see the movie and get popcorn
This line might not get executed

Using if, elif, and else, combined with other evaluations, we can execute specific branches of logic and functions, or even exit our script. To evaluate numeric variables, use the following operators:

  • -gt (greater than >)
  • -ge (greater or equal to >=)
  • -lt (less than <)
  • -le (less than or equal to <=)
  • -eq (equal to)
  • -ne (not equal to)

Evaluating strings

As mentioned in the variables subsection, numeric values are different from strings. Strings are typically evaluated like this:

#!/bin/bash
MY_NAME="John"
NAME_1="Bob"
NAME_2="Jane"
NAME_3="Sue"
NAME_4="Kate"

if [[ "${MY_NAME}" == "Ron" ]]; then
echo "Ron is home from vacation"
elif [[ "${MY_NAME}" != "${NAME_1}" && "${MY_NAME}" != "${NAME_2}" && "${MY_NAME}" == "John" ]]; then
echo "John is home after some unnecessary AND logic"
elif [[ "${MY_NAME}" == "${NAME_3}" || "${MY_NAME}" == "${NAME_4}" ]]; then
echo "Looks like one of the ladies are home"
else
echo "Who is this stranger?"
fi

In the preceding snippet, the second elif branch is the one that will be executed, and the string John is home after some unnecessary AND logic will be echoed to the console. In the snippet, the logic flows like this:

  1. If MY_NAME is equal to Ron, then echo "Ron is home from vacation"
  2. Else if MY_NAME is not equal to NAME_1 AND MY_NAME is not equal to NAME_2 AND MY_NAME is equal to John, then echo "John is home after some unnecessary AND logic"
  3. Else if MY_NAME is equal to NAME_3 OR MY_NAME is equal to NAME_4, then echo "Looks like one of the ladies are home"
  4. Else echo "Who is this stranger?"

Notice the operators: &&, ||, ==, and != 

  • && (means and)
  • || (means or)
  • == (is equal to)
  • != (not equal to)
  • -n (is not null or is not set)
  • -z (is null and zero length)

Null means not set or empty in the world of computing. There are many different types of operators or tests that can be used in your scripts. For more information, check out http://tldp.org/LDP/abs/html/comparison-ops.html and https://www.gnu.org/software/bash/manual/html_node/Shell-Arithmetic.html#Shell-Arithmetic.

You can also compare two values using (( "$a" > "$b" )) or [[ "$a" > "$b" ]]. Notice the usage of double parentheses and double square brackets.
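
The two forms do not behave the same way: inside [[ ]], the > operator compares lexicographically (character by character), while inside (( )), it compares numerically. A short sketch of the difference, using illustrative values:

#!/bin/bash
A=10
B=9
if [[ "$A" > "$B" ]]; then
echo "[[ ]]: $A sorts after $B as a string"
else
echo "[[ ]]: $A sorts before $B as a string (because '1' comes before '9')"
fi
if (( A > B )); then
echo "(( )): $A is numerically greater than $B"
fi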

Nested if statements

If a single level of if statements is not enough and you would like to have additional logic within an if statement, you can create nested conditional statements. This can be done in the following way:

#!/bin/bash
USER_AGE=18
AGE_LIMIT=18
NAME="Bob" # Change to your username if you want to execute the nested logic
HAS_NIGHTMARES="true"

if [ "${USER}" == "${NAME}" ]; then
if [ ${USER_AGE} -ge ${AGE_LIMIT} ]; then
if [ "${HAS_NIGHTMARES}" == "true" ]; then
echo "${USER} gets nightmares, and should not see the movie"
fi
fi
else
echo "Who is this?"
fi

Case/switch statements and loop constructs

Besides if and else statements, Bash offers case (or switch) statements and loop constructs that can be used to simplify logic so that it is more readable and maintainable. Imagine creating an if statement with many elif evaluations. It would become cumbersome!

#!/bin/bash
VAR=10

# Multiple IF statements
if [ $VAR -eq 1 ]; then
echo "$VAR"
elif [ $VAR -eq 2 ]; then
echo "$VAR"
elif [ $VAR -eq 3 ]; then
echo "$VAR"
# .... to 10
else
echo "I am not looking to match this value"
fi

In a large block of conditional logic with many ifs and elifs, each if and elif needs to be evaluated before a specific branch of code is executed. It can be faster to use a case/switch statement, because the first match is executed (and it looks prettier).

Basic case statement

Instead of if/else statements, you can use case statements to evaluate a variable. Notice that esac is case backwards and is used to exit the case statement similar to fi for if statements.

Case statements follow this flow:

case $THING_I_AM_TO_EVALUATE in
  1) # Condition to evaluate is number 1 (could be "a" for a string too!)
echo "THING_I_AM_TO_EVALUATE equals 1"
;; # Notice that this is used to close this evaluation
*) # The * signifies the catchall (when THING_I_AM_TO_EVALUATE does not match any of the values above)
echo "FALLTHROUGH or default condition"
esac # Close case statement

The following is a working example:

#!/bin/bash
VAR=10 # Edit to 1 or 2 and re-run, after running the script as is.
case $VAR in
1)
echo "1"
;;
2)
echo "2"
;;
*)
echo "What is this var?"
exit 1
;;
esac

Basic loops

Can you imagine iterating through a list of files or a dynamic array and monotonously evaluating each and every one? Or waiting until a condition was true? For these types of scenarios, you may want to use a for loop, a do while loop, or an until loop to improve your script and make things easy. For loops, do while loops, and until loops may seem similar, but there are subtle differences between them.

For loop

The for loop is usually used when you have multiple tasks or commands to execute for each of the entries in an array or want to execute a given command on a finite number of items. In this example, we have an array (or list) containing three elements: file1, file2, and file3. The for loop will echo each element within FILES and exit the script:

#!/bin/bash

FILES=( "file1" "file2" "file3" )
for ELEMENT in ${FILES[@]}
do
echo "${ELEMENT}"
done

echo "Echo'd all the files"
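
Bash also supports other for loop forms that can be handy in scripts, such as iterating over a numeric range produced by brace expansion, or a C-style counter; a brief sketch:

#!/bin/bash
for i in {1..3}
do
echo "Range loop: ${i}"
done

for (( i = 0; i < 3; i++ ))
do
echo "C-style loop: ${i}"
done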

Do while loop

As an alternative, we have included the do while loop. It is similar to a for loop, but better suited to dynamic conditions, such as when you do not know when a value will be returned, or when a task must be performed until a condition is met. The condition within the square brackets is evaluated the same way as in an if statement:

#!/bin/bash
CTR=1
while [ ${CTR} -lt 9 ]
do
echo "CTR var: ${CTR}"
((CTR++)) # Increment the CTR variable by 1
done
echo "Finished"

Until loop

For completeness, we have included the until loop. It is not used very often and is almost the same as a do while loop, except that it runs until its condition becomes true, rather than while it remains true. Notice that its condition and operation are consistent with incrementing a counter until a value is reached:

#!/bin/bash
CTR=1
until [ ${CTR} -gt 9 ]
do
echo "CTR var: ${CTR}"
((CTR++)) # Increment the CTR variable by 1
done
echo "Finished"

Using functions and parameters

So far in the book, we have mentioned that function is a reserved word, and we have only written Bash scripts that run as a single procedure, but what is a function?

To answer that, first we need a definition: a function is a self-contained section of code that performs a single task. However, a function may also execute many subtasks in order to complete its main task.

For example, you could have a function called file_creator that performs the following tasks:

  1. Check to see whether a file exists.
  2. If the file exists, truncate it. Otherwise, create a new one.
  3. Apply the correct permissions. 

A function can also be passed parameters. Parameters are like variables that can be set outside of a function and then used within the function itself. This is really useful because we can create segments of code that perform generic tasks that are reusable by other scripts or even within loops themselves. You may also have local variables that are not accessible outside of a function and for usage only within the function itself. So what does a function look like?

#!/bin/bash
function my_function() {
local PARAM_1="$1"
local PARAM_2="$2"
local PARAM_3="$3"
echo "${PARAM_1} ${PARAM_2} ${PARAM_3}"
}
my_function "a" "b" "c"

As we can see in the simple script, there is a function declared as my_function using the function reserved word. The content of the function is contained within the squiggly brackets {} and introduces three new concepts:

  • Parameters are referred to systematically like this: $1 for parameter 1, $2 for parameter 2, $3 for parameter 3, and so on
  • The local keyword refers to the fact that variables declared with this keyword remain accessible only within this function (see the short demonstration after this list)
  • We can call functions merely by name and use parameters simply by adding them, as in the preceding example
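
A minimal sketch of how local scoping behaves (the variable and function names here are only illustrative):

#!/bin/bash
function scope_demo() {
local INNER="only visible inside the function"
OUTER="visible after the call, because it was not declared local"
echo "Inside: ${INNER}"
}
scope_demo
echo "Outside: '${INNER}'" # Empty - INNER was local to scope_demo
echo "Outside: '${OUTER}'" # Set - OUTER became a global variable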

In the next section, we'll dive into a more realistic example that should drive the point home a bit more: functions are helpful every day, and they make functionality easily reusable wherever appropriate.

Using a function with parameters within a for loop

In this short example, we have a function called create_file, which is called within a loop for each file in the FILES array. The function creates a file, modifies its permissions, and then passively checks for its existence using the ls command:

#!/bin/bash
FILES=( "file1" "file2" "file3" ) # This is a global variable

function create_file() {
local FNAME="${1}" # First parameter
local PERMISSIONS="${2}" # Second parameter
touch "${FNAME}"
chmod "${PERMISSIONS}" "${FNAME}"
ls -l "${FNAME}"
}

for ELEMENT in ${FILES[@]}
do
create_file "${ELEMENT}" "a+x"
done

echo "Created all the files with a function!"
exit 0

Including source files

In addition to functions, we can also create multiple scripts and include them so that we can utilize any shared variables or functions.

Let's say we have a library or utility script that contains a number of functions useful for creating files. This script by itself could be useful or reusable for a number of scripting tasks, so we make it program neutral. Then, we have another script, but this one is dedicated to a single task: performing useless file system operations (IO). In this case, we would have two files:

  1. io_maker.sh (which includes library.sh and uses library.sh functions)
  2. library.sh (which contains declared functions, but does not execute them)

The io_maker.sh script simply imports or includes the library.sh script and inherits knowledge of any global variables, functions, and other inclusions. In this manner, io_maker.sh effectively thinks that these other available functions are its own and can execute them as if they were contained within it.

Including/importing a library script and using external functions

To prepare for this example, create the following two files and open both:

  • io_maker.sh
  • library.sh

Inside library.sh, add the following:

#!/bin/bash

function create_file() {
local FNAME=$1
touch "${FNAME}"
ls "${FNAME}" # If output doesn't return a value - file is missing
}

function delete_file() {
local FNAME=$1
rm "${FNAME}"
ls "${FNAME}" # If output doesn't return a value - file is missing
}

Inside io_maker.sh, add the following:

#!/bin/bash

source library.sh # You may need to include the path as it is relative
FNAME="my_test_file.txt"
create_file "${FNAME}"
delete_file "${FNAME}"

exit 0

When you run the script, you should get the following output:

$ bash io_maker.sh
my_test_file.txt
ls: cannot access 'my_test_file.txt': No such file or directory

Although not obvious, we can see that both functions are executed. The first line of output is the ls command, successfully finding my_test_file.txt after creating the file in create_file(). In the second line, we can see that ls returns an error when we delete the file passed in as a parameter. 

Unfortunately, up until now, we have only been able to create and call functions, and execute commands. The next step, discussed in the next section, is to retrieve commands and function return codes or strings.

Retrieving return codes and output

Up until now, we have been using a command called exit intermittently to exit scripts. For those of you who are curious, you may have already scoured the web to find out what this command does, but the key concept to remember is that every script, command, or binary exits with a return code. Return codes are numeric and are limited to being between 0-255 because an unsigned 8-bit integer is used. If you use a value of -1, it will return 255.
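
A quick check at the prompt illustrates the wrap-around behavior (shown here with Bash; some other shells reject negative exit values):

$ bash -c 'exit 3'; echo $?
3
$ bash -c 'exit 300'; echo $?   # values are truncated to 8 bits: 300 % 256 = 44
44
$ bash -c 'exit -1'; echo $?    # -1 wraps around to 255
255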

Okay, so in what ways are return codes useful? Return codes are useful when you want to know whether a match was found when performing a search (for example), and whether a command completed successfully or encountered an error. Let's dig into a real example using the ls command on the console:

$ ls ~/this.file.no.exist
ls: cannot access '/home/rbrash/this.file.no.exist': No such file or directory
$ echo $?
2
$ ls ~/.bashrc
/home/rbrash/.bashrc
$ echo $?
0

Notice the return values? In this example, they mean either success (0) or an error (non-zero; for ls, 1 and 2 indicate different kinds of error). These values are obtained by reading the $? variable, and we could even save the value in a variable like this:

$ ls ~/this.file.no.exist
ls: cannot access '/home/rbrash/this.file.no.exist': No such file or directory
$ TEST=$?
$ echo $TEST
2

From this example, we now know what return codes are, and how we can use them to utilize results returned from functions, scripts, and commands.

Return code 101

Dig into your terminal and create the following Bash script:

#!/bin/bash
GLOBAL_RET=255

function my_function_global() {
ls /home/${USER}/.bashrc
GLOBAL_RET=$?
}
function my_function_return() {
ls /home/${USER}/.bashrc
return $?
}
function my_function_str() {
local UNAME=$1
local OUTPUT=""
if [ -e /home/${UNAME}/.bashrc ]; then
OUTPUT='FOUND IT'
else
OUTPUT='NOT FOUND'
fi
echo ${OUTPUT}
}

echo "Current ret: ${GLOBAL_RET}"
my_function_global "${USER}"
echo "Current ret after: ${GLOBAL_RET}"
GLOBAL_RET=255
echo "Current ret: ${GLOBAL_RET}"
my_function_return "${USER}"
GLOBAL_RET=$?
echo "Current ret after: ${GLOBAL_RET}"

# And for giggles, we can pass back output too!
GLOBAL_RET=""
echo "Current ret: ${GLOBAL_RET}"
GLOBAL_RET=$(my_function_str ${USER})
# You could also use GLOBAL_RET=`my_function_str ${USER}`
# Notice the back ticks "`"
echo "Current ret after: $GLOBAL_RET"
exit 0

The script will output the following before exiting with a return code of 0 (remember that ls returns 0 if run successfully):

rbrash@moon:~$ bash test.sh
Current ret: 255
/home/rbrash/.bashrc
Current ret after: 0
Current ret: 255
/home/rbrash/.bashrc
Current ret after: 0
Current ret:
Current ret after: FOUND IT
$

In this section, there are three functions that leverage three concepts:

  1. my_function_global uses a global variable to return the command's return code
  2. my_function_return uses the reserved word return with a value (the command's return code)
  3. my_function_str uses a fork (a special operation) to execute a command and get the output (our string, which is echoed)

For option 3, there are several ways to get a string back from a function, including using the eval keyword. However, when using a fork, it is best to be aware of the resources it may consume when running the same command many times just to get the output.

Linking commands, pipes, and input/output

This section is probably one of the most important in the book because it describes a fundamental and powerful feature on Linux and Unix: the ability to use pipes and redirect input or output. By themselves, pipes are a fairly trivial feature - commands and scripts can redirect their output to files or commands. So what? This could be considered a massive understatement in the Bash scripting world, because pipes and redirection allow you to enhance commands with the functionality of other commands or features. 

Let's look into this with an example using commands called tail and grep. In this example, the user, Bob, wants to look at his logs in real time (live), but he only wants to find the entries related to the wireless interface. The name of Bob's wireless device can be found using the iwconfig command:

$ iwconfig
wlp3s0 IEEE 802.11abgn ESSID:"127.0.0.1-2.4ghz"
Mode:Managed Frequency:2.412 GHz Access Point: 18:D6:C7:FA:26:B1
Bit Rate=144.4 Mb/s Tx-Power=22 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Power Management:on
Link Quality=58/70 Signal level=-52 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:90 Missed beacon:0

The iwconfig command is now deprecated. The following commands will also give you wireless interface information:

$ iw dev                # This will give list of wireless interfaces
$ iw dev wlp3s0 link # This will give detailed information about a particular wireless interface

Now that Bob knows his wireless card's identifying name (wlp3s0), he can search his system's logs, which are usually found within /var/log/messages. Using the tail command and the -F flag, which continuously outputs new log lines to the console, Bob can see all the logs for his system. Unfortunately, he would like to filter the logs using grep, so that only lines containing the keyword wlp3s0 are visible.

Bob is faced with a choice: does he search the file continuously, or can he combine tail and grep together to get the results he desires? The answer is yes—using pipes!

$ tail -F /var/log/messages | grep wlp3s0
Nov 10 11:57:13 moon kernel: wlp3s0: authenticate with 18:d6:c7:fa:26:b1
Nov 10 11:57:13 moon kernel: wlp3s0: send auth to 18:d6:c7:fa:26:b1 (try 1/3)
Nov 10 11:57:13 moon kernel: wlp3s0: send auth to 18:d6:c7:fa:26:b1 (try 2/3)
...

As new logs come in, Bob can now monitor them in real time and can stop the console output using Ctrl+C.

Using pipes, we can combine commands into powerful hybrid commands, extending the best features of each command into one single line. Remember pipes!

The usage and flexibility of pipes should be relatively straightforward, but what about directing the input and output of commands? This requires the introduction of three commands to get information from one place to another:

  • stdin (standard in)
  • stdout (standard out)
  • stderr (standard error)

If we are thinking about a single program, stdin is any input that can be provided to it, usually either as a parameter or as user input, using read for example. Stdout and stderr are the two streams to which output is sent. Usually, output for both is sent to the console for display, but what if you only want the errors within the stderr stream to go to a file?

$ ls /filethatdoesntexist.txt 2> err.txt
$ ls ~/ > stdout.txt
$ ls ~/ > everything.txt 2>&1 # Gets stderr and stdout
$ ls ~/ >> everything.txt 2>&1 # Gets stderr and stdout
$ cat err.txt
ls: cannot access '/filethatdoesntexist.txt': No such file or directory
$ cat stdout.txt
.... # A whole bunch of files in your home directory

When we cat err.txt, we can see the error output from the stderr stream. This is useful when you only want to record errors and not everything being output to the console. The key feature to observe from the snippet is the usage of >, 2>, and 2>&1. With the arrows we can redirect the output to any file or even to other programs!

Take note of the difference between a single > and a double >>. A single > will truncate any file that output is directed to, while >> will append to the file.
There is a common error when redirecting both stderr and stdout to the same file: Bash needs to see the redirection of stdout to the file first, and then the duplication of stderr onto stdout (2>&1). For more information on file descriptors, see: https://en.wikipedia.org/wiki/File_descriptor
# This is correct
ls ~/ > everything.txt 2>&1
# This is erroneous
ls ~/ 2>&1 > everything.txt
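
The stdin side of the picture can be exercised with the read builtin; here is a minimal sketch that ties the three streams together (the prompt text, the existence check, and the script name mentioned afterwards are only examples):

#!/bin/bash
echo -n "Enter a filename to check: "
read FNAME                          # read a line from stdin into FNAME
if [ -e "${FNAME}" ]; then
echo "${FNAME} exists"              # message sent to stdout
else
echo "${FNAME} was not found" >&2   # message sent to stderr
fi

If this were saved as check_file.sh (a hypothetical name), running bash check_file.sh > out.txt 2> err.txt would capture the normal output (including the prompt) in out.txt and the error message in err.txt.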

Now that we know the basics of one of the most powerful features available in Bash, let's try an example: a redirection and pipes bonanza.

Redirection and pipe bonanza

Open a shell and create a new bash file in your favorite editor:

#!/bin/sh

# Let's run a command and send all of the output to /dev/null
echo "No output?"
ls ~/fakefile.txt > /dev/null 2>&1

# Retrieve output from a piped command
echo "part 1"
HISTORY_TEXT=`cat ~/.bashrc | grep HIST`
echo "${HISTORY_TEXT}"

# Output the results to history.config
echo "part 2"
echo "${HISTORY_TEXT}" > "history.config"

# Re-direct history.config as input to the cat command
cat < history.config

# Append a string to history.config
echo "MY_VAR=1" >> history.config

echo "part 3 - using Tee"
# Neato.txt will contain the same information as the console
ls -la ~/fakefile.txt ~/ 2>&1 | tee neato.txt

First, ls is used as a way of producing an error and, instead of pushing the erroneous output to the console, it is redirected to a special device in Linux called /dev/null. /dev/null is particularly useful, as it is a dump for any input that will not be used again. Then, we combine the cat command with grep through a pipe to find any lines of text containing HIST, and we use a fork to capture the output into a variable (HISTORY_TEXT).

Then, we echo the contents of HISTORY_TEXT to a file (history.config) using a stdout redirect. Using the history.config file, we redirect it as input to the cat command; the raw file contents will be displayed on the console.

Using a double >>, we append an arbitrary string to the history.config file.

Finally, we end the script with redirection of both stdout and stderr, a pipe, and the tee command. The tee command is useful because it can display content on the console even when that content is also being redirected to a file (as we just demonstrated).

Getting program input parameters 

Retrieving program input parameters or arguments is, at the most basic level, very similar to handling function parameters. They can be accessed in the same fashion: $1 (arg1), $2 (arg2), $3 (arg3), and so on. However, we have also seen a concept called flags, which allows you to pass options such as -l, --long-version, -v 10, or --verbosity=10. Flags are effectively a user-friendly way to pass parameters or arguments to a program at runtime. For example:

bash myProgram.sh -v 99 --name=Ron -l Brash

Now that you know what flags are and how they can be helpful to improve your script, use the following section as a template.

Passing your program flags

After going into your shell and opening a new file in your favorite editor, let's get started by creating a Bash script that does the following:

  • When no flags or arguments are specified, prints out a help message
  • When either the -h or --help flags are set, it prints out a help message
  • When the -f or --firstname flags are set, it sets the first name variable
  • When the -l or --lastname flags are set, it sets the last name variable
  • When both the firstname and lastname flags are set, it prints a welcome message and returns without error

In addition to the basic logic, we can see that the code leverages a piece of functionality called getopts. Getopts allows us to grab the program parameter flags for use within our program. The script also uses primitives that we have already learned: conditional logic, a while loop, and case/switch statements. Once a script develops into more than a simple utility or provides more than a single function, these basic Bash constructs become commonplace.

#!/bin/bash

HELP_STR="usage: $0 [-h] [-f] [-l] [--firstname[=]<value>] [--lastname[=]<value>] [--help]"

# Notice hidden variables and other built-in Bash functionality
optspec=":flh-:"
while getopts "$optspec" optchar; do
case "${optchar}" in
-)
case "${OPTARG}" in
firstname)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
FIRSTNAME="${val}"
;;
lastname)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
LASTNAME="${val}"
;;
help)
echo "${HELP_STR}" >&2
exit 2
;;
*)
if [ "$OPTERR" = 1 ] && [ "${optspec:0:1}" != ":" ]; then
echo "Found an unknown option --${OPTARG}" >&2
fi
;;
esac;;
f)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
FIRSTNAME="${val}"
;;
l)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
LASTNAME="${val}"
;;
h)
echo "${HELP_STR}" >&2
exit 2
;;
*)
if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
echo "Error parsing short flag: '-${OPTARG}'" >&2
exit 1
fi

;;
esac
done

# Do we have even one argument?
if [ -z "$1" ]; then
echo "${HELP_STR}" >&2
exit 2
fi

# Sanity check for both Firstname and Lastname
if [ -z "${FIRSTNAME}" ] || [ -z "${LASTNAME}" ]; then
echo "Both firstname and lastname are required!"
exit 3
fi

echo "Welcome ${FIRSTNAME} ${LASTNAME}!"

exit 0

When we execute the preceding program, we should expect responses similar to the following:

$ bash flags.sh 
usage: flags.sh [-h] [-f] [-l] [--firstname[=]<value>] [--lastname[=]<value>] [--help]
$ bash flags.sh -h
usage: flags.sh [-h] [-f] [-l] [--firstname[=]<value>] [--lastname[=]<value>] [--help]
$ bash flags.sh --fname Bob
Both firstname and lastname are required!
rbrash@moon:~$ bash flags.sh --firstname To -l Mater
Welcome To Mater!

Getting additional information about commands

As we progress, you may see this book use many commands extensively and without exhaustive explanations. Without polluting this entire book with an introduction to Linux and useful commands, there are a couple of commands available that are really handy: man and info.

The man command, or manual command, is quite extensive and even has multiple sections when the same entry exists in different categories. For the purposes of investigating executable programs or shell commands, categories 1 (user commands) and 8 (system administration commands) are usually what you need. Let's look at the entry for the mount command:

$ man mount
...
MOUNT(8) System Administration MOUNT(8)
NAME
mount - mount a filesystem
SYNOPSIS
mount [-l|-h|-V]
mount -a [-fFnrsvw] [-t fstype] [-O optlist]
mount [-fnrsvw] [-o options] device|dir
mount [-fnrsvw] [-t fstype] [-o options] device dir
DESCRIPTION
All files accessible in a Unix system are arranged in one big tree, the
file hierarchy, rooted at /. These files can be spread out over sev‐
eral devices. The mount command serves to attach the filesystem found
on some device to the big file tree. Conversely, the umount(8) command
will detach it again.
...
(Press 'q' to Quit)
$
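
Because the same name can exist in more than one section (printf, for example, is both a user command and a C library function), you can ask man for a specific section, or search across page names and descriptions:

$ man 1 printf    # section 1: user commands
$ man 3 printf    # section 3: C library functions (requires the C library man pages to be installed)
$ man -k printf   # search page names and descriptions (equivalent to apropos)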

Alternatively, there is the info command, which will give you information should info pages exist for the item you are looking for.

Getting used to the style of the man and info pages can easily save you time by allowing you to access information quickly, especially if you don't have internet access.

Summary

In this chapter, we introduced the concept of variables, types, and assignment. We also covered some basic Bash programming primitives, such as for loops, while loops, and case/switch statements. Later on, we learned what functions are, how they are used, and how to pass parameters.

In the next chapter, we will learn about several bolt-on technologies that make Bash even more extensible.

Estimated delivery fee Deliver to Czechia

Premium delivery 7 - 10 business days

€25.95
(Includes tracking information)

Product Details

Country selected
Publication date, Length, Edition, Language, ISBN-13
Publication date : Jul 31, 2018
Length: 264 pages
Edition : 1st
Language : English
ISBN-13 : 9781788629362
Languages :
Tools :

What do you get with Print?

Product feature icon Instant access to your digital eBook copy whilst your Print order is Shipped
Product feature icon Paperback book shipped to your preferred address
Product feature icon Download this book in EPUB and PDF formats
Product feature icon Access this title in our online reader with advanced features
Product feature icon DRM FREE - Read whenever, wherever and however you want
OR
Modal Close icon
Payment Processing...
tick Completed

Shipping Address

Billing Address

Shipping Methods
Estimated delivery fee Deliver to Czechia

Premium delivery 7 - 10 business days

€25.95
(Includes tracking information)

Product Details

Publication date : Jul 31, 2018
Length: 264 pages
Edition : 1st
Language : English
ISBN-13 : 9781788629362
Languages :
Tools :

Packt Subscriptions

See our plans and pricing
Modal Close icon
€18.99 billed monthly
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Simple pricing, no contract
€189.99 billed annually
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
Feature tick icon Exclusive print discounts
€264.99 billed in 18 months
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
Feature tick icon Exclusive print discounts

Frequently bought together


Stars icon
Total 94.97
Mastering Bash
€36.99
Bash Quick Start Guide
€24.99
Bash Cookbook
€32.99
Total 94.97 Stars icon
Banner background image

Table of Contents

9 Chapters
Crash Course in Bash Chevron down icon Chevron up icon
Acting Like a Typewriter and File Explorer Chevron down icon Chevron up icon
Understanding and Gaining File System Mastery Chevron down icon Chevron up icon
Making a Script Behave Like a Daemon Chevron down icon Chevron up icon
Scripts for System Administration Tasks Chevron down icon Chevron up icon
Scripts for Power Users Chevron down icon Chevron up icon
Writing Bash to Win and Profit Chevron down icon Chevron up icon
Advanced Scripting Techniques Chevron down icon Chevron up icon
Other Books You May Enjoy Chevron down icon Chevron up icon

Customer reviews

Rating distribution
Full star icon Empty star icon Empty star icon Empty star icon Empty star icon 1
(1 Ratings)
5 star 0%
4 star 0%
3 star 0%
2 star 0%
1 star 100%
Sophie Sep 30, 2018
Full star icon Empty star icon Empty star icon Empty star icon Empty star icon 1
Typos everywhere! Your example code doesn't even work!
Amazon Verified review Amazon
Get free access to Packt library with over 7500+ books and video courses for 7 days!
Start Free Trial

FAQs

What is the delivery time and cost of print book? Chevron down icon Chevron up icon

Shipping Details

USA:

'

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM UK time start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM UK time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

Customs duty or localized taxes may be applicable to shipments delivered to recipient countries outside of the EU27. These charges are levied by the recipient country, must be paid by the customer, and are not included in the shipping charges applied to the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, weight and dimensions, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
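The percentage arithmetic in the examples above is easy to reproduce in a shell if you want a rough estimate before ordering. The snippet below is only an illustrative sketch: the script name (duty.sh), its positional parameters, and the 19%/18% rates quoted from this FAQ are assumptions for demonstration; actual duty rates, thresholds, and calculation rules are set by the destination country.

  #!/bin/bash
  # duty.sh -- illustrative sketch only: estimate import duty as a flat
  # percentage of the declared order value. Real customs calculations vary
  # by country and may include thresholds, handling fees, and other rules.

  declared_value="$1"   # declared value of the order, e.g. 50
  rate_percent="$2"     # assumed duty rate in percent, e.g. 19

  # bc performs the decimal arithmetic that integer-only Bash cannot.
  echo "scale=2; $declared_value * $rate_percent / 100" | bc

For instance, ./duty.sh 50 19 prints 9.50 (matching the Mexican example) and ./duty.sh 22 18 prints 3.96 (matching the Turkish example).
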
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then when you receive it, you can contact us at customercare@packt.com using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e. where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e. during download), you should contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple-item order, then we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal