How-To Tutorials - Servers

Advanced User Management

Packt
30 Dec 2015
20 min read
This article, written by Bhaskarjyoti Roy, author of the book Mastering CentOS 7 Linux Server, introduces some advanced user and group management scenarios, along with examples of how to handle advanced options such as password aging and managing sudoers on a day-to-day basis. Here, we assume that CentOS 7 has already been installed successfully, with root and user credentials set up in the traditional way. Also, the command examples in this article assume you are logged in as, or have switched to, the root user.

The following topics will be covered:

- User and group management from the GUI and the command line
- Quotas
- Password aging
- Sudoers

Managing users and groups from the GUI and the command line

We can add a user to the system using useradd from the command line with a simple command, as follows:

useradd testuser

This creates a user entry in the /etc/passwd file and automatically creates the home directory for the user in /home. The /etc/passwd entry looks like this:

testuser:x:1001:1001::/home/testuser:/bin/bash

But, as we all know, the user is in a locked state and cannot log in to the system unless we add a password for the user using the command:

passwd testuser

This will, in turn, modify the /etc/shadow file, unlock the user at the same time, and the user will then be able to log in to the system. By default, the preceding set of commands creates both a user and a group called testuser on the system. What if we want a certain set of users to be part of a common group? We will use the -g option along with the useradd command to define the group for the user, but we have to make sure that the group already exists. So, to create users such as testuser1, testuser2, and testuser3 and make them part of a common group called testgroup, we will first create the group and then create the users using the -g or -G switch. So we will do this:

# To create the group:
groupadd testgroup

# To create the users with the above group, set a password, and unlock each user at the same time:
useradd testuser1 -G testgroup
passwd testuser1

useradd testuser2 -g 1002
passwd testuser2

Here, we have used both -g and -G. The difference between them is: with -G, we create the user with its own default group and also assign it to the common testgroup, whereas with -g, we create the user with testgroup as its only (primary) group. In both cases, we can use either the GID or the group name obtained from the /etc/group file.

There are a couple more options that we can use for advanced user creation; for example, for system users with a UID less than 500, we have to use the -r option, which will create a user on the system with a UID below 500. We can also use -u to define a specific UID, which must be unique and greater than 499. Common options that we can use with the useradd command are:

-c: Used for comments, generally to define the user's real name, such as -c "John Doe".
-d: Used to define the home directory; by default, it is created in /home, for example -d /var/<user name>.
-g: The group name or group number for the user's default group. The group must already have been created.
-G: Additional group names or group numbers, separated by commas, of which the user is a member. Again, these groups must also have been created earlier.
-r: Creates a system account with a UID less than 500 and without a home directory.
-u: The user ID for the user. It must be unique and greater than 499.
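To double-check the accounts and group memberships created above (the usernames are the examples from this section; output varies by system), a short verification could look like this:

# Show the new /etc/passwd and /etc/group entries
getent passwd testuser1 testuser2
getent group testgroup

# Show each user's UID, primary group, and supplementary groups
id testuser1    # primary group testuser1, supplementary group testgroup
id testuser2    # primary group testgroup (GID 1002)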
There are a few quick options that we use with the passwd command as well. These are:

-l: Locks the password for the user's account
-u: Unlocks the password for the user's account
-e: Expires the password for the user
-x: Defines the maximum number of days for the password lifetime
-n: Defines the minimum number of days for the password lifetime

Quotas

In order to control the disk space used on a Linux filesystem, we must use quota, which enables us to control disk space usage and thus helps us resolve low disk space issues to a great extent. For this, we have to enable user and group quotas on the Linux system. In CentOS 7, user and group quotas are not enabled by default, so we have to enable them first. To check whether quota is enabled or not, we issue the following command:

mount | grep ' / '

If the output contains noquota, the root filesystem is mounted without quota support. Now, we have to enable quota on the root (/) filesystem, and to do that, we first edit the file /etc/default/grub and add the following to GRUB_CMDLINE_LINUX:

rootflags=usrquota,grpquota

The GRUB_CMDLINE_LINUX line should then read as follows:

GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet rootflags=usrquota,grpquota"

Since we have to make these changes take effect, we should first back up the grub configuration using the following command:

cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.original

Now, we rebuild grub with the changes we just made using the command:

grub2-mkconfig -o /boot/grub2/grub.cfg

Next, reboot the system. Once it is up, log in and verify that quota is enabled using the command we used before:

mount | grep ' / '

It should now show us that quota is enabled, with an output as follows:

/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)

Now that quota is enabled, we will install the quota tools in order to operate quotas for different users and groups, and so on:

yum -y install quota

Once quota is installed, we check the current quota for users using the following command:

repquota -as

The preceding command reports user quotas in a human-readable format. There are two ways we can limit quota for users and groups: one is setting soft and hard limits for the amount of disk space used, and the other is limiting the number of files the user or group can create. In both cases, soft and hard limits are used. A soft limit warns the user when it is reached, while the hard limit is the limit that they cannot exceed.
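In addition to the interactive editor described next, limits can also be set non-interactively with setquota, which ships in the same quota package; this is a sketch with purely illustrative sizes (soft/hard block limits in 1 KB blocks, then soft/hard inode limits):

# 500 MB soft / 600 MB hard block limit for testuser1 on /, no inode limits
setquota -u testuser1 512000 614400 0 0 /

# The same kind of limit for the testgroup group
setquota -g testgroup 1024000 1228800 0 0 /

# Verify the result
repquota -as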
We will use the following command to modify a user quota:

edquota -u username

And the following command to modify a group quota:

edquota -g groupname

If you have other partitions mounted separately, you have to modify /etc/fstab to enable quota on those filesystems by adding usrquota and grpquota after defaults for each specific partition, for example to enable quota for the /var partition. Once you have finished enabling quota, remount the filesystem and run the following commands:

# To remount /var:
mount -o remount /var

# To enable quota:
quotacheck -avugm
quotaon -avug

Quota is something all system admins use to manage the disk space consumed on a server by users or groups and to limit overuse of that space. In this regard, you should plan before your installation and create partitions accordingly, so that the disk space is used properly. Multiple separate partitions, such as /var and /home, are always suggested, as these are generally the partitions that consume the most space on a Linux system. If we keep them on separate partitions, they will not eat up the root (/) filesystem space, which is more failsafe than having everything on a single root filesystem.

Password aging

It is good policy to enforce password aging so that users are forced to change their password at a certain interval. This, in turn, helps keep the system secure. We can use chage to configure a password to expire the first time the user logs in to the system.

Note: This process will not work if the user logs in to the system using SSH.

This method of using chage will ensure that the user is forced to change the password right away. If we use only chage <username>, it will display the current password aging values for the specified user and allow them to be changed interactively. The following steps need to be performed to accomplish password aging:

Lock the user. If the user doesn't exist, we will use the useradd command to create the user, but we will not assign any password so that it remains locked. If the user already exists on the system, we will use the usermod command to lock the user:

usermod -L <username>

Force an immediate password change using the following command:

chage -d 0 <username>

Unlock the account. This can be achieved in two ways: one is to assign an initial password and the other is to assign a null password. We will take the first approach, as the second one, though possible, is not good practice in terms of security. Therefore, here is what we do to assign an initial password:

Use the python command to start the command-line Python interpreter, and run:

import crypt; print crypt.crypt("Q!W@E#R$","Bing0000/")

Here, we have used the Q!W@E#R$ password with a salt made up of the alphanumeric characters Bing0000 followed by a / character. The output is the encrypted password, similar to 'BiagqBsi6gl1o'. Press Ctrl + D to exit the Python interpreter. At the shell, enter the following command with the encrypted output of the Python interpreter:

usermod -p "<encrypted-password>" <username>

So, here, in our case, if the username is testuser, we will use the following command:

usermod -p "BiagqBsi6gl1o" testuser

Now, upon first login using the "Q!W@E#R$" password, the user will be prompted for a new password.
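Put together, the forced password change for testuser could be scripted as below. The interpreter example above uses Python 2 print syntax; the python3 one-liner here is an assumed equivalent (the crypt module must be available) using the same example password and salt:

# Lock the existing account and force a password change at next login
usermod -L testuser
chage -d 0 testuser

# Generate the crypt hash (Python 3 syntax)
python3 -c 'import crypt; print(crypt.crypt("Q!W@E#R$", "Bing0000/"))'

# Assign the resulting hash; the user logs in with Q!W@E#R$ and must set a new password
usermod -p "BiagqBsi6gl1o" testuser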
Setting the password policy

This is a set of rules, defined in a few files, which must be followed when a system user sets a password. It is an important factor in security, because many security breaches have started with the cracking of user passwords. This is the reason why most organizations set a password policy for their users, and all usernames and passwords must comply with it. A password policy is usually defined by the following:

- Password aging
- Password length
- Password complexity
- Limits on login failures
- Limits on prior password reuse

Configuring password aging and password length

Password aging and password length are defined in /etc/login.defs. Aging basically means the maximum number of days a password may be used, the minimum number of days allowed between password changes, and the number of warnings given before the password expires. Length refers to the number of characters required when creating the password. To configure password aging and length, we edit the /etc/login.defs file and set the various PASS values according to the policy set by the organization.

Note: The password aging controls defined here do not affect existing users; they only affect newly created users. So, we must set these policies when setting up the system or the server at the beginning. The values we modify are:

PASS_MAX_DAYS: The maximum number of days a password can be used
PASS_MIN_DAYS: The minimum number of days allowed between password changes
PASS_MIN_LEN: The minimum acceptable password length
PASS_WARN_AGE: The number of days of warning given before a password expires

Configuring password complexity and limiting reused passwords

By editing the /etc/pam.d/system-auth file, we can configure the password complexity and the number of reused passwords to deny. Password complexity refers to the mix of characters required in the password, and reused password denial refers to rejecting a given number of passwords that the user has used in the past. By setting the complexity, we force the use of a required number of uppercase characters, lowercase characters, digits, and symbols in a password. The password will be rejected by the system until the complexity set by the rules is met. We do this using the following settings:

Force uppercase characters in passwords: ucredit=-X, where X is the number of uppercase characters required in the password
Force lowercase characters in passwords: lcredit=-X, where X is the number of lowercase characters required in the password
Force digits in passwords: dcredit=-X, where X is the number of digits required in the password
Force the use of symbols in passwords: ocredit=-X, where X is the number of symbols required in the password

For example:

password requisite pam_cracklib.so try_first_pass retry=3 type= ucredit=-2 lcredit=-2 dcredit=-2 ocredit=-2

Deny reused passwords: remember=X, where X is the number of past passwords to deny. For example:

password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
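As a concrete illustration (the values below are arbitrary examples chosen for this sketch, not recommended settings), a policy of a 90-day maximum age, 7-day minimum, 8-character minimum length, and 7 days of warning, combined with the two PAM lines above, could look like this:

# /etc/login.defs (excerpt) -- illustrative values only
PASS_MAX_DAYS   90
PASS_MIN_DAYS   7
PASS_MIN_LEN    8
PASS_WARN_AGE   7

# /etc/pam.d/system-auth, password section (excerpt)
password    requisite     pam_cracklib.so try_first_pass retry=3 type= ucredit=-2 lcredit=-2 dcredit=-2 ocredit=-2
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5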
Configuring login failures

We set the number of login failures allowed for a user in the /etc/pam.d/password-auth, /etc/pam.d/system-auth, and /etc/pam.d/login files. When a user's failed login attempts exceed the number defined here, the account is locked and only a system administrator can unlock it. To configure this, make the following additions to the files; the deny=X parameter is what configures it, where X is the number of failed login attempts allowed. Add these two lines to the /etc/pam.d/password-auth and /etc/pam.d/system-auth files, and only the first line to the /etc/pam.d/login file:

auth        required    pam_tally2.so file=/var/log/tallylog deny=3 no_magic_root unlock_time=300
account     required    pam_tally2.so

To see failures, use the following command:

pam_tally2 --user=<User Name>

To reset the failure count and enable the user to log in again, use the following command:

pam_tally2 --user=<User Name> --reset

Sudoers

Separation of user privileges is one of the main features of Linux operating systems. Normal users operate in limited-privilege sessions to limit the scope of their influence on the entire system. One special user that we already know about exists on Linux: root, which has super-user privileges. This account doesn't have the restrictions that apply to normal users. Users can execute commands with super-user or root privileges in a number of different ways. There are mainly three ways to obtain root privileges on a system:

- Log in to the system as root.
- Log in to the system as any user and then use the su - command. This will ask you for the root password and, once authenticated, will give you a root shell session. We can leave this root shell using Ctrl + D or the exit command, which returns us to our normal user shell.
- Run commands with root privileges using sudo, without spawning a root shell or logging in as root. The sudo command works as follows:

sudo <command to execute>

Unlike su, sudo requests the password of the user calling the command, not the root password. sudo does not work by default and needs to be set up before it functions correctly. In the following section, we will see how to configure sudo and modify the /etc/sudoers file so that it works the way we want it to.

visudo

sudo is configured through the /etc/sudoers file, and visudo is the command that enables us to edit that file.

Note: This file should not be edited with a normal text editor, to avoid potential race conditions when updating it alongside other processes. Instead, the visudo command should be used.

The visudo command opens a text editor as usual, but validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations. By default, visudo opens the /etc/sudoers file in the vi editor, but we can configure it to use the nano text editor instead. For that, we have to make sure nano is already installed, or we can install it using:

yum install nano -y

Now, we can change the editor to nano by editing the ~/.bashrc file:

export EDITOR=/usr/bin/nano

Then, source the file using:

. ~/.bashrc

Now, we can use visudo with nano to edit the /etc/sudoers file.
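Before saving changes, it is also worth knowing that visudo can validate a sudoers file non-interactively; this is not covered in the text above, but uses standard visudo options:

# Check the syntax of the live /etc/sudoers file
visudo -c

# Check a candidate file before putting it in place (the path is only an example)
visudo -c -f /root/sudoers.new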
So, let's open the /etc/sudoers file using visudo and learn a few things. We can use different kinds of aliases for different sets of commands, software, services, users, groups, and so on. For example:

Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

and many more. We can use these aliases to assign a set of command execution rights to a user or a group. For example, if we want to assign the NETWORKING set of commands to the group netadmin, we will define:

%netadmin ALL = NETWORKING

Or, if we want to allow the wheel group users to run all commands, we use the following line:

%wheel  ALL=(ALL)  ALL

If we want a specific user, john, to get access to all commands, we use the following line:

john  ALL=(ALL)  ALL

We can create different groups of users, with overlapping membership:

User_Alias      GROUPONE = abby, brent, carl
User_Alias      GROUPTWO = brent, doris, eric
User_Alias      GROUPTHREE = doris, felicia, grant

These alias names must start with a capital letter. We can then allow members of GROUPTWO to update the yum database and run all the commands assigned to the SOFTWARE alias above by creating a rule like this:

GROUPTWO    ALL = SOFTWARE

If we do not specify a user or group to run as, sudo defaults to the root user. We can allow members of GROUPTHREE to shut down and reboot the machine by creating a command alias and using it in a rule for GROUPTHREE:

Cmnd_Alias      POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE  ALL = POWER

We create a command alias called POWER that contains commands to power off and reboot the machine, and then allow the members of GROUPTHREE to execute those commands. We can also create Runas aliases, which can replace the portion of the rule that specifies the user to execute the command as:

Runas_Alias     WEB = www-data, apache
GROUPONE    ALL = (WEB) ALL

This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user. Just keep in mind that later rules override previous rules when there is a conflict between the two. There are a number of ways to achieve more control over how sudo handles a command. Here are some examples:

The updatedb command associated with the mlocate package is relatively harmless. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:

GROUPONE    ALL = NOPASSWD: /usr/bin/updatedb

NOPASSWD is a tag that means no password will be requested. It has a companion tag called PASSWD, which is the default behavior. A tag applies to the rest of the rule unless overruled by its twin tag later down the line. For instance, we can have a line like this:

GROUPTWO    ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill

In this case, a user can run the updatedb command as root without a password, but entering the user's own password will be required for running the kill command. Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs. For example, some programs, such as less, can spawn other commands by typing this from within their interface:

!command_to_run

This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous. To restrict this, we could use a line like this:

username    ALL = NOEXEC: /usr/bin/less

We should now have a clear understanding of what sudo is and how we modify and grant access rights using visudo. There is much more to it: you can check the default /etc/sudoers file, which has a good number of examples, using the visudo command, or you can read the sudoers manual as well. One point to remember is that root privileges are not given to regular users often.
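A quick way to see the effect of such rules (not shown in the original text, but using sudo's standard listing option, with the example names from above) is to ask sudo what a given user may run:

# As root, list the sudo rules that apply to a particular user
sudo -l -U brent

# From the user's own session, the same check is simply:
sudo -l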
It is important for us to understand what these commands do when we execute them with root privileges. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.

Reference

Now, let's take a look at the major reference used throughout the chapter: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/index.html

Summary

In all, we learned about some advanced user management and how to manage users through the command line, along with password aging, quota, exposure to /etc/sudoers, and how to modify it using visudo. User and password management is a regular task that a system administrator performs on servers, and it plays a very important role in the overall security of the system.

Further resources on this subject:

- SELinux - Highly Secured Web Hosting for Python-based Web Applications
- A Peek Under the Hood – Facts, Types, and Providers
- Puppet Language and Style
Nginx service

Packt
20 Oct 2015
15 min read
In this article by Clement Nedelcu, author of the book Nginx HTTP Server - Third Edition, we discuss the stages that follow a successful build and installation of Nginx. The default location for the output files is /usr/local/nginx.

Daemons and services

The next step is obviously to execute Nginx. However, before doing so, it's important to understand the nature of this application. There are two types of computer applications: those that require immediate user input and thus run in the foreground, and those that do not and thus run in the background. Nginx is of the latter type, often referred to as a daemon. Daemon names usually come with a trailing d, and a couple of examples can be mentioned here: httpd, the HTTP server daemon, is the name given to Apache under several Linux distributions; named is the name server daemon; and crond is the task scheduler, although, as you will notice, this is not the case for Nginx. When started from the command line, a daemon immediately returns the prompt and, in most cases, does not even bother outputting data to the terminal. Consequently, when starting Nginx, you will not see any text appear on the screen and the prompt will return immediately. While this might seem startling, it is, on the contrary, a good sign. It means the daemon was started correctly and the configuration did not contain any errors.

User and group

It is of utmost importance to understand the process architecture of Nginx, and particularly the users and groups its various processes run under. A very common source of trouble when setting up Nginx is invalid file access permissions: due to a user or group misconfiguration, you often end up getting 403 Forbidden HTTP errors because Nginx cannot access the requested files. There are two levels of processes, with possibly different permission sets:

The Nginx master process: This should be started as root. In most Unix-like systems, processes started with the root account are allowed to open TCP sockets on any port, whereas other users can only open listening sockets on a port above 1024. If you do not start Nginx as root, standard ports such as 80 or 443 will not be accessible. Note that the user directive, which allows you to specify a different user and group for the worker processes, will not be taken into consideration for the master process.

The Nginx worker processes: These are automatically spawned by the master process under the account you specified in the configuration file with the user directive. The configuration setting takes precedence over the configuration switch you may have specified at compile time. If you did not specify either of those, the worker processes will be started as user nobody and group nobody (or nogroup, depending on your OS).
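As a minimal illustration of the worker account setting (an assumed excerpt; it presumes a conventional nginx user and group already exist on the system):

# /usr/local/nginx/conf/nginx.conf (excerpt)
user  nginx nginx;       # account and group for the worker processes
worker_processes  auto;  # size the worker pool to the CPU count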
Nginx command-line switches

The Nginx binary accepts command-line arguments to perform various operations, among which is controlling the background processes. To get the full list of commands, you may invoke the help screen using the following commands:

[alex@example.com ~]$ cd /usr/local/nginx/sbin
[alex@example.com sbin]$ ./nginx -h

The next few sections will describe the purpose of these switches. Some allow you to control the daemon; some let you perform various operations on the application configuration.

Starting and stopping the daemon

You can start Nginx by running the Nginx binary without any switches. If the daemon is already running, a message will show up indicating that a socket is already listening on the specified port:

[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[…]
[emerg]: still could not bind().

Beyond this point, you may control the daemon by stopping it, restarting it, or simply reloading its configuration. Controlling is done by sending signals to the process using the nginx -s command:

nginx -s stop: Stops the daemon immediately (using the TERM signal)
nginx -s quit: Stops the daemon gracefully (using the QUIT signal)
nginx -s reopen: Reopens the log files
nginx -s reload: Reloads the configuration

Note that when starting the daemon, stopping it, or performing any of the preceding operations, the configuration file is first parsed and verified. If the configuration is invalid, whatever command you have submitted will fail, even when trying to stop the daemon. In other words, in some cases you will not be able to even stop Nginx if the configuration file is invalid. An alternate way to terminate the process, in desperate cases only, is to use the kill or killall commands with root privileges:

[root@example.com ~]# killall nginx

Testing the configuration

As you can imagine, testing the validity of your configuration will become crucial if you constantly tweak your server setup. The slightest mistake in any of the configuration files can result in a loss of control over the service: you will then be unable to stop it via regular init control commands and, obviously, it will refuse to start again. Consequently, the following command will be useful to you on many occasions; it allows you to check the syntax, validity, and integrity of your configuration:

[alex@example.com ~]$ /usr/local/nginx/sbin/nginx -t

The -t switch stands for test configuration. Nginx will parse the configuration anew and let you know whether it is valid or not. A valid configuration file does not necessarily mean Nginx will start, though, as there might be additional problems such as socket issues, invalid paths, or incorrect access permissions. Obviously, manipulating your configuration files while your server is in production is a dangerous thing to do and should be avoided when possible. The best practice, in this case, is to place your new configuration into a separate temporary file and run the test on that file. Nginx makes this possible by offering the -c switch:

[alex@example.com sbin]$ ./nginx -t -c /home/alex/test.conf

This command will parse /home/alex/test.conf and make sure it is a valid Nginx configuration file. When you are done, after making sure that your new file is valid, proceed to replacing your current configuration file and reload the server configuration:

[alex@example.com sbin]$ cp -i /home/alex/test.conf /usr/local/nginx/conf/nginx.conf
cp: erase 'nginx.conf' ? yes
[alex@example.com sbin]$ ./nginx -s reload
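When this check is scripted, it helps to chain the steps so the configuration is only replaced and reloaded if the test passes; a small sketch using the same paths as above:

# Replace and reload only if the candidate file passes the syntax check
/usr/local/nginx/sbin/nginx -t -c /home/alex/test.conf \
  && cp /home/alex/test.conf /usr/local/nginx/conf/nginx.conf \
  && /usr/local/nginx/sbin/nginx -s reload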
Other switches

Another switch that might come in handy in many situations is -V. Not only does it tell you the current Nginx build version, but more importantly, it also reminds you of the arguments that you used during the configuration step; in other words, the command switches that you passed to the configure script before compilation:

[alex@example.com sbin]$ ./nginx -V
nginx version: nginx/1.8.0 (Ubuntu)
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
TLS SNI support enabled
configure arguments: --with-http_ssl_module

In this case, Nginx was configured with the --with-http_ssl_module switch only. Why is this so important? Well, if you ever try to use a module that was not included with the configure script during the precompilation process, the directive enabling the module will result in a configuration error. Your first reaction will be to wonder where the syntax error comes from. Your second reaction will be to wonder whether you even built the module in the first place! Running nginx -V will answer this question. Additionally, the -g option lets you specify additional configuration directives in case they were not included in the configuration file:

[alex@example.com sbin]$ ./nginx -g "timer_resolution 200ms;"

Adding Nginx as a system service

In this section, we will create a script that will transform the Nginx daemon into an actual system service. This will have two main outcomes: the daemon will be controllable using standard commands and, more importantly, it will automatically be launched on system startup and stopped on system shutdown.

System V scripts

Most Linux-based operating systems to date use a System V style init daemon. In other words, their startup process is managed by a daemon called init, which functions in a way that is inherited from the old System V Unix-based operating system. This daemon functions on the principle of runlevels, which represent the state of the computer. Here is a table representing the various runlevels and their meaning:

Runlevel  State
0         System is halted
1         Single-user mode (rescue mode)
2         Multiuser mode, without NFS support
3         Full multiuser mode
4         Not used
5         Graphical interface mode
6         System reboot

You can manually initiate a runlevel transition: use the telinit 0 command to shut down your computer or telinit 6 to reboot it. For each runlevel transition, a set of services is executed. This is the key concept to understand here: when your computer is stopped, its runlevel is 0. When you turn it on, there will be a transition from runlevel 0 to the default startup runlevel. The default startup runlevel is defined by your own system configuration (in the /etc/inittab file), and the default value depends on the distribution you are using: Debian and Ubuntu use runlevel 2, Red Hat and Fedora use runlevel 3 or 5, CentOS and Gentoo use runlevel 3, and so on; the list is long. So, in summary, when you start your computer running CentOS, it operates a transition from runlevel 0 to runlevel 3. That transition consists of starting all services that are scheduled for runlevel 3. The question is how to schedule a service to be started at a specific runlevel. For each runlevel, there is a directory containing scripts to be executed. If you enter these directories (rc0.d, rc1.d, up to rc6.d), you will not find actual files, but rather symbolic links referring to scripts located in the init.d directory. Service startup scripts will indeed be placed in init.d, and links will be created by tools that place them in the proper directories.
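To make the runlevel mechanics concrete, here is a hedged sketch; directory names vary slightly between distributions, and the nginx links will only exist once the service has been installed as described in the next sections:

# Show the previous and current runlevel
runlevel

# On a runlevel-3 system, services scheduled for that level appear here as symlinks;
# S* links are run with start, K* links with stop
ls -l /etc/rc3.d/ | grep -i nginx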
About init scripts

An init script, also known as a service startup script or even a sysv script, is a shell script respecting a certain standard. The script controls a daemon application by responding to commands such as start, stop, and others, which are triggered at two levels. First, when the computer starts, if the service is scheduled to be started for the system runlevel, the init daemon runs the script with the start argument. The other possibility is for you to manually execute the script by calling it from the shell:

[root@example.com ~]# service httpd start

Or, if your system does not come with the service command:

[root@example.com ~]# /etc/init.d/httpd start

The script must accept at least the start, stop, restart, force-reload, and status commands, as they will be used by the system to, respectively, start up, shut down, restart, forcefully reload the service, or inquire about its status. However, to enlarge your field of action as a system administrator, it is often interesting to provide further options, such as a reload argument to reload the service configuration, or a try-restart argument to stop and start the service again. Note that since service httpd start and /etc/init.d/httpd start essentially do the same thing, with the exception that the second command will work on all operating systems, we will make no further mention of the service command and will exclusively use the /etc/init.d/ method.

Init script for Debian-based distributions

We will thus create a shell script to start and stop our Nginx daemon, and also to restart and reload it. The purpose here is not to discuss Linux shell script programming, so we will merely provide the source code of an existing init script, along with some comments to help you understand it. Due to differences in the format of init scripts from one distribution to another, we will discover two separate scripts here. The first one is meant for Debian-based distributions such as Debian, Ubuntu, Knoppix, and so forth. First, create a file called nginx with the text editor of your choice, and save it in the /etc/init.d/ directory (on some systems, /etc/init.d/ is actually a symbolic link to /etc/rc.d/init.d/). In the file you just created, insert the script provided in the code bundle supplied with this book. Make sure that you change the paths to make them correspond to your actual setup. You will need root permissions to save the script into the init.d directory. The complete init script for Debian-based distributions can be found in the code bundle.

Init script for Red Hat–based distributions

Due to the system tools, shell programming functions, and specific formatting that it requires, the preceding script is only compatible with Debian-based distributions. If your server is operated by a Red Hat–based distribution such as CentOS, Fedora, and many more, you will need an entirely different script. The complete init script for Red Hat–based distributions can be found in the code bundle.
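Purely to illustrate the overall shape such a script takes (a stripped-down sketch under simplified assumptions, not the full script from the book's code bundle; PID handling and LSB/chkconfig headers are omitted):

#!/bin/sh
# Minimal illustrative nginx init script skeleton
NGINX=/usr/local/nginx/sbin/nginx

case "$1" in
  start)   $NGINX ;;
  stop)    $NGINX -s quit ;;
  restart) $NGINX -s quit; sleep 1; $NGINX ;;
  reload)  $NGINX -s reload ;;
  status)  pgrep -x nginx >/dev/null && echo "nginx is running" || echo "nginx is stopped" ;;
  *)       echo "Usage: $0 {start|stop|restart|reload|status}"; exit 1 ;;
esac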
We will cover the two most popular families—Debian, Ubuntu, or other Debian-based distributions and Red Hat/Fedora/CentOS, or other Red Hat–derived systems.

Debian-based distributions
For Debian-based distributions, a simple command will enable the init script for the system runlevel:
[root@example.com ~]# update-rc.d -f nginx defaults
This command will create links in the default system runlevel folders. For the reboot and shutdown runlevels, the script will be executed with the stop argument; for all other runlevels, the script will be executed with start. You can now restart your system and see your Nginx service being launched during the boot sequence.

Red Hat–based distributions
For the Red Hat–based systems family, the command differs, but you get an additional tool to manage system startup. Adding the service can be done via the following command:
[root@example.com ~]# chkconfig nginx on
Once that is done, you can then verify the runlevels for the service:
[root@example.com ~]# chkconfig --list nginx
nginx 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Another tool that will be useful to you for managing system services is ntsysv. It lists all services scheduled to be executed on system startup and allows you to enable or disable them at will. The tool ntsysv requires root privileges to be executed. Note that prior to using ntsysv, you must first run the chkconfig nginx on command, otherwise Nginx will not appear in the list of services.

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed to you directly.

NGINX Plus
Since mid-2013, NGINX, Inc., the company behind the Nginx project, also offers a paid subscription called NGINX Plus. The announcement came as a surprise for the open source community, but several companies quickly jumped on the bandwagon and reported amazing improvements in terms of performance and scalability after using NGINX Plus. The original announcement read:
"NGINX, Inc., the high performance web company, today announced the availability of NGINX Plus, a fully-supported version of the popular NGINX open source software complete with advanced features and offered with professional services. The product is developed and supported by the core engineering team at Nginx Inc., and is available immediately on a subscription basis. As business requirements continue to evolve rapidly, such as the shift to mobile and the explosion of dynamic content on the Web, CIOs are continuously looking for opportunities to increase application performance and development agility, while reducing dependencies on their infrastructure. NGINX Plus provides a flexible, scalable, uniformly applicable solution that was purpose built for these modern, distributed application architectures."
Considering the pricing plans ($1,500 per year per instance) and the additional features made available, this platform is indeed clearly aimed at large corporations looking to integrate Nginx into their global architecture seamlessly and effortlessly. Professional support from the Nginx team is included and discounts can be offered for multiple-instance subscriptions. This book covers the open source version of Nginx only and does not detail advanced functionality offered by NGINX Plus. For more information about the paid subscription, take a look at http://www.nginx.com.
Summary
From this point on, Nginx is installed on your server and automatically starts with the system. Your web server is functional, though it does not yet fulfill its most basic purpose: serving a website. The first step towards hosting a website will be to prepare a suitable configuration file.
Resources for Article:
Further resources on this subject: Getting Started with Nginx [article], Fine-tune the NGINX Configuration [article], Nginx proxy module [article]
Finding useful information

Packt
16 Oct 2015
22 min read
In this article, written by Benjamin Cane, author of the book Red Hat Enterprise Linux Troubleshooting Guide, the author explains that before starting to explore troubleshooting commands, we should first cover the locations of useful information. Useful information is a bit of a ubiquitous term; pretty much every file, directory, or command can provide useful information. What he really plans to cover are places where it is possible to find information for almost any issue. (For more resources related to this topic, see here.)

Log files
Log files are often the first place to start looking for troubleshooting information. Whenever a service or server is experiencing an issue, checking the log files for errors can often answer many questions quickly.

The default location
By default, RHEL and most Linux distributions keep their log files in /var/log/, which is actually part of the Filesystem Hierarchy Standard (FHS) maintained by the Linux Foundation (http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard). However, while /var/log/ might be the default location, not all log files are located there. While /var/log/httpd/ is the default location for Apache logs, this location can be changed with Apache's configuration files. This is especially common when Apache was installed outside of the standard RHEL package. Like Apache, most services allow for custom log locations. It is not uncommon to find custom directories or file systems outside of /var/log created specifically for log files.

Common log files
The following is a short list of common log files and a description of what you can find within them. Do keep in mind that this list is specific to Red Hat Enterprise Linux 7, and while other Linux distributions might follow similar conventions, they are not guaranteed.

/var/log/messages: By default, this log file contains all syslog messages (except e-mail) of INFO or higher priority.
/var/log/secure: This log file contains authentication-related messages such as SSH logins, user creations, and sudo violations and privilege escalation.
/var/log/cron: This log file contains a history of crond executions as well as start and end times of cron.daily, cron.weekly, and other executions.
/var/log/maillog: This log file is the default log location of mail events. If using postfix, this is the default location for all postfix-related messages.
/var/log/httpd/: This log directory is the default location for Apache logs. While this is the default location, it is not a guaranteed location for all Apache logs.
/var/log/mysql.log: This log file is the default log file for mysqld. Much like the httpd logs, this is the default and can be changed easily.
/var/log/sa/: This directory contains the results of the sa commands that run every 10 minutes by default.

For many issues, one of the first log files to review is the /var/log/messages log. On RHEL systems, this log file receives all system logs of INFO priority or higher. In general, this means that any significant event sent to syslog would be captured in this log file. The following is a sample of some of the log messages that can be found in /var/log/messages:
Dec 24 18:03:51 localhost systemd: Starting Network Manager Script Dispatcher Service...
Dec 24 18:03:51 localhost dbus-daemon: dbus[620]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 24 18:03:51 localhost dbus[620]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 24 18:03:51 localhost systemd: Started Network Manager Script Dispatcher Service. Dec 24 18:06:06 localhost kernel: e1000: enp0s3 NIC Link is Down Dec 24 18:06:06 localhost kernel: e1000: enp0s8 NIC Link is Down Dec 24 18:06:06 localhost NetworkManager[750]: <info> (enp0s3): link disconnected (deferring action for 4 seconds) Dec 24 18:06:06 localhost NetworkManager[750]: <info> (enp0s8): link disconnected (deferring action for 4 seconds) Dec 24 18:06:10 localhost NetworkManager[750]: <info> (enp0s3): link disconnected (calling deferred action) Dec 24 18:06:10 localhost NetworkManager[750]: <info> (enp0s8): link disconnected (calling deferred action) Dec 24 18:06:12 localhost kernel: e1000: enp0s3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Dec 24 18:06:12 localhost kernel: e1000: enp0s8 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Dec 24 18:06:12 localhost NetworkManager[750]: <info> (enp0s3): link connected Dec 24 18:06:12 localhost NetworkManager[750]: <info> (enp0s8): link connected Dec 24 18:06:39 localhost kernel: atkbd serio0: Spurious NAK on isa0060/serio0. Some program might be trying to access hardware directly. Dec 24 18:07:10 localhost systemd: Starting Session 53 of user root. Dec 24 18:07:10 localhost systemd: Started Session 53 of user root. Dec 24 18:07:10 localhost systemd-logind: New session 53 of user root. As we can see, there are more than a few log messages within this sample that could be useful while troubleshooting issues. Finding logs that are not in the default location Many times log files are not in /var/log/, which can be either because someone modified the log location to some place apart from the default, or simply because the service in question defaults to another location. In general, there are three ways to find log files not in /var/log/. Checking syslog configuration If you know a service is using syslog for its logging, the best place to check to find which log file its messages are being written to is the rsyslog configuration files. The rsyslog service has two locations for configuration. The first is the /etc/rsyslog.d directory. The /etc/rsyslog.d directory is an include directory for custom rsyslog configurations. The second is the /etc/rsyslog.conf configuration file. This is the main configuration file for rsyslog and contains many of the default syslog configurations. The following is a sample of the default contents of /etc/rsyslog.conf: #### RULES #### # Log all kernel messages to the console. # Logging much else clutters up the screen. #kern.* /dev/console # Log anything (except mail) of level info or higher. # Don't log private authentication messages! *.info;mail.none;authpriv.none;cron.none /var/log/messages # The authpriv file has restricted access. authpriv.* /var/log/secure # Log all the mail messages in one place. mail.* -/var/log/maillog # Log cron stuff cron.* /var/log/cron By reviewing the contents of this file, it is fairly easy to identify which log files contain the information required, if not, at least, the possible location of syslog managed log files. Checking the application's configuration Not every application utilizes syslog; for those that don't, one of the easiest ways to find the application's log file is to read the application's configuration files. 
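If the service is already running, there is also a shortcut that skips the configuration files entirely: ask the kernel which files the process currently has open and look for anything log-like. The following one-liner is a rough sketch; it assumes the lsof package is installed and uses Apache (httpd) purely as an example:
# lsof -p $(pgrep -o httpd) | grep -i log
When the process is not running, however, the configuration files remain your best source of information, and grep makes short work of them.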
A quick and useful method for finding log file locations from configuration files is to use the grep command to search the file for the word log: $ grep log /etc/samba/smb.conf # files are rotated when they reach the size specified with "max log size". # log files split per-machine: log file = /var/log/samba/log.%m # maximum size of 50KB per log file, then rotate: max log size = 50 The grep command is a very useful command that can be used to search files or directories for specific strings or patterns. The simplest command can be seen in the preceding snippet where the grep command is used to search the /etc/samba/smb.conf file for any instance of the pattern "log". After reviewing the output of the preceding grep command, we can see that the configured log location for samba is /var/log/samba/log.%m. It is important to note that %m, in this example, is actually replaced with a "machine name" when creating the file. This is actually a variable within the samba configuration file. These variables are unique to each application but this method for making dynamic configuration values is a common practice. Other examples The following are examples of using the grep command to search for the word "log" in the Apache and MySQL configuration files: $ grep log /etc/httpd/conf/httpd.conf # ErrorLog: The location of the error log file. # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. ErrorLog "logs/error_log" $ grep log /etc/my.cnf # log_bin log-error=/var/log/mysqld.log In both instances, this method was able to identify the configuration parameter for the service's log file. With the previous three examples, it is easy to see how effective searching through configuration files can be. Using the find command The find command, is another useful method for finding log files. The find command is used to search a directory structure for specified files. A quick way of finding log files is to simply use the find command to search for any files that end in ".log": # find /opt/appxyz/ -type f -name "*.log" /opt/appxyz/logs/daily/7-1-15/alert.log /opt/appxyz/logs/daily/7-2-15/alert.log /opt/appxyz/logs/daily/7-3-15/alert.log /opt/appxyz/logs/daily/7-4-15/alert.log /opt/appxyz/logs/daily/7-5-15/alert.log The preceding is generally considered a last resort solution, and is mostly used when the previous methods do not produce results. When executing the find command, it is considered a best practice to be very specific about which directory to search. When being executed against very large directories, the performance of the server can be degraded. Configuration files As discussed previously, configuration files for an application or service can be excellent sources of information. While configuration files won't provide you with specific errors such as log files, they can provide you with critical information (for example, enabled/disabled features, output directories, and log file locations). Default system configuration directory In general, system, and service configuration files are located within the /etc/ directory on most Linux distributions. However, this does not mean that every configuration file is located within the /etc/ directory. In fact, it is not uncommon for applications to include a configuration directory within the application's home directory. So how do you know when to look in the /etc/ versus an application directory for configuration files? 
A general rule of thumb is, if the package is part of the RHEL distribution, it is safe to assume that the configuration is within the /etc/ directory. Anything else may or may not be present in the /etc/ directory. For these situations, you simply have to look for them. Finding configuration files In most scenarios, it is possible to find system configuration files within the /etc/ directory with a simple directory listing using the ls command: $ ls -la /etc/ | grep my -rw-r--r--. 1 root root 570 Nov 17 2014 my.cnf drwxr-xr-x. 2 root root 64 Jan 9 2015 my.cnf.d The preceding code snippet uses ls to perform a directory listing and redirects that output to grep in order to search the output for the string "my". We can see from the output that there is a my.cnf configuration file and a my.cnf.d configuration directory. The MySQL processes use these for its configuration. We were able to find these by assuming that anything related to MySQL would have the string "my" in it. Using the rpm command If the configuration files were deployed as part of a RPM package, it is possible to use the rpm command to identify configuration files. To do this, simply execute the rpm command with the –q (query) flag, and the –c (configfiles) flag, followed by the name of the package: $ rpm -q -c httpd /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.modules.d/00-base.conf /etc/httpd/conf.modules.d/00-dav.conf /etc/httpd/conf.modules.d/00-lua.conf /etc/httpd/conf.modules.d/00-mpm.conf /etc/httpd/conf.modules.d/00-proxy.conf /etc/httpd/conf.modules.d/00-systemd.conf /etc/httpd/conf.modules.d/01-cgi.conf /etc/httpd/conf/httpd.conf /etc/httpd/conf/magic /etc/logrotate.d/httpd /etc/sysconfig/htcacheclean /etc/sysconfig/httpd The rpm command is used to manage RPM packages and is a very useful command when troubleshooting. We will cover this command further as we explore commands for troubleshooting. Using the find command Much like finding log files, to find configuration files on a system, it is possible to utilize the find command. When searching for log files, the find command was used to search for all files where the name ends in ".log". In the following example, the find command is being used to search for all files where the name begins with "http". This find command should return at least a few results, which will provide configuration files related to the HTTPD (Apache) service: # find /etc -type f -name "http*" /etc/httpd/conf/httpd.conf /etc/sysconfig/httpd /etc/logrotate.d/httpd The preceding example searches the /etc directory; however, this could also be used to search any application home directory for user configuration files. Similar to searching for log files, using the find command to search for configuration files is generally considered a last resort step and should not be the first method used. The proc filesystem An extremely useful source of information is the proc filesystem. This is a special filesystem that is maintained by the Linux kernel. The proc filesystem can be used to find useful information about running processes, as well as other system information. 
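Much of what you will use day to day lives either at the top level of /proc, for system-wide information, or under /proc/<pid>/ for a specific process. As a sketch, with 1234 standing in for the PID of a process you are investigating (run this as root if the process belongs to another user), you can read the process's command line and see which binary and working directory it is using:
# cat /proc/1234/cmdline | tr '\0' ' '; echo
# ls -l /proc/1234/cwd /proc/1234/exe
The cmdline file is NUL-delimited, which is why it is passed through tr to make it readable. System-wide details are just as easy to read.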
For example, if we wanted to identify the filesystems supported by a system, we could simply read the /proc/filesystems file: $ cat /proc/filesystems nodev sysfs nodev rootfs nodev bdev nodev proc nodev cgroup nodev cpuset nodev tmpfs nodev devtmpfs nodev debugfs nodev securityfs nodev sockfs nodev pipefs nodev anon_inodefs nodev configfs nodev devpts nodev ramfs nodev hugetlbfs nodev autofs nodev pstore nodev mqueue nodev selinuxfs xfs nodev rpc_pipefs nodev nfsd This filesystem is extremely useful and contains quite a bit of information about a running system. The proc filesystem will be used throughout the troubleshooting steps. It is used in various ways while troubleshooting everything from specific processes, to read-only filesystems. Troubleshooting commands This section will cover frequently used troubleshooting commands that can be used to gather information from the system or a running service. While it is not feasible to cover every possible command, the commands used do cover fundamental troubleshooting steps for Linux systems. Command-line basics The troubleshooting steps used are primarily command-line based. While it is possible to perform many of these things from a graphical desktop environment, the more advanced items are command-line specific. As such, the reader has at least a basic understanding of Linux. To be more specific, we assumes that the reader has logged into a server via SSH and is familiar with basic commands such as cd, cp, mv, rm, and ls. For those who might not have much familiarity, I wanted to quickly cover some basic command-line usage that will be required. Command flags Many readers are probably familiar with the following command: $ ls -la total 588 drwx------. 5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c Most should recognize that this is the ls command and it is used to perform a directory listing. What might not be familiar is what exactly the –la part of the command is or does. To understand this better, let's look at the ls command by itself: $ ls app.c application app.py bomber.py index.html lookbusy-1.4 lookbusy-1.4.tar.gz lotsofiles The previous execution of the ls command looks very different from the previous. The reason for this is because the latter is the default output for ls. The –la portion of the command is what is commonly referred to as command flags or options. The command flags allow a user to change the default behavior of the command providing it with specific options. In fact, the –la flags are two separate options, –l and –a; they can even be specified separately: $ ls -l -a total 588 drwx------. 5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c We can see from the preceding snippet that the output of ls –la is exactly the same as ls –l –a. For common commands, such as the ls command, it does not matter if the flags are grouped or separated, they will be parsed in the same way. Will show both grouped and ungrouped. If grouping or ungrouping is performed for any specific reason it will be called out; otherwise, the grouping or ungrouping used for visual appeal and memorization. In addition to grouping and ungrouping, we will also show flags in their long format. In the previous examples, we showed the flag -a, this is known as a short flag. This same option can also be provided in the long format --all: $ ls -l --all total 588 drwx------. 
5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c The –a and the --all flags are essentially the same option; it can simply be represented in both short and long form. One important thing to remember is that not every short flag has a long form and vice versa. Each command has its own syntax, some commands only support the short form, others only support the long form, but many support both. In most cases, the long and short flags will both be documented within the commands man page. Piping command output Another common command-line practice that will be used several times is piping output. Specifically, examples such as the following: $ ls -l --all | grep app -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c -rwxrwxr-x. 1 vagrant vagrant 29390 May 18 00:47 application -rw-rw-r--. 1 vagrant vagrant 1198 Jun 10 17:03 app.py In the preceding example, the output of the ls -l --all command is piped to the grep command. By placing | or the pipe character between the two commands, the output of the first command is "piped" to the input for the second command. The example preceding the ls command will be executed; with that, the grep command will then search that output for any instance of the pattern "app". Piping output to grep will actually be used quite often, as it is a simple way to trim the output into a maintainable size. Many times the examples will also contain multiple levels of piping: $ ls -la | grep app | awk '{print $4,$9}' vagrant app.c vagrant application vagrant app.py In the preceding code the output of ls -la is piped to the input of grep; however, this time, the output of grep is also piped to the input of awk. While many commands can be piped to, not every command supports this. In general, commands that accept user input from files or command-line also accept piped input. As with the flags, a command's man page can be used to identify whether the command accepts piped input or not. Gathering general information When managing the same servers for a long time, you start to remember key information about those servers. Such as the amount of physical memory, the size and layout of their filesystems, and what processes should be running. However, when you are not familiar with the server in question it is always a good idea to gather this type of information. The commands in this section are commands that can be used to gather this type of general information. w – show who is logged on and what they are doing Early in my systems administration career, I had a mentor who used to tell me I always run w when I log into a server. This simple tip has actually been very useful over and over again in my career. The w command is simple; when executed it will output information such as system uptime, load average, and who is logged in: # w 04:07:37 up 14:26, 2 users, load average: 0.00, 0.01, 0.05 USER TTY LOGIN@ IDLE JCPU PCPU WHAT root tty1 Wed13 11:24m 0.13s 0.13s -bash root pts/0 20:47 1.00s 0.21s 0.19s -bash This information can be extremely useful when working with unfamiliar systems. The output can be useful even when you are familiar with the system. With this command, you can see: When this system was last rebooted:04:07:37 up 14:26:This information can be extremely useful; whether it is an alert for a service like Apache being down, or a user calling in because they were locked out of the system. 
When these issues are caused by an unexpected reboot, the reported issue does not often include this information. By running the w command, it is easy to see the time elapsed since the last reboot. The load average of the system:load average: 0.00, 0.01, 0.05:The load average is a very important measurement of system health. To summarize it, the load average is the average number of processes in a wait state over a period of time. The three numbers in the output of w represent different times.The numbers are ordered from left to right as 1 minute, 5 minutes, and 15 minutes. Who is logged in and what they are running: USER TTY LOGIN@ IDLE JCPU PCPU WHAT root tty1 Wed13 11:24m 0.13s 0.13s -bash The final piece of information that the w command provides is users that are currently logged in and what command they are executing. This is essentially the same output as the who command, which includes the user logged in, when they logged in, how long they have been idle, and what command their shell is running. The last item in that list is extremely important. Oftentimes, when working with big teams, it is common for more than one person to respond to an issue or ticket. By running the w command immediately after login, you will see what other users are doing, preventing you from overriding any troubleshooting or corrective steps the other person has taken. rpm – RPM package manager The rpm command is used to manage Red Hat package manager (RPM). With this command, you can install and remove RPM packages, as well as search for packages that are already installed. We saw earlier how the rpm command can be used to look for configuration files. The following are several additional ways we can use the rpm command to find critical information. Listing all packages installed Often when troubleshooting services, a critical step is identifying the version of the service and how it was installed. To list all RPM packages installed on a system, simply execute the rpm command with -q (query) and -a (all): # rpm -q -a kpatch-0.0-1.el7.noarch virt-what-1.13-5.el7.x86_64 filesystem-3.2-18.el7.x86_64 gssproxy-0.3.0-9.el7.x86_64 hicolor-icon-theme-0.12-7.el7.noarch The rpm command is a very diverse command with many flags. In the preceding example the -q and -a flags are used. The -q flag tells the rpm command that the action being taken is a query; you can think of this as being put into a "search mode". The -a or --all flag tells the rpm command to list all packages. A useful feature is to add the --last flag to the preceding command, as this causes the rpm command to list the packages by install time with the latest being first. Listing all files deployed by a package Another useful rpm function is to show all of the files deployed by a specific package: # rpm -q --filesbypkg kpatch-0.0-1.el7.noarch kpatch /usr/bin/kpatch kpatch /usr/lib/systemd/system/kpatch.service In the preceding example, we again use the -q flag to specify that we are running a query, along with the --filesbypkg flag. The --filesbypkg flag will cause the rpm command to list all of the files deployed by the specified package. This example can be very useful when trying to identify a service's configuration file location. Using package verification In this third example, we are going to use an extremely useful feature of rpm, verify. The rpm command has the ability to verify whether or not the files deployed by a specified package have been altered from their original contents. 
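Before moving on to verification, here is the --last flag mentioned above in action; this is just a sketch, and the packages and dates shown will obviously differ from system to system:
# rpm -q -a --last | head -5
This lists the five most recently installed or updated packages, which is a quick way to spot a recent change that lines up with when a problem started. Now, back to verifying package contents.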
To verify a package, we will use the -V (verify) flag:
# rpm -V httpd
S.5....T. c /etc/httpd/conf/httpd.conf
In the preceding example, we simply run the rpm command with the -V flag followed by a package name. As the -q flag is used for querying, the -V flag is for verifying. With this command, we can see that only the /etc/httpd/conf/httpd.conf file was listed; this is because rpm will only output files that have been altered. In the first column of this output, we can see which verification checks the file failed. While this column is a bit cryptic at first, the rpm man page has a useful table (as shown in the following list) explaining what each character means:
S: This means that the file size differs
M: This means that the mode differs (includes permissions and file type)
5: This means that the digest (formerly MD5 sum) differs
D: This indicates a device major/minor number mismatch
L: This indicates a readLink(2) path mismatch
U: This means that the user ownership differs
G: This means that the group ownership differs
T: This means that mTime differs
P: This means that caPabilities differ
Using this list, we can see that the httpd.conf file's size, MD5 sum, and mtime (modify time) are not what was deployed by the httpd package. This means that it is highly likely that the httpd.conf file has been modified after installation. While the rpm command might not seem like a troubleshooting command at first, the preceding examples show just how powerful a troubleshooting tool it can be. With these examples, it is simple to identify important files and whether or not those files have been modified from the deployed version.

Summary
Overall, we learned that log files, configuration files, and the /proc filesystem are key sources of information during troubleshooting. We also covered the basic use of many fundamental troubleshooting commands. You also might have noticed that quite a few commands are also used in day-to-day life for nontroubleshooting purposes. While these commands might not explain the issue themselves, they can help gather information about the issue, which leads to a more accurate and quick resolution. Familiarity with these fundamental commands is critical to your success during troubleshooting.
Resources for Article:
Further resources on this subject: Linux Shell Scripting [article], Embedded Linux and Its Elements [article], Installing Red Hat CloudForms on Red Hat OpenStack [article]
Creating TFS Scheduled Jobs

Packt
28 Sep 2015
12 min read
In this article by Gordon Beeming, the author of the book, Team Foundation Server 2015 Customization, we are going to cover TFS scheduled jobs. The topics that we are going to cover include: Writing a TFS Job Deploying a TFS Job Removing a TFS Job You would want to write a scheduled job for any logic that needs to be run at specific times, whether it is at certain increments or at specific times of the day. A scheduled job is not the place to put logic that you would like to run as soon as some other event, such as a check-in or a work item change, occurs. It will automatically link change sets to work items based on the comments. (For more resources related to this topic, see here.) The project setup First off, we'll start with our project setup. This time, we'll create a Windows console application. Creating a new windows console application The references that we'll need this time around are: Microsoft.VisualStudio.Services.WebApi.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.Framework.Server.dll All of these can be found in C:Program FilesMicrosoft Team Foundation Server 14.0Application TierTFSJobAgent on the TFS server. That's all the setup that is required for your TFS job project. Any class that inherit ITeamFoundationJobExtension will be able to be used for a TFS Job. Writing the TFS job So, as mentioned, we are going to need a class that inherits from ITeamFoundationJobExtension. Let's create a class called TfsCommentsToChangeSetLinksJob and inherit from ITeamFoundationJobExtension. As part of this, we will need to implement the Run method, which is part of an interface, like this: public class TfsCommentsToChangeSetLinksJob : ITeamFoundationJobExtension { public TeamFoundationJobExecutionResult Run( TeamFoundationRequestContext requestContext, TeamFoundationJobDefinition jobDefinition, DateTime queueTime, out string resultMessage) { throw new NotImplementedException(); } } Then, we also add the using statement: using Microsoft.TeamFoundation.Framework.Server; Now, for this specific extension, we'll need to add references to the following: Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.VersionControl.Client.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll All of these can be found in C:Program FilesMicrosoft Team Foundation Server 14.0Application TierTFSJobAgent. Now, for the logic of our plugin, we use the following code inside of the Run method as a basic shell, where we'll then place the specific logic for this plugin. This basic shell will be adding a try catch block, and at the end of the try block, it will return a successful job run. We'll then add to the job message what exception may be thrown and returning that the job failed: resultMessage = string.Empty; try { // place logic here return TeamFoundationJobExecutionResult.Succeeded; } catch (Exception ex) { resultMessage += "Job Failed: " + ex.ToString(); return TeamFoundationJobExecutionResult.Failed; } Along with this code, you will need the following using function: using Microsoft.TeamFoundation; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.VersionControl.Client; using Microsoft.TeamFoundation.WorkItemTracking.Client; using System.Linq; using System.Text.RegularExpressions; So next, we need to place some logic specific to this job in the try block. 
First, let's create a connection to TFS for version control: TfsTeamProjectCollection tfsTPC = TfsTeamProjectCollectionFactory.GetTeamProjectCollection( new Uri("http://localhost:8080/tfs")); VersionControlServer vcs = tfsTPC.GetService<VersionControlServer>(); Then, we will query the work item store's history and get the last 25 check-ins: WorkItemStore wis = tfsTPC.GetService<WorkItemStore>(); // get the last 25 check ins foreach (Changeset changeSet in vcs.QueryHistory("$/", RecursionType.Full, 25)) { // place the next logic here } Now that we have the changeset history, we are going to check the comments for any references to work items using a simple regex expression: //try match the regex for a hash number in the comment foreach (Match match in Regex.Matches((changeSet.Comment ?? string.Empty), @"#d{1,}")) { // place the next logic here } Getting into this loop, we'll know that we have found a valid number in the comment and that we should attempt to link the check-in to that work item. But just the fact that we have found a number doesn't mean that the work item exists, so let's try find a work item with the found number: int workItemId = Convert.ToInt32(match.Value.TrimStart('#')); var workItem = wis.GetWorkItem(workItemId); if (workItem != null) { // place the next logic here } Here, we are checking to make sure that the work item exists so that if the workItem variable is not null, then we'll proceed to check whether a relationship for this changeSet and workItem function already exists: //now create the link ExternalLink changesetLink = new ExternalLink( wis.RegisteredLinkTypes[ArtifactLinkIds.Changeset], changeSet.ArtifactUri.AbsoluteUri); //you should verify if such a link already exists if (!workItem.Links.OfType<ExternalLink>() .Any(l => l.LinkedArtifactUri == changeSet.ArtifactUri.AbsoluteUri)) { // place the next logic here } If a link does not exist, then we can add a new link: changesetLink.Comment = "Change set " + $"'{changeSet.ChangesetId}'" + " auto linked by a server plugin"; workItem.Links.Add(changesetLink); workItem.Save(); resultMessage += $"Linked CS:{changeSet.ChangesetId} " + $"to WI:{workItem.Id}"; We just have the extra bit here so as to get the last 25 change sets. If you were using this for production, you would probably want to store the last change set that you processed and then get history up until that point, but I don't think it's needed to illustrate this sample. Then, after getting the list of change sets, we basically process everything 100 percent as before. We check whether there is a comment and whether that comment contains a hash number that we can try linking to a changeSet function. We then check whether a workItem function exists for the number that we found. Next, we add a link to the work item from the changeSet function. Then, for each link we add to the overall resultMessage string so that when we look at the results from our job running, we can see which links were added automatically for us. As you can see, with this approach, we don't interfere with the check-in itself but rather process this out-of-hand way of linking changeSet to work with items at a later stage. Deploying our TFS Job Deploying the code is very simple; change the project's Output type to Class Library. This can be done by going to the project properties, and then in the Application tab, you will see an Output type drop-down list. Now, build your project. 
Then, copy the TfsJobSample.dll and TfsJobSample.pdb output files to the scheduled job plugins folder, which is C:Program FilesMicrosoft Team Foundation Server 14.0Application TierTFSJobAgentPlugins. Unfortunately, simply copying the files into this folder won't make your scheduled job automatically installed, and the reason for this is that as part of the interface of the scheduled job, you don't specify when to run your job. Instead, you register the job as a separate step. Change Output type back to Console Application option for the next step. You can, and should, split the TFS job from its installer into different projects, but in our sample, we'll use the same one. Registering, queueing, and deregistering a TFS Job If you try install the job the way you used to in TFS 2013, you will now get the TF400444 error: TF400444: The creation and deletion of jobs is no longer supported. You may only update the EnabledState or Schedule of a job. Failed to create, delete or update job id 5a7a01e0-fff1-44ee-88c3-b33589d8d3b3 This is because they have made some changes to the job service, for security reasons, and these changes prevent you from using the Client Object Model. You are now forced to use the Server Object Model. The code that you have to write is slightly more complicated and requires you to copy your executable to multiple locations to get it working properly. Place all of the following code in your program.cs file inside the main method. We start off by getting some arguments that are passed through to the application, and if we don't get at least one argument, we don't continue: #region Collect commands from the args if (args.Length != 1 && args.Length != 2) { Console.WriteLine("Usage: TfsJobSample.exe <command "+ "(/r, /i, /u, /q)> [job id]"); return; } string command = args[0]; Guid jobid = Guid.Empty; if (args.Length > 1) { if (!Guid.TryParse(args[1], out jobid)) { Console.WriteLine("Job Id not a valid Guid"); return; } } #endregion We then wrap all our logic in a try catch block, and for our catch, we only write the exception that occurred: try { // place logic here } catch (Exception ex) { Console.WriteLine(ex.ToString()); } Place the next steps inside the try block, unless asked to do otherwise. As part of using the Server Object Model, you'll need to create a DeploymentServiceHost. This requires you to have a connection string to the TFS Configuration database, so make sure that the connection string set in the following is valid for you. 
We also need some other generic path information, so we'll mimic what we could expect the job agents' paths to be: #region Build a DeploymentServiceHost string databaseServerDnsName = "localhost"; string connectionString = $"Data Source={databaseServerDnsName};"+ "Initial Catalog=TFS_Configuration;Integrated Security=true;"; TeamFoundationServiceHostProperties deploymentHostProperties = new TeamFoundationServiceHostProperties(); deploymentHostProperties.HostType = TeamFoundationHostType.Deployment | TeamFoundationHostType.Application; deploymentHostProperties.Id = Guid.Empty; deploymentHostProperties.PhysicalDirectory = @"C:Program FilesMicrosoft Team Foundation Server 14.0"+ @"Application TierTFSJobAgent"; deploymentHostProperties.PlugInDirectory = $@"{deploymentHostProperties.PhysicalDirectory}Plugins"; deploymentHostProperties.VirtualDirectory = "/"; ISqlConnectionInfo connInfo = SqlConnectionInfoFactory.Create(connectionString, null, null); DeploymentServiceHost host = new DeploymentServiceHost(deploymentHostProperties, connInfo, true); #endregion Now that we have a TeamFoundationServiceHost function, we are able to create a TeamFoundationRequestContext function . We'll need it to call methods such as UpdateJobDefinitions, which adds and/or removes our job, and QueryJobDefinition, which is used to queue our job outside of any schedule: using (TeamFoundationRequestContext requestContext = host.CreateSystemContext()) { TeamFoundationJobService jobService = requestContext.GetService<TeamFoundationJobService>() // place next logic here } We then create a new TeamFoundationJobDefinition instance with all of the information that we want for our TFS job, including the name, schedule, and enabled state: var jobDefinition = new TeamFoundationJobDefinition( "Comments to Change Set Links Job", "TfsJobSample.TfsCommentsToChangeSetLinksJob"); jobDefinition.EnabledState = TeamFoundationJobEnabledState.Enabled; jobDefinition.Schedule.Add(new TeamFoundationJobSchedule { ScheduledTime = DateTime.Now, PriorityLevel = JobPriorityLevel.Normal, Interval = 300, }); Once we have the job definition, we check what the command was and then execute the code that will relate to that command. For the /r command, we will just run our TFS job outside of the TFS job agent: if (command == "/r") { string resultMessage; new TfsCommentsToChangeSetLinksJob().Run(requestContext, jobDefinition, DateTime.Now, out resultMessage); } For the /i command, we will install the TFS job: else if (command == "/i") { jobService.UpdateJobDefinitions(requestContext, null, new[] { jobDefinition }); } For the /u command, we will uninstall the TFS Job: else if (command == "/u") { jobService.UpdateJobDefinitions(requestContext, new[] { jobid }, null); } Finally, with the /q command, we will queue the TFS job to be run inside the TFS job agent and outside of its schedule: else if (command == "/q") { jobService.QueryJobDefinition(requestContext, jobid); } Now that we have this code in the program.cs file, we need to compile the project and then copy TfsJobSample.exe and TfsJobSample.pdb to the TFS Tools folder, which is C:Program FilesMicrosoft Team Foundation Server 14.0Tools. Now open a cmd window as an administrator. Change the directory to the Tools folder and then run your application with a /i command, as follows: Installing the TFS Job Now, you have successfully installed the TFS Job. To uninstall it or force it to be queued, you will need the job ID. 
But basically you have to run /u with the job ID to uninstall, like this: Uninstalling the TFS Job You will be following the same approach as prior for queuing, simply specifying the /q command and the job ID. How do I know whether my TFS Job is running? The easiest way to check whether your TFS Job is running or not is to check out the job history table in the configuration database. To do this, you will need the job ID (we spoke about this earlier), which you can obtain by running the following query against the TFS_Configuration database: SELECT JobId FROM Tfs_Configuration.dbo.tbl_JobDefinition WITH ( NOLOCK ) WHERE JobName = 'Comments to Change Set Links Job' With this JobId, we will then run the following lines to query the job history: SElECT * FROM Tfs_Configuration.dbo.tbl_JobHistory WITH (NOLOCK) WHERE JobId = '<place the JobId from previous query here>' This will return you a list of results about the previous times the job was run. If you see that your job has a Result of 6 which is extension not found, then you will need to stop and restart the TFS job agent. You can do this by running the following commands in an Administrator cmd window: net stop TfsJobAgent net start TfsJobAgent Note that when you stop the TFS job agent, any jobs that are currently running will be terminated. Also, they will not get a chance to save their state, which, depending on how they were written, could lead to some unexpected situations when they start again. After the agent has started again, you will see that the Result field is now different as it is a job agent that will know about your job. If you prefer browsing the web to see the status of your jobs, you can browse to the job monitoring page (_oi/_jobMonitoring#_a=history), for example, http://gordon-lappy:8080/tfs/_oi/_jobMonitoring#_a=history. This will give you all the data that you can normally query but with nice graphs and grids. Summary In this article, we looked at how to write, install, uninstall, and queue a TFS Job. You learned that the way we used to install TFS Jobs will no longer work for TFS 2015 because of a change in the Client Object Model for security. Resources for Article: Further resources on this subject: Getting Started with TeamCity[article] Planning for a successful integration[article] Work Item Querying [article]
Virtualization

Packt
16 Sep 2015
16 min read
This article by Skanda Bhargav, the author of Troubleshooting Ubuntu Server, deals with virtualization techniques—why virtualization is important and how administrators can install and serve users with services via virtualization. We will learn about KVM, Xen, and Qemu. So sit back and let's take a spin into the virtual world of Ubuntu. (For more resources related to this topic, see here.) What is virtualization? Virtualization is a technique by which you can convert a set of files into a live running machine with an OS. It is easy to set up one machine and much easier to clone and replicate the same machine across hardware. Also, each of the clones can be customized based on requirements. We will look at setting up a virtual machine using Kernel-based Virtual Machine, Xen, and Qemu in the sections that follow. Today, people are using the power of virtualization in different situations and environments. Developers use virtualization in order to have an independent environment in which to safely test and develop applications without affecting other working environments. Administrators are using virtualization to separate services and also commission or decommission services as and when required or requested. By default, Ubuntu supports the Kernel-based Virtual Machine (KVM), which has built-in extensions for AMD and Intel-based processors. Xen and Qemu are the options suggested where you have hardware that does not have extensions for virtualization. libvirt The libvirt library is an open source library that is helpful for interfacing with different virtualization technologies. One small task before starting with libvirt is to check your hardware support extensions for KVM. The command to do so is as follows: kvm-ok You will see a message stating whether or not your CPU supports hardware virtualization. An additional task would be to verify the BIOS settings for virtualization and activate it. Installation Use the following command to install the package for libvirt: sudo apt-get install kvm libvirt-bin Next, you will need to add the user to the group libvirt. This will ensure that user gets additional options for networking. The command is as follows: sudo adduser $USER libvirtd We are now ready to install a guest OS. Its installation is very similar to that of installing a normal OS on the hardware. If your virtual machine needs a graphical user interface (GUI), you can make use of an application virt-viewer and connect using VNC to the virtual machine's console. We will be discussing the virt-viewer and its uses in the later sections of this article. virt-install virt-install is a part of the python-virtinst package. 
The command to install this package is as follows: sudo apt-get install python-virtinst One of the ways of using virt-install is as follows: sudo virt-install -n new_my_vm -r 256 -f new_my_vm.img -s 4 -c jeos.iso --accelerate --connect=qemu:///system --vnc --noautoconsole -v Let's understand the preceding command part by part: -n: This specifies the name of virtual machine that will be created -r: This specifies the RAM amount in MBs -f: This is the path for the virtual disk -s: This specifies the size of the virtual disk -c: This is the file to be used as virtual CD, but it can be an .iso file as well --accelerate: This is used to make use of kernel acceleration technologies --vnc: This exports the guest console via vnc --noautoconsole: This disables autoconnect for the virtual machine console -v: This creates a fully virtualized guest Once virt-install is launched, you may connect to console with virt-viewer utility from remote connections or locally using GUI. Use to wrap long text to next line. virt-clone One of the applications to clone a virtual machine to another is virt-clone. Cloning is a process of creating an exact replica of the virtual machine that you currently have. Cloning is helpful when you need a lot of virtual machines with same configuration. Here is an example of cloning a virtual machine: sudo virt-clone -o my_vm -n new_vm_clone -f /path/to/ new_vm_clone.img --connect=qemu:///sys Let's understand the preceding command part by part: -o: This is the original virtual machine that you want to clone -n: This is the new virtual machine name -f: This is the new virtual machine's file path --connect: This specifies the hypervisor to be used Managing the virtual machine Let's see how to manage the virtual machine we installed using virt. virsh Numerous utilities are available for managing virtual machines and libvirt; virsh is one such utility that can be used via command line. Here are a few examples: The following command lists the running virtual machines: virsh -c qemu:///system list The following command starts a virtual machine: virsh -c qemu:///system start my_new_vm The following command starts a virtual machine at boot: virsh -c qemu:///system autostart my_new_vm The following command restarts a virtual machine: virsh -c qemu:///system reboot my_new_vm You can save the state of virtual machine in a file. It can be restored later. Note that once you save the virtual machine, it will not be running anymore. The following command saves the state of the virtual machine: virsh -c qemu://system save my_new_vm my_new_vm-290615.state The following command restores a virtual machine from saved state: virsh -c qemu:///system restore my_new_vm-290615.state The following command shuts down a virtual machine: virsh -c qemu:///system shutdown my_new_vm The following command mounts a CD-ROM in the virtual machine: virsh -c qemu:///system attach-disk my_new_vm /dev/cdrom /media/cdrom The virtual machine manager A GUI-type utility for managing virtual machines is virt-manager. You can manage both local and remote virtual machines. The command to install the package is as follows: sudo apt-get install virt-manager The virt-manager works on a GUI environment. Hence, it is advisable to install it on a remote machine other than the production cluster, as production cluster should be used for doing the main tasks. 
The command to connect the virt-manager to a local server running libvirt is as follows: virt-manager -c qemu:///system If you want to connect the virt-manager from a different machine, then first you need to have SSH connectivity. This is required as libvirt will ask for a password on the machine. Once you have set up passwordless authentication, use the following command to connect manager to server: virt-manager -c qemu+ssh://virtnode1.ubuntuserver.com/system Here, the virtualization server is identified with the hostname ubuntuserver.com. The virtual machine viewer A utility for connecting to your virtual machine's console is virt-viewer. This requires a GUI to work with the virtual machine. Use the following command to install virt-viewer: sudo apt-get install virt-viewer Now, connect to your virtual machine console from your workstation using the following command: virt-viewer -c qemu:///system my_new_vm You may also connect to a remote host using SSH passwordless authentication by using the following command: virt-viewer -c qemu+ssh://virtnode4.ubuntuserver.com/system my_new_vm JeOS JeOS, short for Just Enough Operation System, is pronounced as "Juice" and is an operating system in the Ubuntu flavor. It is specially built for running virtual applications. JeOS is no longer available as a downloadable ISO CD-ROM. However, you can pick up any of the following approaches: Get a server ISO of the Ubuntu OS. While installing, hit F4 on your keyboard. You will see a list of items and select the one that reads Minimal installation. This will install the JeOS variant. Build your own copy with vmbuilder from Ubuntu. The kernel of JeOS is specifically tuned to run in virtual environments. It is stripped off of the unwanted packages and has only the base ones. JeOS takes advantage of the technological advancement in VMware products. A powerful combination of limited size with performance optimization is what makes JeOS a preferred OS over a full server OS in a large virtual installation. Also, with this OS being so light, the updates and security patches will be small and only limited to this variant. So, the users who are running their virtual applications on the JeOS will have less maintenance to worry about compared to a full server OS installation. vmbuilder The second way of getting the JeOS is by building your own copy of Ubuntu; you need not download any ISO from the Internet. The beauty of vmbuilder is that it will get the packages and tools based on your requirements. Then, build a virtual machine with these and the whole process is quick and easy. Essentially, vmbuilder is a script that will automate the process of creating a virtual machine, which can be easily deployed. Currently, the virtual machines built with vmbuilder are supported on KVM and Xen hypervisors. Using command-line arguments, you can specify what additional packages you require, remove the ones that you feel aren't necessary for your needs, select the Ubuntu version, and do much more. Some developers and admins contributed to the vmbuilder and changed the design specifics, but kept the commands same. Some of the goals were as follows: Reusability by other distributions Plugin feature added for interactions, so people can add logic for other environments A web interface along with CLI for easy access and maintenance Setup Firstly, we will need to set up libvirt and KVM before we use vmbuilder. libvirt was covered in the previous section. Let's now look at setting up KVM on your server. 
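Before pulling in the packages, it is worth confirming once more that the host CPU exposes the virtualization extensions and that they are enabled in the BIOS. A quick sketch of such a check is shown here; a count of 0 from the first command means no extensions were found, and kvm-ok (which we used earlier) will report whether KVM acceleration can actually be used:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
$ kvm-ok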
We will install some additional packages along with the KVM package, and one of them is for enabling X server on the machine. The command that you will need to run on your Ubuntu server is as follows: sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils The output of this command will be as follows: Let's look at what each of the packages mean: libvirt-bin: This is used by libvirtd for administration of KVM and Qemu qemu-kvm: This runs in the background ubuntu-vm-builder: This is a tool for building virtual machines from the command line bridge-utils: This enables networking for various virtual machines Adding users to groups You will have to add the user to the libvirtd command; this will enable them to run virtual machines. The command to add the current user is as follows: sudo adduser `id -un` libvirtd The output is as follows:   Installing vmbuilder Download the latest vmbuilder called python-vm-builder. You may also use the older ubuntu-vm-builder, but there are slight differences in the syntax. The command to install python-vm-builder is as follows: sudo apt-get install python-vm-builder The output will be as follows:   Defining the virtual machine While defining the virtual machine that you want to build, you need to take care of the following two important points: Do not assume that the enduser will know the technicalities of extending the disk size of virtual machine if the need arises. Either have a large virtual disk so that the application can grow or document the process to do so. However, it would be better to have your data stored in an external storage device. Allocating RAM is fairly simple. But remember that you should allocate your virtual machine an amount of RAM that is safe to run your application. To check the list of parameters that vmbuilder provides, use the following command: vmbuilder ––help   The two main parameters are virtualization technology, also known as hypervisor, and targeted distribution. The distribution we are using is Ubuntu 14.04, which is also known as trusty because of its codename. The command to check the release version is as follows: lsb_release -a The output is as follows:   Let's build a virtual machine on the same version of Ubuntu. Here's an example of building a virtual machine with vmbuilder: sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system Now, we will discuss what the parameters mean: --suite: This specifies which Ubuntu release we want the virtual machine built on --flavour: This specifies which virtual kernel to use to build the JeOS image --arch: This specifies the processor architecture (64 bit or 32 bit) -o: This overwrites the previous version of the virtual machine image --libvirt: This adds the virtual machine to the list of available virtual machines Now that we have created a virtual machine, let's look at the next steps. JeOS installation We will examine the settings that are required to get our virtual machine up and running. IP address A good practice for assigning IP address to the virtual machines is to set a fixed IP address, usually from the private pool. Then, include this info as part of the documentation. 
We will define an IP address with the following parameters: --ip (address): This is the IP address in dotted form --mask (value): This is the IP mask in dotted form (default is 255.255.255.0) --net (value): This is the IP net address (default is X.X.X.0) --bcast (value): This is the IP broadcast (default is X.X.X.255) --gw (address): This is the gateway address (default is X.X.X.1) --dns (address): This is the name server address (default is X.X.X.1) Our command looks like this now: sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 You may have noticed that we have assigned only the IP, and all the others will take the default value. Enabling the bridge We will have to enable the bridge for our virtual machines, as various remote hosts will have to access the applications. We will configure libvirt and modify the vmbuilder template to do so. First, create the template hierarchy and copy the default template into this folder: mkdir -p VMBuilder/plugins/libvirt/templates cp /etc/vmbuilder/libvirt/* VMBuilder/plugins/libvirt/templates/ Use your favorite editor and modify the following lines in the VMBuilder/plugins/libvirt/templates/libvirtxml.tmpl file: <interface type='network'> <source network='default'/> </interface> Replace these lines with the following lines: <interface type='bridge'> <source bridge='br0'/> </interface> Partitions You have to allocate partitions to applications for their data storage and working. It is normal to have a separate storage space for each application in /var. The command provided by vmbuilder for this is --part: --part PATH vmbuilder will read the file from the PATH parameter and consider each line as a separate partition. Each line has two entries, mountpoint and size, where size is defined in MBs and is the maximum limit defined for that mountpoint. For this particular exercise, we will create a new file named vmbuilder.partition and enter the following lines for creating partitions: root 6000 swap 4000 --- /var 16000 Also, please note that different disks are identified by the delimiter ---. Now, the command should be like this: sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition Use a backslash (\) to wrap long commands onto the next line. Setting the user and password We have to define a user and a password in order for the user to log in to the virtual machine after startup. For now, let's use a generic user identified as user and the password password. We can ask the user to change the password after the first login. The following parameters are used to set the username and password: --user (username): This sets the username (default is ubuntu) --name (fullname): This sets a name for the user (default is ubuntu) --pass (password): This sets the password for the user (default is ubuntu) So, now our command will be as follows: sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password Final steps in the installation – first boot There are certain things that will need to be done at the first boot of a machine. We will install openssh-server at first boot. This will ensure that each virtual machine has a unique SSH host key. If we had done this earlier in the setup phase, all virtual machines would have been given the same key; this might have posed a security issue.
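(If you ever do end up with guests that share a host key, the keys can be regenerated on each affected guest; the two commands below are the usual Debian/Ubuntu approach and are shown only as a recovery hint, since installing openssh-server at first boot, as in the script that follows, avoids the problem entirely.)
$ sudo rm /etc/ssh/ssh_host_*
$ sudo dpkg-reconfigure openssh-server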
Let's create a script called first_boot.sh and run it at the first boot of every new virtual machine: # This script will run the first time the virtual machine boots # It is run as root apt-get update apt-get install -qqy --force-yes openssh-server Then, add the following line to the command line: --firstboot first_boot.sh Final steps in the installation – first login Remember that we had specified a default password for the virtual machine. This means that all the machines installed from this image will have the same password. We will prompt the user to change the password at first login. For this, we will use a shell script named first_login.sh. Add the following lines to the file: # This script is run the first time a user logs in. echo "Almost at the end of setting up your machine" echo "As a security precaution, please change your password" passwd Then, add the parameter to your command line: --firstlogin first_login.sh Auto updates You can make your virtual machine update itself at regular intervals. To enable this feature, add a package named unattended-upgrades to the command line: --addpkg unattended-upgrades ACPI handling ACPI handling will enable your virtual machine to take care of shutdown and restart events that are received from a remote machine. We will install the acpid package for this: --addpkg acpid The complete command So, the final command with the parameters that we discussed previously would look like this: sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password --firstboot first_boot.sh --firstlogin first_login.sh --addpkg unattended-upgrades --addpkg acpid Summary In this article, we discussed various virtualization techniques. We discussed virtualization as well as the tools and packages that help in creating and running a virtual machine. Also, you learned about the ways we can view, manage, connect to, and make use of the applications running on the virtual machine. Then, we saw the lightweight version of Ubuntu that is fine-tuned to run virtualization and applications on a virtual platform. At the later stages of this article, we covered how to build a virtual machine from a command line, how to add packages, how to set up user profiles, and the steps for first boot and first login. Resources for Article: Further resources on this subject: Introduction to OpenVPN [article] Speeding up Gradle builds for Android [article] Installing Red Hat CloudForms on Red Hat OpenStack [article]

WildFly – the Basics

Packt
21 Jul 2015
6 min read
In this article, Luigi Fugaro, author of the book WildFly Cookbook, says that the JBoss.org community is a huge community, where people all over the world develop, test, and document pieces of code. There are a lot of projects in there, not just JBoss AS, which is now WildFly. I can mention a few: Infinispan, Undertow, PicketLink, Arquillian, HornetQ, RESTeasy, AeroGear, and Vert.X. For a complete list of all projects, visit http://www.jboss.org/projects/. (For more resources related to this topic, see here.) Aside from marketing reasons, as there is no preferred project, the community wanted to change the name of the JBoss AS project to something different, which would not collide with the community name. There was also another reason, which was about the Red Hat supported version named JBoss Enterprise Application Platform (EAP). This was another point toward replacing the JBoss AS name. Software prerequisites WildFly runs on top of the Java platform. It needs at least Java Runtime Environment (JRE) version 1.7 to run (further references to versions 1.7 and 7 should be considered equal; the same applies to 1.8 and 8 as well), but it also works perfectly with the latest JRE version 8. As we will also need to compile and build Java web applications, we will need the Java Development Kit (JDK), which gives us the necessary tools to work with the Java source code. In the JDK panorama, we can find the Oracle JDK, developed and maintained by Oracle, and OpenJDK, which relies on contributions by the community. Nevertheless, since April 2015, Oracle no longer posts updates of Java SE 7 to its public download sites, as mentioned at http://www.oracle.com/technetwork/java/javase/downloads/eol-135779.html. Also, bear in mind that Java Critical Patch Updates are released on a quarterly basis; therefore, for reasons of stability and feature support, we will use the Oracle JDK 8, which is freely available for download at http://www.oracle.com/technetwork/java/javase/downloads/index.html. At the time of writing the book, the latest stable Oracle JDK is version 1.8.0_31 (8u31). From here on, every reference to Java Virtual Machine (JVM), Java, JRE, and JDK will be intended to be to Oracle JDK 1.8.0_31. To keep things simple, if you don't mind, use that same version. In addition to the JDK, we will need Apache Maven 3, which is a build tool for Java projects. It is freely available for download at http://maven.apache.org/download.cgi. A generic download link can be found at http://www.us.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz. Downloading and installing WildFly In this article, you will learn how to get and install WildFly. As always in the open source world, you can do the same thing in different ways. WildFly can be installed using your preferred software manager or by downloading the bundle provided by the http://wildfly.org/ site. As we did for the JDK, we will choose the second way. Getting ready Just open your favorite browser and point it to http://wildfly.org/downloads/. You should see a page similar to this one: WildFly's download page Now download the latest version into our WFC folder. How to do it… Once the download is complete, open a terminal and extract its content into the WFC folder, executing the following commands: $ cd ~/WFC && tar zxf wildfly-9.0.0.Beta2.tar.gz The preceding command will first move into our WildFly Cookbook folder; it will then extract the WildFly archive there.
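Before going any further, it does no harm to confirm that the JDK and Maven prerequisites mentioned earlier are actually on the PATH; the exact version strings will vary with your installation, but each command should print a version rather than an error:
$ java -version
$ javac -version
$ mvn -version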
Listing our WFC folder, we should find the newly created WildFly folder named wildfly-9.0.0.Beta2. To better remember and handle WildFly's installation directory, rename it to wildfly, as follows: $ cd ~/WFC && mv wildfly-9.0.0.Beta2 wildfly By the way, WildFly can be also installed using YUM, Fedora's traditional software manager. In production environment, you will not place the WildFly installation directory in the home folder of a specific user; rather, you will place it in different paths, relative to the context you are working on. Now, we need to create the JBOSS_HOME environment variable, which is used by WildFly itself as base directory when it starts up (in the feature release, this was updated to WILDFLY_HOME). We will also create the WILDFLY_HOME environment variable, which we will use through the whole book to reference WildFly's installation directory. Thus, open the .bash_profile file, placed in your home folder, with your favorite text editor, and add the following directives: export JBOSS_HOME=~/WFC/wildfly export WILDFLY_HOME=$JBOSS_HOME For the changes to take effect, you can either log out and log back in, or just issue the following command: $ source ~/.bash_profile Your .bash_profile file should look as follows: Understanding the WildFly directory overview Now that we have finished installing WildFly, let's look into its folders. How to do it… Open your terminal and run the following command: $ cd $WILDFLY_HOME $ pwd && ls -la The output of your commands should be similar to the following image: WildFly's folders overview How it works The preceding image depicts WildFly's folder on the filesystem. Each is outlined in the following table: Folder's name Description appclient These are configuration files, deployment content, and writable areas used by the application client container run from this installation. bin These are the startup scripts; startup configuration files, and various command-line utilities, such as Vault, add-user, and Java diagnostic report available for the Unix and Windows environments. bin/client This contains a client JAR for use by non maven-based clients. docs/schema These are XML schema definition files. docs/examples/configs These are example configuration files representing specific use cases. domain These are configuration files, deployment content, and writable areas used by the domain mode processes run from this installation. modules WildFly is based on a modular class-loading architecture. The various modules used in the server are stored here. standalone These are configuration files, deployment content, and writable areas used by the single standalone server run from this installation. welcome-content This is the default Welcome Page content. In the preceding table, I've emphasized the domain and standalone folders, which are those that determine in which mode WildFly will run: standalone or domain. From here on, whenever mentioned, WildFly's home will be intended as $WILDFLY_HOME. Summary In this article, we covered the software prerequisites of WildFly, downloading and installing WildFly, and an overview of WildFly's folder structure. Resources for Article: Further resources on this subject: Various subsystem configurations [article] Creating a JSF composite component [article] Introducing PrimeFaces [article]

Getting Started with Nginx

Packt
20 Jul 2015
10 min read
In this article by the author, Valery Kholodkov, of the book, Nginx Essentials, we learn to start digging a bit deeper into Nginx, we will quickly go through most common distributions that contain prebuilt packages for Nginx. Installing Nginx Before you can dive into specific features of Nginx, you need to learn how to install Nginx on your system. It is strongly recommended that you use prebuilt binary packages of Nginx if they are available in your distribution. This ensures best integration of Nginx with your system and reuse of best practices incorporated into the package by the package maintainer. Prebuilt binary packages of Nginx automatically maintain dependencies for you and package maintainers are usually fast to include security patches, so you don't get any complaints from security officers. In addition to that, the package usually provides a distribution-specific startup script, which doesn't come out of the box. Refer to your distribution package directory to find out if you have a prebuilt package for Nginx. Prebuilt Nginx packages can also be found under the download link on the official Nginx.org site. Installing Nginx on Ubuntu The Ubuntu Linux distribution contains a prebuilt package for Nginx. To install it, simply run the following command: $ sudo apt-get install nginx The preceding command will install all the required files on your system, including the logrotate script and service autorun scripts. The following table describes the Nginx installation layout that will be created after running this command as well as the purpose of the selected files and folders: Description Path/Folder Nginx configuration files /etc/nginx Main configuration file /etc/nginx/nginx.conf Virtual hosts configuration files (including default one) /etc/nginx/sites-enabled Custom configuration files /etc/nginx/conf.d Log files (both access and error log) /var/log/nginx Temporary files /var/lib/nginx Default virtual host files /usr/share/nginx/html Default virtual host files will be placed into /usr/share/nginx/html. Please keep in mind that this directory is only for the default virtual host. For deploying your web application, use folders recommended by Filesystem Hierarchy Standard (FHS). Now you can start the Nginx service with the following command: $ sudo service nginx start This will start Nginx on your system. Alternatives The prebuilt Nginx package on Ubuntu has a number of alternatives. Each of them allows you to fine tune the Nginx installation for your system. Installing Nginx on Red Hat Enterprise Linux or CentOS/Scientific Linux Nginx is not provided out of the box in Red Hat Enterprise Linux or CentOS/Scientific Linux. Instead, we will use the Extra Packages for Enterprise Linux (EPEL) repository. EPEL is a repository that is maintained by Red Hat Enterprise Linux maintainers, but contains packages that are not a part of the main distribution for various reasons. You can read more about EPEL at https://fedoraproject.org/wiki/EPEL. To enable EPEL, you need to download and install the repository configuration package: For RHEL or CentOS/SL 7, use the following link: http://download.fedoraproject.org/pub/epel/7/x86_64/repoview/epel-release.html For RHEL/CentOS/SL 6 use the following link: http://download.fedoraproject.org/pub/epel/6/i386/repoview/epel-release.html If you have a newer/older RHEL version, please take a look at the How can I use these extra packages? 
section in the original EPEL wiki at the following link: https://fedoraproject.org/wiki/EPEL Now that you are ready to install Nginx, use the following command: # yum install nginx The preceding command will install all the required files on your system, including the logrotate script and service autorun scripts. The following table describes the Nginx installation layout that will be created after running this command and the purpose of the selected files and folders: Description Path/Folder Nginx configuration files /etc/nginx Main configuration file /etc/nginx/nginx.conf Virtual hosts configuration files (including default one) /etc/nginx/conf.d Custom configuration files /etc/nginx/conf.d Log files (both access and error log) /var/log/nginx Temporary files /var/lib/nginx Default virtual host files /usr/share/nginx/html Default virtual host files will be placed into /usr/share/nginx/html. Please keep in mind that this directory is only for the default virtual host. For deploying your web application, use folders recommended by FHS. By default, the Nginx service will not autostart on system startup, so let's enable it. Refer to the following table for the commands corresponding to your CentOS version: Function Cent OS 6 Cent OS 7 Enable Nginx startup at system startup chkconfig nginx on systemctl enable nginx Manually start Nginx service nginx start systemctl start nginx Manually stop Nginx service nginx stop systemctl start nginx Installing Nginx from source files Traditionally, Nginx is distributed in the source code. In order to install Nginx from the source code, you need to download and compile the source files on your system. It is not recommended that you install Nginx from the source code. Do this only if you have a good reason, such as the following scenarios: You are a software developer and want to debug or extend Nginx You feel confident enough to maintain your own package A package from your distribution is not good enough for you You want to fine-tune your Nginx binary. In either case, if you are planning to use this way of installing for real use, be prepared to sort out challenges such as dependency maintenance, distribution, and application of security patches. In this section, we will be referring to the configuration script. Configuration script is a shell script similar to one generated by autoconf, which is required to properly configure the Nginx source code before it can be compiled. This configuration script has nothing to do with the Nginx configuration file that we will be discussing later. Downloading the Nginx source files The primary source for Nginx for an English-speaking audience is Nginx.org. Open https://nginx.org/en/download.html in your browser and choose the most recent stable version of Nginx. Download the chosen archive into a directory of your choice (/usr/local or /usr/src are common directories to use for compiling software): $ wget -q http://nginx.org/download/nginx-1.7.9.tar.gz Extract the files from the downloaded archive and change to the directory corresponding to the chosen version of Nginx: $ tar xf nginx-1.7.9.tar.gz$ cd nginx-1.7.9 To configure the source code, we need to run the ./configure script included in the archive: $ ./configurechecking for OS+ Linux 3.13.0-36-generic i686checking for C compiler ... found+ using GNU C compiler[...] This script will produce a lot of output and, if successful, will generate a Makefile file for the source files. 
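The configuration step is also where you customize the build. As a hedged illustration only (the flags you actually need depend on which modules you want compiled in), a more explicit invocation might look like the following, again run as a regular user:
$ ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module
Run ./configure --help to see the full list of supported options before settling on a set of flags.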
Notice that we showed the non-privileged user prompt $ instead of the root # in the previous command lines. You are encouraged to configure and compile software as a regular user and only install as root. This will prevent a lot of problems related to access restrictions while working with the source code. Troubleshooting The troubleshooting step, although very simple, has a couple of common pitfalls. The basic installation of Nginx requires the presence of the OpenSSL and Perl Compatible Regular Expressions (PCRE) developer packages in order to compile. If these packages are not properly installed or not installed in locations where the Nginx configuration script is able to locate them, the configuration step might fail. Then, you have to choose between disabling the affected Nginx built-in modules (rewrite or SSL), installing the required packages properly, or pointing the Nginx configuration script to the actual location of those packages if they are installed. Building Nginx You can build the source files now using the following command: $ make You'll see a lot of output on compilation. If the build is successful, you can install the Nginx files on your system. Before doing that, make sure you escalate your privileges to the super user so that the installation script can install the necessary files into the system areas and assign the necessary privileges. Once successful, run the make install command: # make install The preceding command will install all the necessary files on your system. The following table lists all locations of the Nginx files that will be created after running this command and their purposes: Description Path/Folder Nginx configuration files /usr/local/nginx/conf Main configuration file /usr/local/nginx/conf/nginx.conf Log files (both access and error log) /usr/local/nginx/logs Temporary files /usr/local/nginx Default virtual host files /usr/local/nginx/html Unlike installations from prebuilt packages, installation from source files does not provide dedicated folders for custom configuration files or virtual host configuration files. The main configuration file is also very simple in its nature. You have to take care of this yourself. Nginx must be ready to use now. To start Nginx, change your working directory to the /usr/local/nginx directory and run the following command: # sbin/nginx This will start Nginx on your system with the default configuration. Troubleshooting This stage works flawlessly most of the time. A problem can occur in the following situations: You are using a nonstandard system configuration. Try to play with the options in the configuration script in order to overcome the problem. You compiled in third-party modules and they are out of date or not maintained. Switch off the third-party modules that break your build or contact the developer for assistance. Copying the source code configuration from prebuilt packages Occasionally, you might want to amend the Nginx binary from a prebuilt package with your own changes. In order to do that, you need to reproduce the build tree that was used to compile the Nginx binary for the prebuilt package. But how would you know what version of Nginx and what configuration script options were used at build time? Fortunately, Nginx has a solution for that. Just run the existing Nginx binary with the -V command-line option. Nginx will print the configure-time options.
This is shown in the following: $ /usr/sbin/nginx -Vnginx version: nginx/1.4.6 (Ubuntu)built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1)TLS SNI support enabledconfigure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' … Using the output of the preceding command, reproduce the entire build environment, including the Nginx source tree of the corresponding version and modules that were included into the build. Here, the output of the Nginx -V command is trimmed for simplicity. In reality, you will be able to see and copy the entire command line that was passed to the configuration script at the build time. You might even want to reproduce the version of the compiler used in order to produce a binary-identical Nginx executable file (we will discuss this later when discussing how to troubleshoot crashes). Once this is done, run the ./configure script of your Nginx source tree with options from the output of the -V option (with necessary alterations) and follow the remaining steps of the build procedure. You will get an altered Nginx executable on the objs/ folder of the source tree. Summary Here, you learned how to install Nginx from a number of available sources, the structure of Nginx installation and the purpose of various files, the elements and structure of the Nginx configuration file, and how to create a minimal working Nginx configuration file. You also learned about some best practices for Nginx configuration.

Tuning server performance with memory management and swap

Packt
24 Jun 2015
7 min read
In this article, by Jonathan Hobson, the author of Troubleshooting CentOS, we will learn about memory management, swap, and swappiness. (For more resources related to this topic, see here.) A deeper understanding of the underlying active processes in CentOS 7 is an essential skill for any troubleshooter. From high load averages to slow response times, system overloads to dead and dying processes, there comes a time when every server may start to feel sluggish, act impoverished, or fail to respond, and as a consequence, it will require your immediate attention. Regardless of how you look at it, the question of memory usage remains critical to the life cycle of a system, and whether you are maintaining system health or troubleshooting a particular service or application, you will always need to remember that the use of memory is a critical resource to your system. For this reason, we will begin by calling the free command in the following way: # free -m The main elements of the preceding command will look similar to this:          Total   used   free   shared   buffers   cached Mem:     1837     274   1563         8         0       108 -/+ buffers/cache: 164   1673 Swap:     2063       0   2063 In the example shown, I have used the -m option to ensure that the output is formatted in megabytes. This makes it easier to read, but for the sake of troubleshooting, rather than trying to understand every numeric value shown, let's reduce the scope of the original output to highlight the relevant area of concern: -/+ buffers/cache: 164   1673 The importance of this line is based on the fact that it accounts for the associated buffers and caches to illustrate what memory is currently being used and what is held in reserve. Where the first value indicates how much memory is being used, the second value tells us how much memory is available to our applications. In the example shown, this instance translates into 164 MB of used memory and 1673 MB of available memory. Bearing this in mind, let me draw your attention to the final line in order that we can examine the importance of swap: Swap:     2063       0   2063 Swapping typically occurs when memory usage is impacting performance. As we can see from the preceding example, the first value tells us that there is a total amount of system swap set at 2063 MB, with the second value indicating how much swap is being used (0 MB), while the third value shows the amount of swap that is still available to the system as a whole (2063 MB). So yes, based on the example data shown here, we can conclude that this is a healthy system, and no swap is being used, but while we are here, let's use this time to discover more about the swap space on your system. To begin, we will revisit the contents of the proc directory and reveal the total and used swap size by typing the following command: # cat /proc/swaps Assuming that you understand the output shown, you should then investigate the level of swappiness used by your system with the following command: # cat /proc/sys/vm/swappiness Having done this, you will now see a numeric value between the ranges of 0-100. The numeric value is a percentage and it implies that, if your system has a value of 30, for example, it will begin to use swap memory at 70 percent occupation of RAM. The default for all Linux systems is usually set with a notional value between 30 to 60, but you can use either of the following commands to temporarily change and modify the swappiness of your system. 
This can be achieved by replacing the value of X with a numeric value from 1-100 by typing: # echo X > /proc/sys/vm/swappiness Or more specifically, this can also be achieved with: # sysctl -w vm.swappiness=X If you change your mind at any point, then you have two options in order to ensure that no permanent changes have been made. You can either repeat one of the preceding two commands and return the original values, or issue a full system reboot. On the other hand, if you want to make the change persist, then you should edit the /etc/sysctl.conf file and add your swappiness preferences in the following way: vm.swappiness=X When complete, simply save and close the file to ensure that the changes take effect. The level of swappiness controls the tendency of the kernel to move a process out of the physical RAM on to a swap disk. This is memory management at work, but it is important to realize that swapping will not occur immediately, as the level of swappiness is actually expressed as a percentage value. For this reason, the process of swapping should be viewed more as a measurement of preference when using the cache, and as every administrator will know, there is an option for you to clear the swap by using the commands swapoff -a and swapon -a to achieve the desired result. The golden rule is to realize that a system displaying a level of swappiness close to the maximum value (100) will prefer to begin swapping inactive pages. This is because a value of 100 is a representative of 0 percent occupation of RAM. By comparison, the closer your system is to the lowest value (0), the less likely swapping is to occur as 0 is representative of 100 percent occupation of RAM. Generally speaking, we would all probably agree that systems with a very large pool of RAM would not benefit from aggressive swapping. However, and just to confuse things further, let's look at it in a different way. We all know that a desktop computer will benefit from a low swappiness value, but in certain situations, you may also find that a system with a large pool of RAM (running batch jobs) may also benefit from a moderate to aggressive swap in a fashion similar to a system that attempts to do a lot but only uses small amounts of RAM. So, in reality, there are no hard and fast rules; the use of swap should be based on the needs of the system in question rather than looking for a single solution that can be applied across the board. Taking this further, special care and consideration should be taken while making changes to the swapping values as RAM that is not used by an application is used as disk cache. In this situation, by decreasing swappiness, you are actually increasing the chance of that application not being swapped-out, and you are thereby decreasing the overall size of the disk cache. This can make disk access slower. However, if you do increase the preference to swap, then because hard disks are slower than memory modules, it can lead to a slower response time across the overall system. Swapping can be confusing, but by knowing this, we can also appreciate the hidden irony of swappiness. As Newton's third law of motion states, for every action, there is an equal and opposite reaction, and finding the optimum swappiness value may require some additional experimentation. Summary In this article, we learned some basic yet vital commands that help us gauge and maintain server performance with the help of swapiness. 
Resources for Article: Further resources on this subject: Installing CentOS [article] Managing public and private groups [article] Installing PostgreSQL [article]

Building a portable Minecraft server for LAN parties in the park

Andrew Fisher
01 Jun 2015
14 min read
Minecraft is a lot of fun, especially when you play with friends. Minecraft servers are great but they aren’t very portable and rely on a good Internet connection. What about if you could take your own portable server with you - say to the park - and it will fit inside a lunchbox? This post is about doing just that, building a small, portable minecraft server that you can use to host pop up crafting sessions no matter where you are when the mood strikes.   where shell instructions are provided in this document, they are presented assuming you have relevant permissions to execute them. If you run into permission denied errors then execute using sudo or switch user to elevate your permissions. Bill of Materials The following components are needed. Item QTY Notes Raspberry Pi 2 Model B 1 Older version will be too laggy. Get a new one 4GB MicroSD card with raspbian installed on it 1 The faster the class of SD card the better WiPi wireless USB dongle 1 Note that “cheap” USB dongles often won’t support hostmode so can’t create a network access point. The “official” ones cost more but are known to work. USB Powerbank 1 Make sure it’s designed for charging tablets (ie 2.1A) and the higher capacity the better (5000mAh or better is good). Prerequisites I am assuming you’ve done the following with regards to getting your Raspberry Pi operational. Latest Raspbian is installed and is up to date - run ‘apt-get update && apt-get upgrade’ if unsure. Using raspi-config you have set the correct timezone for your location and you have expanded the file system to fill the SD card. You have wireless configured and you can access your network using wpa_supplicant You’ve configured the Pi to be accessible over SSH and you have a client that allows you do this (eg ssh, putty etc). Setup I’ll break the setup into a few parts, each focussing on one aspect of what we’re trying to do. These are: Getting the base dependencies you need to install everything on the RPi Installing and configuring a minecraft server that will run on the Pi Configuring the wireless access point. Automating everything to happen at boot time. Configure your Raspberry Pi Before running minecraft server on the RPi you will need a few additional packaged than you have probably installed by default. From a command line install the following: sudo apt-get install oracle-java8-jdk git avahi-daemon avahi-utils hostapd dnsmasq screen Java is required for minecraft and building the minecraft packages git will allow you to install various source packages avahi (also known as ZeroConf) will allow other machines to talk to your machine by name rather than IP address (which means you can connect to minecraft.local rather than 192.168.0.1 or similar). dnsmasq allows you to run a DNS server and assign IP addresses to the machines that connect to your minecraft box hostapd uses your wifi module to create a wireless access point (so you can play minecraft in a tent with your friends). Now you have the various components we need, it’s time to start building your minecraft server. Download the script repo To make this as fast as possible I’ve created a repository on Github that has all of the config files in it. Download this using the following commands: mkdir ~/tmp cd ~/tmp git clone https://gist.github.com/f61c89733340cd5351a4.git This will place a folder called ‘mc-config’ inside your ~/tmp directory. Everything will be referenced from there. Get a spigot build It is possible to run Minecraft using the Vanilla Minecraft server however it’s a little laggy. 
Spigot is a fork of CraftBukkit that seems to be a bit more performance oriented and a lot more stable. Using the Vanilla minecraft server I was experiencing lots of lag issues and crashes, with Spigot these disappeared entirely. The challenge with Spigot is you have to build the server from scratch as it can’t be distributed. This takes a little while on an RPi but is mostly automated. Run the following commands. mkdir ~/tmp/mc-build cd ~/tmp/mc-build wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar java -jar BuildTools.jar --rev 1.8 If you have a dev environment setup on your computer you can do this step locally and it will be a lot faster. The key thing is at the end to put the spigot-1.8.jar and the craftbukkit-1.8.jar files on the RPi in the ~/tmp/mc-build/ directory. You can do this with scp.  Now wait a while. If you’re impatient, open up another SSH connection to your server and configure your access point while the build process is happening. //time passes After about 45 minutes, you should have your own spigot build. Time to configure that with the following commands: cd ~/tmp/mc-config ./configuremc.sh This will run a helper script which will then setup some baseline properties for your server and some plugins to help it be more stable. It will also move the server files to the right spot, configure a minecraft user and set minecraft to run as a service when you boot up. Once that is complete, you can start your server. service minecraft-server start The first time you do this it will take a long time as it has to build out the whole world, create config files and do all sorts of set up things. After this is done the first time however, it will usually only take 10 to 20 seconds to start after this. Administer your server We are using a tool called “screen” to run our minecraft server. Screen is a useful utility that allows you to create a shell and then execute things within it and just connect and detach from it as you please. This is a really handy utility say when you are running something for a long time and you want to detach your ssh session and reconnect to it later - perhaps you have a flaky connection. When the minecraft service starts up it creates a new screen session, and gives it the name “minecraft_server” and runs the spigot server command. The nice thing with this is that once the spigot server stops, the screen will close too. Now if you want to connect to your minecraft server the way you do it is like this: sudo screen -r minecraft_server To leave your server running just hit <CRTL+a> then hit the “d” key. CTRL+A sends an “action” and then “d” sends “detach”. You can keep resuming and detaching like this as much as you like. To stop the server you can do it two ways. The first is to do it manually once you’ve connected to the screen session then type “stop”. This is good as it means you can watch the minecraft server come down and ensure there’s no errors. Alternatively just type: service minecraft-server stop And this actually simulates doing exactly the same thing. Figure: Spigot server command line Connect to your server Once you’ve got your minecraft server running, attempt to connect to it from your normal computer using multiplayer and then direct connect. The machine address will be minecraft.local (unless you changed it to something else). 
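If the server does not show up in the client, it can help to rule out basic connectivity first. From your laptop, check name resolution and the default Minecraft port (25565) with something like the following; nc is available on most Linux and OS X machines, and this assumes your client machine supports mDNS/Bonjour for the .local name:
$ ping -c 2 minecraft.local
$ nc -zv minecraft.local 25565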
Figure: Server selection Now you have the minecraft server complete you can simply ssh in, run ‘service minecraft-server start’ and your server will come up for you and your friends to play. The next sections will get you portable and automated. Setting up the WiFi host The way I’m going to show you to set up the Raspberry Pi is a little different than other tutorials you’ll see. The objective is that if the RPi can discover an SSID that it is allowed to join (eg your home network) then it should join that. If the RPi can’t discover a network it knows, then it should create it’s own and allow other machines to join it. This means that when you have your minecraft server sitting in your kitchen you can update the OS, download new mods and use it on your existing network. When you take it to the park to have a sunny Crafting session with your friends, the server will create it’s own network that you can all jump onto. Turn off auto wlan management By default the OS will try and do the “right thing” when you plug in a network interface so when you go to create your access point, it will actually try and put it on the wireless network again - not the desired result. To change this make the following modifications to /etc/default/ifplugd change the lines: INTERFACES="all" HOTPLUG_INTERFACES="ALL" to: INTERFACES="eth0" HOTPLUG_INTERFACES="eth0" Configure hostapd Now, stop hostapd and dnsmasq from running at boot. They should only come up when needed so the following commands will make them manual. update-rc.d -f hostapd remove update-rc.d -f dnsmasq remove Next, modify the hostapd daemon file to read the hostapd config from a file. Change the /etc/default/hostapd file to have the line: DAEMON_CONF="/etc/hostapd/hostapd.conf" Now create the /etc/hostapd/hostapd.conf file using this command using the one from the repo. cd ~/tmp/mc-config cp hostapd.conf /etc/hostapd/hostapd.conf If you look at this file you can see we’ve set the SSID of the access point to be “minecraft”, the password to be “qwertyuiop” and it has been set to use wlan0. If you want to change any of these things, feel free to do it. Now you'll probably want to kill your wireless device with ifdown wlan0 Make sure all your other processes are finished if you’re doing this and compiling spigot at the same time (or make sure you’re connected via the wired ethernet as well). Now run hostapd to just check any config errors. hostapd -d /etc/hostapd/hostapd.conf If there are no errors, then background the task (ctrl + z then type ‘bg 1’) and look at ifconfig. You should now have the wlan0 interface up as well as a wlan0.mon interface. If this is all good then you know your config is working for hostapd. Foreground the task (‘fg 1’) and stop the hostapd process (ctrl + c). Configure dnsmasq Now to get dnsmasq running - this is pretty easy. Download the dnsmasq example and put it in the right place using this command: cd ~/tmp/mc-config mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup cp dnsmasq.conf /etc/dnsmasq.conf dnsmasq is set to listen on wlan0 and allocate IP addresses in the range 192.168.40.5 - 192.168.40.40. The default IP address of the server will be 192.168.40.1. That's pretty much all you really need to get dnsmasq configured. Testing the config Now it’s time to test all this configuration. It is probably useful to have your RPi connected to a keyboard and monitor or available over eth0 as this is the most likely point where things may need debugging. 
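Before running the test sequence shown next, it can also help to watch the system log in a second terminal, so that any hostapd or dnsmasq errors are visible as soon as they happen:
$ tail -f /var/log/syslog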
The following commands will bring down wlan0, start hostapd, configure your wlan0 interface to a static IP address then start up dnsmasq. ifdown wlan0 service hostapd start ifconfig wlan0 192.168.40.1 service dnsmasq start Assuming you had no errors, you can now connect to your wireless access point from your laptop or phone using the SSID “minecraft” and password “qwertyuiop”. Once you are connected you should be given an IP address and you should be able to ping 192.168.40.1 and shell onto minecraft.local. Congratulations - you’ve now got a Raspberry Pi Wireless Access Point. As an aside, were you to write some iptables rules, you could now route traffic from the wlan0 interface to the eth0 interface and out onto the rest of your wired network - thus turning your RPi into a router. Running everything at boot The final setup task is to make the wifi detection happen at boot time. Most flavours of linux have a boot script called rc.local which is pretty much the final thing to run before giving you your login prompt at a terminal. Download the rc.local file using the following commands. mv /etc/rc.local /etc/rc.local.backup cp rc.local /etc/rc.local chmod a+x /etc/rc.local This script checks to see if the RPi came up on the network. If not it will wait a couple of seconds, then start setting up hostapd and dnsmasq. To test this is all working, modify your /etc/wpa_supplicant/wpa_supplicant.conf file and change the SSID so it’s clearly incorrect. For example if your SSID was “home” change it to “home2”. This way when the RPi boots it won’t find it and the access point will be created. Park Crafting Now you have your RPi minecraft server and it can detect networks and make good choices about when to create it’s own network, the next thing you need to do it make it portable. The new version 2 RPi is more energy efficient, though running both minecraft and a wifi access point is going to use some power. The easiest way to go mobile is to get a high capacity USB powerbank that is designed for charging tablets. They can be expensive but a large capacity one will keep you going for hours. This powerbank is 5000mAh and can deliver 2.1amps over it’s USB connection. Plenty for several hours crafting in the park. Setting this up couldn’t be easier, plug a usb cable into the powerbank and then into the RPi. When you’re done, simply plug the powerbank into your USB charger or computer and charge it up again. If you want something a little more custom then 7.4v lipo batteries with a step down voltage regulator (such as this one from Pololu: https://www.pololu.com/product/2850) connected to the power and ground pins on the RPi works very well. The challenge here is charging the LiPo again, however if you have the means to balance charge lipos then this will probably be a very cheap option. If you connect more than 5V to the RPi you WILL destroy it. There are no protection circuits when you use the GPIO pins. To protect your setup simply put it in a little plastic container like a lunch box. This is how mine travels with me. A plastic lunchbox will protect your server from spilled drinks, enthusiastic dogs and toddlers attracted to the blinking lights. Take your RPi and power supply to the park, along with your laptop and some friends then create your own little wifi hotspot and play some minecraft together in the sun. Going further If you want to take things a little further, here are some ideas: Build a custom enclosure - maybe something that looks like your favorite minecraft block. 
Using WiringPi (C) or RPI.GPIO (Python), attach an LCD screen that shows the status of your server and who is connected to the minecraft world. Add some custom LEDs to your enclosure then using RaspberryJuice and the python interface to the API, create objects in your gameworld that can switch the LEDs on and off. Add a big red button to your enclosure that when pressed will “nuke” the game world, removing all blocks and leaving only water. Go a little easier and simply use the button to kick everyone off the server when it’s time to go or your battery is getting low. About the Author Andrew Fisher is a creator and destroyer of things that combine mobile web, ubicomp and lots of data. He is a sometime programmer, interaction researcher and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter @ajfisher.

Lync 2013 Hybrid and Lync Online

Packt
06 Feb 2015
27 min read
In this article, by the authors, Fabrizio Volpe, Alessio Giombini, Lasse Nordvik Wedø, and António Vargas of the book, Lync Server Cookbook, we will cover the following recipes: Introducing Lync Online Administering with the Lync Admin Center Using Lync Online Remote PowerShell Using Lync Online cmdlets Introducing Lync in a hybrid scenario Planning and configuring a hybrid deployment Moving users to the cloud Moving users back on-premises Debugging Lync Online issues (For more resources related to this topic, see here.) Introducing Lync Online Lync Online is part of the Office 365 offer and provides online users with the same Instant Messaging (IM), presence, and conferencing features that we would expect from an on-premises deployment of Lync Server 2013. Enterprise Voice, however, is not available on Office 365 tenants (or at least, it is available only with limitations regarding both specific Office 365 plans and geographical locations). There is no doubt that forthcoming versions of Lync and Office 365 will add what is needed to also support all the Enterprise Voice features in the cloud. Right now, the best that we are able to achieve is to move workloads, homing a part of our Lync users (the ones with no telephony requirements) in Office 365, while the remaining Lync users are homed on-premises. These solutions might be interesting for several reasons, including the fact that we can avoid the costs of expanding our existing on-premises resources by moving a part of our Lync-enabled users to Office 365. The previously mentioned configuration, which involves different kinds of Lync tenants, is called a hybrid deployment of Lync, and we will see how to configure it and move our users from online to on-premises and vice versa. In this Article, every time we talk about Lync Online and Office 365, we will assume that we have already configured an Office tenant. Administering with the Lync Admin Center Lync Online provides the Lync Admin Center (LAC), a dedicated control panel, to manage Lync settings. To open it, access the Office 365 portal and select Service settings, Lync, and Manage settings in the Lync admin center, as shown in the following screenshot: LAC, if you compare it with the on-premises Lync Control Panel (or with the Lync Management Shell), offers few options. For example, it is not possible to create or delete users directly inside Lync. We will see some of the tasks we are able to perform in LAC, and then, we will move to the (more powerful) Remote PowerShell. There is an alternative path to open LAC. From the Office 365 portal, navigate to Users & Groups | Active Users. Select a user, after which you will see a Quick Steps area with an Edit Lync Properties link that will open the user-editable part of LAC. How to do it... LAC is divided into five areas: users, organization, dial-in conferencing, meeting invitation, and tools, as you can see in the following screenshot: The Users panel will show us the configuration of the Lync Online enabled users. 
It is possible to modify the settings with the Edit option (the small pencil icon on the right): I have tried to summarize all the available options (inside the general, external communications, and dial-in conferencing tabs) in the following screenshot: Some of the user's settings are worth a mention; in the General tab, we have the following: The Record Conversations and meetings option enables the Start recording option in the Lync client The Allow anonymous attendees to dial-out option controls whether the anonymous users that are dialing in to a conference are required to call the conferencing service directly or are authorized for callback The For compliance, turn off non-archived features option disables Lync features that are not recorded by In-Place Hold for Exchange When you place an Exchange 2013 mailbox on In-Place Hold or Litigation Hold, the Microsoft Lync 2013 content (instant messaging conversations and files shared in an online meeting) is archived in the mailbox. In the dial-in conferencing tab, we have the configuration required for dial-in conferencing. The provider's drop-down menu shows a list of third parties that are able to deliver this kind of feature. The Organization tab manages privacy for presence information, push services, and external access (the equivalent of the Lync federation on-premises). If we enable external access, we will have the option to turn on Skype federation, as we can see in the following screenshot: The Dial-In Conferencing option is dedicated to the configuration of the external providers. The Meeting Invitation option allows the user to customize the Lync Meeting invitation. The Tools option offers a collection of troubleshooting resources. See also For details about Exchange In-Place Hold, see the TechNet post In-Place Hold and Litigation Hold at http://technet.microsoft.com/en-us/library/ff637980(v=exchg.150).aspx. Using Lync Online Remote PowerShell The possibility to manage Lync using Remote PowerShell on a distant deployment has been available since Lync 2010. This feature has always required a direct connection from the management station to the remote Lync deployment, and a series of steps that is not always simple to set up. Lync Online supports Remote PowerShell using a dedicated (64-bit only) PowerShell module, the Lync Online Connector. It is used to manage online users, and it is interesting because there are many settings and automation options that are available only through PowerShell. Getting ready Lync Online Connector requires one of the following operating systems: Windows 7 (with Service Pack 1), Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows 8, or Windows 8.1. At least PowerShell 3.0 is needed. To check it, we can use the $PSVersionTable variable. The result will be like the one in the following screenshot (taken on Windows 8.1, which uses PowerShell 4.0): How to do it... Download the Windows PowerShell Module for Lync Online from the Microsoft site at http://www.microsoft.com/en-us/download/details.aspx?id=39366 and install it. It is useful to store our Office 365 credentials in an object (it is possible to launch the cmdlets at step 3 anyway, and we will be prompted for the Office 365 administrator credentials, but without storing them this way, we will have to insert the authentication information again every time it is required). We can use the $credential = Get-Credential cmdlet in a PowerShell session.
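As a minimal sketch, the credential can be collected like this (the account name is only a placeholder for your own tenant administrator):
PS C:\> $credential = Get-Credential "admin@yourtenant.onmicrosoft.com"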
We will be prompted for our username and password for Lync Online, as shown in the following screenshot: To use the Online Connector, open a PowerShell session and use the New-CsOnlineSession cmdlet. One of the ways to start a remote PowerShell session is $session = New-CsOnlineSession -Credential $credential. Now, we need to import the session that we have created with Lync Online inside PowerShell, with the Import-PSSession $session cmdlet. A temporary Windows PowerShell module will be created, which contains all the Lync Online cmdlets. The name of the temporary module will be similar to the one we can see in the following screenshot: Now, we will have the cmdlets of the Lync Online module loaded in memory, in addition to any command that we already have available in PowerShell. How it works... The feature is based on a PowerShell module, the LyncOnlineConnector, shown in the following screenshot: It contains only two cmdlets, the Set-WinRMNetworkDelayMS and New-CsOnlineSession cmdlets. The latter will load the required cmdlets in memory. As we have seen in the previous steps, the Online Connector adds the Lync Online PowerShell cmdlets to the ones already available. This is something we will use when talking about hybrid deployments, where we will start from the Lync Management Shell and then import the module for Lync Online. It is a good habit to verify (and close) your previous remote sessions. This can be done by selecting a specific session (using Get-PSSession and then pointing to a specific session with the Remove-PSSession statement) or closing all the existing ones with the Get-PSSession | Remove-PSSession cmdlet. In the previous versions of the module, Microsoft Online Services Sign-In Assistant was required. This prerequisite was removed from the latest version. There's more... There are some checks that we are able to perform when using the PowerShell module for Lync Online. By launching the New-CsOnlineSession cmdlet with the –verbose switch, we will see all the messages related to the opening of the session. The result should be similar to the one shown in the following screenshot: Another verification comes from the Get-Command -Module tmp_gffrkflr.ufz command, where the module name (in this example, tmp_gffrkflr.ufz) is the temporary module we saw during the Import-PSSession step. The output of the command will show all the Lync Online cmdlets that we have loaded in memory. The Import-PSSession cmdlet imports all commands except the ones that have the same name of a cmdlet that already exists in the current PowerShell session. To overwrite the existing cmdlets, we can use the -AllowClobber parameter. See also During the introduction of this section, we also discussed the possibility to administer on-premises, remote Lync Server 2013 deployment with a remote PowerShell session. John Weber has written a great post about it in his blog Lync 2013 Remote Admin with PowerShell at http://tsoorad.blogspot.it/2013/10/lync-2013-remote-admin-with-powershell.html, which is helpful if you want to use the previously mentioned feature. Using Lync Online cmdlets In the previous recipe, we outlined the steps required to establish a remote PowerShell session with Lync Online. 
We have fewer than 50 cmdlets, as shown in the result of the Get-Command -Module command in the following screenshot: Some of them are specific to Lync Online, such as the following:
Get-CsAudioConferencingProvider
Get-CsOnlineUser
Get-CsTenant
Get-CsTenantFederationConfiguration
Get-CsTenantHybridConfiguration
Get-CsTenantLicensingConfiguration
Get-CsTenantPublicProvider
New-CsEdgeAllowAllKnownDomains
New-CsEdgeAllowList
New-CsEdgeDomainPattern
Set-CsTenantFederationConfiguration
Set-CsTenantHybridConfiguration
Set-CsTenantPublicProvider
Update-CsTenantMeetingUrl
All the remaining cmdlets can be used either with Lync Online or with the on-premises version of Lync Server 2013. We will see the use of some of the previously mentioned cmdlets. How to do it... The Get-CsTenant cmdlet will list the Lync Online tenants configured for use in our organization. The output of the command includes information such as the preferred language, registrar pool, domains, and assigned plan. The Get-CsTenantHybridConfiguration cmdlet gathers information about the hybrid configuration of Lync. Management of the federation capability for Lync Online (the feature that enables Instant Messaging and Presence information exchange with users of other domains) is based on the allowed domain and blocked domain lists, as we can see in the organization and external communications screen of LAC, shown in the following screenshot: There are similar ways to manage federation from the Lync Online PowerShell, but it requires putting together different statements, as follows:     We can use an accept all domains excluding the ones in the exceptions list approach. To do this, we have to put the result of the New-CsEdgeAllowAllKnownDomains cmdlet inside a variable. Then, we can use the Set-CsTenantFederationConfiguration cmdlet to allow all the domains (except the ones in the block list) for one of our domains on a tenant. We can use the example on TechNet (http://technet.microsoft.com/en-us/library/jj994088.aspx) and integrate it with Get-CsTenant.     If we prefer, we can use a block all domains but permit the ones in the allow list approach. It is required to define a domain name (pattern) for every domain to allow, using the New-CsEdgeDomainPattern cmdlet, and each one of them will be saved in a variable. Then, the New-CsEdgeAllowList cmdlet will create a list of allowed domains from the variables. Finally, the Set-CsTenantFederationConfiguration cmdlet will be used. The tenant we will work on will be (again) cc3b6a4e-3b6b-4ad4-90be-6faa45d05642. The example on TechNet (http://technet.microsoft.com/en-us/library/jj994023.aspx) will be used:
$x = New-CsEdgeDomainPattern -Domain "contoso.com"
$y = New-CsEdgeDomainPattern -Domain "fabrikam.com"
$newAllowList = New-CsEdgeAllowList -AllowedDomain $x,$y
Set-CsTenantFederationConfiguration -Tenant "cc3b6a4e-3b6b-4ad4-90be-6faa45d05642" -AllowedDomains $newAllowList
The Get-CsOnlineUser cmdlet provides information about users enabled on Office 365. The result will show both users synced with Active Directory and users homed in the cloud. The command supports filters to limit the output; for example, Get-CsOnlineUser -Identity fab will gather information about the user that has alias = fab. This is an account synced from the on-premises Directory Services, so the value of the DirSyncEnabled parameter will be True. See also All the cmdlets of the Remote PowerShell for Lync Online are listed in the TechNet post Lync Online cmdlets at http://technet.microsoft.com/en-us/library/jj994021.aspx. 
This is the main source of details on the single statement. Introducing Lync in a hybrid scenario In a Lync hybrid deployment, we have the following: User accounts and related information homed in the on-premises Directory Services and replicated to Office 365. A part of our Lync users that consume on-premises resources and a part of them that use online (Office 365 / Lync Online) resources. The same (public) domain name used both online and on-premises (Lync-split DNS). Other Office 365 services and integration with other applications available to all our users, irrespective of where their Lync is provisioned. One way to define Lync hybrid configuration is by using an on-premises Lync deployment federated with an Office 365 / Lync Online tenant subscription. While it is not a perfect explanation, it gives us an idea of the scenario we are talking about. Not all the features of Lync Server 2013 (especially the ones related to Enterprise Voice) are available to Lync Online users. The previously mentioned motivations, along with others (due to company policies, compliance requirements, and so on), might recommend a hybrid deployment of Lync as the best available solution. What we have to clarify now is how to make those users on different deployments talk to each other, see each other's presence status, and so on. What we will see in this section is a high-level overview of the required steps. The Planning and configuring a hybrid deployment recipe will provide more details about the individual steps. The list of steps here is the one required to configure a hybrid deployment, starting from Lync on-premises. In the following sections, we will also see the opposite scenario (with our initial deployment in the cloud). How to do it... It is required to have an available Office 365 tenant configuration. Our subscription has to include Lync Online. We have to configure an Active Directory Federation Services (AD FS) server in our domain and make it available to the Internet using a public FQDN and an SSL certificate released from a third-party certification authority. Office 365 must be enabled to synchronize with our company's Directory Services, using Active Directory Sync. Our Office 365 tenant must be federated. The last step is to configure Lync for a hybrid deployment. There's more... One of the requirements for a hybrid distribution of Lync is an on-premises deployment of Lync Server 2013 or Lync Server 2010. For Lync Server 2010, it is required to have the latest available updates installed, both on the Front Ends and on the Edge servers. It is also required to have the Lync Server 2013 administrative tools installed on a separate server. More details about supported configuration are available on the TechNet post Planning for Lync Server 2013 hybrid deployments at http://technet.microsoft.com/en-us/library/jj205403.aspx. DNS SRV records for hybrid deployments, _sipfederationtls._tcp.<domain> and _sip._tls.<domain>, should point to the on-premises deployment. The lyncdiscover. <domain> record will point to the FQDN of the on-premises reverse proxy server. The _sip._tls. <domain> SRV record will resolve to the public IP of the Access Edge service of Lync on-premises. Depending on the kind of service we are using for Lync, Exchange, and SharePoint, only a part of the features related to the integration with the additional services might be available. For example, skills search is available only if we are using Lync and SharePoint on-premises. 
The following TechNet post Supported Lync Server 2013 hybrid configurations at http://technet.microsoft.com/en-us/library/jj945633.aspx offers a matrix of features / service deployment combinations. See also Interesting information about Lync Hybrid configuration is presented in sessions available on Channel9 and coming from the Lync Conference 2014 (Lync Online Hybrid Deep Dive at http://channel9.msdn.com/Events/Lync-Conference/Lync-Conference-2014/ONLI302) and from TechEd North America 2014 (Microsoft Lync Online Hybrid Deep Dive at http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/OFC-B341#fbid=). Planning and configuring a hybrid deployment The planning phase for a hybrid deployment starts from a simple consideration: do we have an on-premises deployment of Lync Server? If the previously mentioned scenario is true, do we want to move users to the cloud or vice versa? Although the first situation is by far the most common, we have to also consider the case in which we have our first deployment in the cloud. How to do it... This step is all that is required for the scenario that starts from Lync Online. We have to completely deploy our Lync on-premises. Establish a remote PowerShell session with Office 365. Use the shared SIP address cmdlet Set-CsTenantFederationConfiguration -SharedSipAddressSpace $True to enable Office 365 to use a Shared Session Initiation Protocol (SIP) address space with our on-premises deployment. To verify this, we can use the Get-CsTenantFederationConfiguration command. The SharedSipAddressSpace value should be set to True. All the following steps are for the scenario that starts from the on-premises Lync deployment. After we have subscribed with a tenant, the first step is to add the public domain we use for our Lync users to Office 365 (so that we can split it on the two deployments). To access the Office 365 portal, select Domains. The next step is Specify a domain name and confirm ownership. We will be required to type a domain name. If our domain is hosted on some specific providers (such as GoDaddy), the verification process can be automated, or we have to proceed manually. The process requires to add one DNS record (TXT or MX), like the ones shown in the following screenshot: If we need to check our Office 365 and on-premises deployments before continuing with the hybrid deployment, we can use the Setup Assistant for Office 365. The tool is available inside the Office 365 portal, but we have to launch it from a domain-joined computer (the login must be performed with the domain administrative credentials). In the Setup menu, we have a Quick Start and an Extend Your Setup option (we have to select the second one). The process can continue installing an app or without software installation, as shown in the following screenshot: The app (which makes the assessment of the existing deployment easier) is installed by selecting Next in the previous screen (it requires at least Windows 7 with Service Pack 1, .NET Framework 3.5, and PowerShell 2.0). Synchronization with the on-premises Active Directory is required. This last step federates Lync Server 2013 with Lync Online to allow communication between our users. The first cmdlet to use is Set-CSAccessEdgeConfiguration -AllowOutsideUsers 1 -AllowFederatedUsers 1 -UseDnsSrvRouting -EnablePartnerDiscovery 1. Note that the -EnablePartnerDiscovery parameter is required. Setting it to 1 enables automatic discovery of federated partner domains. It is possible to set it to 0. 
The second required cmdlet is New-CsHostingProvider -Identity LyncOnline -ProxyFqdn "sipfed.online.lync.com" -Enabled $true -EnabledSharedAddressSpace $true -HostsOCSUsers $true -VerificationLevel UseSourceVerification -IsLocal $false -AutodiscoverUrl https://webdir.online.lync.com/Autodiscover/AutodiscoverService.svc/root. The result of the commands is shown in the following screenshot: If Lync Online is already defined, we have to use the Set-CsHostingProvider cmdlet, or we can remove it (Remove-CsHostingProvider -Identity LyncOnline) and then create it using the previously mentioned cmdlet. There's more... In the Lync hybrid scenario, users created in the on-premises directory are replicated to the cloud, while users generated in the cloud will not be replicated on-premises. Lync Online users are managed using the Office 365 portal, while the users on-premises are managed using the usual tools (Lync Control Panel and Lync Management Shell). Moving users to the cloud By moving users from Lync on-premises to the cloud, we will lose some of the parameters. The operation requires the Lync administrative tools and the PowerShell module for Lync Online to be installed on the same computer. If we install the module for Lync Online before the administrative tools for Lync 2013 Server, the OCSCore.msi file overwrites the LyncOnlineConnector.ps1 file, and New-CsOnlineSession will require a -TargetServer parameter. In this situation, we have to reinstall the Lync Online module (see the following post on the Microsoft support site at http://support.microsoft.com/kb/2955287). Getting ready Remember that to move the user to Lync Online, they must be enabled for both Lync Server on-premises and Lync Online (so we have to assign the user a license for Lync Online by using the Office 365 portal). Users with no assigned licenses will show the error Move-CsUser : HostedMigration fault: Error=(507), Description=(User must have an assigned license to use Lync Online). For more details, refer to the Microsoft support site at http://support.microsoft.com/kb/2829501. How to do it... Open a new Lync Management Shell session and launch the remote session on Office 365 with the cmdlet sequence we saw earlier. We have to add the -AllowClobber parameter so that the Lync Online module's cmdlets are able to overwrite the corresponding Lync Management Shell cmdlets:
$credential = Get-Credential
$session = New-CsOnlineSession -Credential $credential
Import-PSSession $session -AllowClobber
Open the Lync Admin Center (as we have seen in the dedicated section) by going to Service settings | Lync | Manage settings in the Lync Admin Center, and copy the first part of the URL, for example, https://admin0e.online.lync.com. Add the following string to the previous URL: /HostedMigration/hostedmigrationservice.svc (in our example, the result will be https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc). The following cmdlet will move users from Lync on-premises to Lync Online. The required parameters are the identity of the Lync user and the URL that we prepared in step 2. 
The user identity is fabrizio.volpe@absoluteuc.biz:
Move-CsUser -Identity fabrizio.volpe@absoluteuc.biz -Target sipfed.online.lync.com -Credential $creds -HostedMigrationOverrideUrl https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc
Usually, we are required to insert (again) the Office 365 administrative credentials, after which we will receive a warning about the fact that we are moving our user to a different version of the service, like the one in the following screenshot: See the There's more... section of this recipe for details about the user information that is migrated to Lync Online. We are able to quickly verify whether the user has moved to Lync Online by using the Get-CsUser | fl DisplayName,HostingProvider,RegistrarPool,SipAddress command. The on-premises values are HostingProvider equal to SRV: and RegistrarPool equal to madhatter.wonderland.lab (the name of the internal Lync Front End). The Lync Online values are HostingProvider : sipfed.online.lync.com, with RegistrarPool left empty, as shown in the following screenshot (the user Fabrizio is homed on-premises, while the user Fabrizio Volpe is homed in the cloud): There's more... If we plan to move more than one user, we have to add a selection and pipe it before the cmdlet we have already used, removing the -Identity parameter. For example, to move all users from an Organizational Unit (OU) (for example, the LyncUsers OU in the Wonderland.Lab domain) to Lync Online, we can use Get-CsUser -OU "OU=LyncUsers,DC=wonderland,DC=lab" | Move-CsUser -Target sipfed.online.lync.com -Credential $creds -HostedMigrationOverrideUrl https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc. We are also able to move users based on a parameter to match using the Get-CsUser -Filter cmdlet. As we mentioned earlier, not all the user information is migrated to Lync Online. The contact list, groups, and access control lists are migrated, while meetings, contents, and schedules are lost. We can use the Lync Meeting Update Tool to update the meeting links (which change when our user's home server changes) and automatically send updated meeting invitations to participants. There is a 64-bit version (http://www.microsoft.com/en-us/download/details.aspx?id=41656) and a 32-bit version (http://www.microsoft.com/en-us/download/details.aspx?id=41657) of the previously mentioned tool. Moving users back on-premises It is possible to move back users that have been moved from the on-premises Lync deployment to the cloud, and it is also possible to move on-premises those users that have been defined and enabled directly in Office 365. In the latter scenario, it is important to create the user also in the on-premises domain (Directory Service). How to do it… The Lync Online user must be created in the Active Directory (for example, I will define the BornOnCloud user that already exists in Office 365). The user must be enabled in the on-premises Lync deployment, for example, using the Lync Management Shell with the following cmdlet:
Enable-CsUser -Identity "BornOnCloud" -SipAddress "SIP:BornOnCloud@absoluteuc.biz" -HostingProviderProxyFqdn "sipfed.online.lync.com"
Sync the Directory Services. 
Now, we have to save our Office 365 administrative credentials in a $cred = Get-Credential variable and then move the user from Lync Online to the on-premises Front End using the Lync Management Shell (the -HostedMigrationOverrideURL parameter has the same value that we used in the previous section):
Move-CsUser -Identity BornOnCloud@absoluteuc.biz -Target madhatter.wonderland.lab -Credential $cred -HostedMigrationOverrideURL https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc
The Get-CsUser | fl DisplayName,HostingProvider,RegistrarPool,SipAddress cmdlet is used to verify whether the user has moved as expected. See also Guy Bachar has published an interesting post on his blog Moving Users back to Lync on-premises from Lync Online (http://guybachar.wordpress.com/2014/03/31/moving-users-back-to-lync-on-premises-from-lync-online/), where he shows how he solved some errors related to moving users by modifying the HostedMigrationOverrideUrl parameter. Debugging Lync Online issues Getting ready When moving from an on-premises solution to a cloud tenant, the first aspect we have to accept is that we will not have the same level of control over the deployment that we had before. The tools we will list are helpful in resolving issues related to Lync Online, but the level of insight into an issue that they give a system administrator is not the same as we have with tools such as Snooper or OCSLogger. Knowing this, the more users we move to the cloud, the more we will have to use the online instruments. How to do it… The Set up Lync Online external communications site on Microsoft Support (http://support.microsoft.com/common/survey.aspx?scid=sw;en;3592&showpage=1) is a guided walk-through that helps in setting up communication between our Lync Online users and external domains. The tool provides guidelines to assist in the setup of Lync Online for small to enterprise businesses. As you can see in the following screenshot, every single task is well explained: The Remote Connectivity Analyzer (RCA) (https://testconnectivity.microsoft.com/) is an outstanding tool to troubleshoot both Lync on-premises and Lync Online. The web page includes tests to analyze common errors and misconfigurations related to Microsoft services such as Exchange, Lync, and Office 365. To test different scenarios, it is necessary to use various network protocols and ports. If we are working on a firewall-protected network, using the RCA, we are also able to test services that are not directly available to us. For Lync Online, there are some tests that are especially interesting; in the Office 365 tab, the Office 365 General Tests section includes the Office 365 Lync Domain Name Server (DNS) Connectivity Test and the Office 365 Single Sign-On Test, as shown in the following screenshot: The Single Sign-On test is really useful in a scenario like ours, where AD FS is used. The test requires our domain username and password, both synced with the on-premises Directory Services. The steps include searching the FQDN of our AD FS server on an Internet DNS, verifying the certificate and connectivity, and then validating the token that contains the credentials. The Client tab offers the download of the Microsoft Connectivity Analyzer Tool and the Microsoft Lync Connectivity Analyzer Tool, which we will see in the following two dedicated steps: The Microsoft Connectivity Analyzer Tool makes many of the tests we see in the RCA available on our desktop. 
The list of prerequisites is provided in the article Microsoft Connectivity Analyzer Tool (http://technet.microsoft.com/library/jj851141(v=exchg.80).aspx), and includes Windows Vista/Windows 2008 or later versions of the operating system, .NET Framework 4.5, and an Internet browser, such as Internet Explorer, Chrome, or Firefox. For the Lync tests, a 64-bit operating system is mandatory, and the UCMA runtime 4.0 is also required (it is part of the Lync Server 2013 setup, and is also available for download at http://www.microsoft.com/en-us/download/details.aspx?id=34992). The tool proposes ways to solve different issues and then runs the same tests available on the RCA site. We are able to save the results in an HTML file. The Microsoft Lync Connectivity Analyzer Tool is dedicated to troubleshooting the clients for mobile devices (the Lync Windows Store app and the Lync apps). It tests all the required configurations, including the autodiscover and webticket services. The 32-bit version is available at http://www.microsoft.com/en-us/download/details.aspx?id=36536, while the 64-bit version can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=36535. .NET Framework 4.5 is required. The tool itself requires a few configuration parameters; we have to insert the user information that we usually add in the Lync app, and we have to use a couple of drop-down menus to describe the scenario we are testing (on-premises or Internet, and the kind of client we are going to test). The Show drop-down menu enables us to look not only at a summary of the test results but also at the detailed information. The detailed view includes all the information and requests sent and received during the test, with the FQDN included in the answer ticket from our services, and so on, as shown in the following screenshot: The Troubleshooting Lync Online sign-in post is a support page, available in two different versions (admins and users), and is a walk-through to help admins (or users) troubleshoot login issues. The admin version is available at http://support.microsoft.com/common/survey.aspx?scid=sw;en;3695&showpage=1, while the user version is available at http://support.microsoft.com/common/survey.aspx?scid=sw;en;3719&showpage=1. Based on our answers to the different scenario questions, the site will propose information or solution steps. The following screenshot is part of the resolution for the log-in issues of a company that has an enterprise subscription with a custom domain: The Office 365 portal includes some information to help us monitor our Lync subscription. In the Service Health menu, we have a list of all the incidents and service issues of the past few days. In the Reports menu, we have statistics about our Office 365 consumption, including Lync. In the following screenshot, we can see the previously mentioned pages: There's more... One interesting aspect of the Microsoft Lync Connectivity Analyzer Tool that we have seen is that it enables testing for on-premises or Office 365 accounts (both testing from inside our network and from the Internet). The previously mentioned capability makes it a great tool to troubleshoot the configuration for Lync on the mobile devices that we have deployed in our internal network. This setup is usually complex, including hair-pinning and split DNS, so the diagnostics are important to quickly find misconfigured services. 
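As a quick manual complement to these tools, we can also probe the Lync autodiscover endpoint directly from PowerShell. The following sketch assumes a hypothetical SIP domain, contoso.com (replace it with your own); a response, or even a redirect, tells us that the autodiscover service is answering (Resolve-DnsName requires Windows 8 / Windows Server 2012 or later):

# Check that the lyncdiscover DNS record resolves for the assumed contoso.com domain
Resolve-DnsName -Name "lyncdiscover.contoso.com"

# Probe the autodiscover URL; the returned content should describe the user and web ticket services
Invoke-WebRequest -Uri "https://lyncdiscover.contoso.com/" -UseBasicParsing |
    Select-Object StatusCode, Content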
See also The Troubleshooting Lync Sign-in Errors (Administrators) page on Office.com at http://office.microsoft.com/en-001/communicator-help/troubleshooting-lync-sign-in-errors-administrators-HA102759022.aspx contains a list of messages related to sign-in errors with a suggested solution or a link to additional external resources. Summary In this article, we have learned about managing Lync 2013 and Lync Online and using Lync Online Remote PowerShell and Lync Online cmdlets. Resources for Article: Further resources on this subject: Adding Dialogs [article] Innovation of Communication and Information Technologies [article] Choosing Lync 2013 Clients [article]


Connecting to a database

Packt
28 Nov 2014
20 min read
In this article by Christopher Ritchie, the author of WildFly Configuration, Deployment, and Administration, Second Edition, you will learn to configure enterprise services and components, such as transactions, connection pools, and Enterprise JavaBeans. (For more resources related to this topic, see here.) To allow your application to connect to a database, you will need to configure your server by adding a datasource. Upon server startup, each datasource is prepopulated with a pool of database connections. Applications acquire a database connection from the pool by doing a JNDI lookup and then calling getConnection(). Take a look at the following code:
Connection result = null;
try {
    Context initialContext = new InitialContext();
    DataSource datasource = (DataSource) initialContext.lookup("java:/MySqlDS");
    result = datasource.getConnection();
} catch (Exception ex) {
    log("Cannot get connection: " + ex);
}
After the connection has been used, you should always call connection.close() as soon as possible. This frees the connection and allows it to be returned to the connection pool—ready for other applications or processes to use. Releases prior to JBoss AS 7 required a datasource configuration file (ds.xml) to be deployed with the application. Ever since the release of JBoss AS 7, this approach has no longer been mandatory due to the modular nature of the application server. Out of the box, the application server ships with the H2 open source database engine (http://www.h2database.com), which, because of its small footprint and browser-based console, is ideal for testing purposes. However, a real-world application requires an industry-standard database, such as the Oracle database or MySQL. In the following section, we will show you how to configure a datasource for the MySQL database. Any database configuration requires a two-step procedure, which is as follows: Installing the JDBC driver Adding the datasource to your configuration Let's look at each step in detail. Installing the JDBC driver In WildFly's modular server architecture, you have a couple of ways to install your JDBC driver: you can install it either as a module or as a deployment unit. The first and recommended approach is to install the driver as a module; the second, installing it as a deployment unit, is faster but has various limitations, which we will cover later in this article. The first step to install a new module is to create the directory structure under the modules folder. The actual path for the module is JBOSS_HOME/modules/<module>/main. The main folder is where all the key module components are installed, namely, the driver and the module.xml file. So, next we need to add the following units: JBOSS_HOME/modules/com/mysql/main/mysql-connector-java-5.1.30-bin.jar JBOSS_HOME/modules/com/mysql/main/module.xml The MySQL JDBC driver used in this example, also known as Connector/J, can be downloaded for free from the MySQL site (http://dev.mysql.com/downloads/connector/j/). At the time of writing, the latest version is 5.1.30. The last thing to do is to create the module.xml file. This file contains the actual module definition. It is important to make sure that the module name (com.mysql) corresponds to the module attribute defined in your datasource. 
You must also state the path to the JDBC driver resource and finally add the module dependencies, as shown in the following code:
<module name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.30-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
Here is a diagram showing the final directory structure of this new module: You will notice that there is a directory structure already within the modules folder. All the system libraries are housed inside the system/layers/base directory. Your custom modules should be placed directly inside the modules folder and not with the system modules. Adding a local datasource Once the JDBC driver is installed, you need to configure the datasource within the application server's configuration file. In WildFly, you can configure two kinds of datasources, local datasources and xa-datasources, which are distinguishable by the element name in the configuration file. A local datasource does not support two-phase commits using a java.sql.Driver. On the other hand, an xa-datasource supports two-phase commits using a javax.sql.XADataSource. A datasource definition can be added either directly within the server configuration file or by using the management interfaces. The management interfaces are the recommended way, as they will accurately update the configuration for you, which means that you do not need to worry about getting the correct syntax. Here, we are going to add the datasource by modifying the server configuration file directly. Although this is not the recommended approach, it will allow you to get used to the syntax and layout of the file; the same result can be achieved using the management tools. Here is a sample MySQL datasource configuration that you can copy into your datasources subsystem section within the standalone.xml configuration file:
<datasources>
  <datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool" enabled="true" jta="true" use-java-context="true" use-ccm="true">
    <connection-url>jdbc:mysql://localhost:3306/MyDB</connection-url>
    <driver>mysql</driver>
    <pool />
    <security>
      <user-name>jboss</user-name>
      <password>jboss</password>
    </security>
    <statement/>
    <timeout>
      <idle-timeout-minutes>0</idle-timeout-minutes>
      <query-timeout>600</query-timeout>
    </timeout>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql"/>
  </drivers>
</datasources>
As you can see, the configuration file uses the same XML schema definition from the earlier -*.ds.xml file, so it will not be difficult to migrate to WildFly from previous releases. In WildFly, it's mandatory that the datasource is bound into the java:/ or java:jboss/ JNDI namespace. Let's take a look at the various elements of this file:
connection-url: This element is used to define the connection path to the database.
driver: This element is used to define the JDBC driver class.
pool: This element is used to define the JDBC connection pool properties. In this case, we are going to leave the default values.
security: This element is used to configure the connection credentials.
statement: This element is added just as a placeholder for statement-caching options.
timeout: This element is optional and contains a set of other elements, such as query-timeout, which is a static configuration of the maximum seconds before a query times out. The included idle-timeout-minutes element indicates the maximum time a connection may be idle before being closed; setting it to 0 disables it, and the default is 15 minutes.
Configuring the connection pool One key aspect of the datasource configuration is the pool element. You can use connection pooling without modifying any of the existing WildFly configurations, as, without modification, WildFly will choose to use default settings. If you want to customize the pooling configuration, for example, change the pool size or change the types of connections that are pooled, you will need to learn how to modify the configuration file. Here's an example of pool configuration, which can be added to your datasource configuration:
<pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>10</max-pool-size>
    <prefill>true</prefill>
    <use-strict-min>true</use-strict-min>
    <flush-strategy>FailingConnectionOnly</flush-strategy>
</pool>
The attributes included in the pool configuration are actually borrowed from earlier releases, so we include them here for your reference:
initial-pool-size: The initial number of connections a pool should hold (default is 0 (zero)).
min-pool-size: The minimum number of connections in the pool (default is 0 (zero)).
max-pool-size: The maximum number of connections in the pool (default is 20).
prefill: This attempts to prefill the connection pool to the minimum number of connections.
use-strict-min: This determines whether idle connections below min-pool-size should be closed.
allow-multiple-users: This determines whether multiple users can access the datasource through the getConnection method. This has been changed slightly in WildFly: the line <allow-multiple-users>true</allow-multiple-users> is required, whereas in JBoss AS 7 the empty element <allow-multiple-users/> was used.
capacity: This specifies the capacity policies for the pool—either incrementer or decrementer.
connection-listener: Here, you can specify an org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener that allows you to listen for connection callbacks, such as activation and passivation.
flush-strategy: This specifies how the pool should be flushed in the event of an error (default is FailingConnectionsOnly).
Configuring the statement cache For each connection within a connection pool, the WildFly server is able to create a statement cache. When a prepared statement or callable statement is used, WildFly will cache the statement so that it can be reused. In order to activate the statement cache, you have to specify a value greater than 0 within the prepared-statement-cache-size element. Take a look at the following code:
<statement>
    <track-statements>true</track-statements>
    <prepared-statement-cache-size>10</prepared-statement-cache-size>
    <share-prepared-statements/>
</statement>
Notice that we have also set track-statements to true. This will enable automatic closing of statements and ResultSets. This is important if you want to use prepared statement caching and/or want to prevent cursor leaks. The last element, share-prepared-statements, can only be used when the prepared statement cache is enabled. This property determines whether two requests in the same transaction should return the same statement (default is false). 
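To make the effect of the statement cache concrete, here is a minimal sketch of application code that borrows a pooled connection and issues a prepared statement; the SQL and the customer table are made up for illustration. With the cache enabled, repeated calls can reuse the underlying prepared statement on the same pooled connection instead of preparing it again:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CustomerLookup {
    public String findName(int id) throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("java:/MySqlDS");
        // try-with-resources closes the statement and returns the connection to the pool
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}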
Adding an xa-datasource Adding an xa-datasource requires some modification to the datasource configuration. The xa-datasource is configured within its own xa-datasource element, inside the datasources element. You will also need to specify the xa-datasource class within the driver element. In the following code, we will add a configuration for our MySQL JDBC driver, which will be used to set up an xa-datasource:
<datasources>
  <xa-datasource jndi-name="java:/XAMySqlDS" pool-name="MySqlDS_Pool" enabled="true" use-java-context="true" use-ccm="true">
    <xa-datasource-property name="URL">jdbc:mysql://localhost:3306/MyDB</xa-datasource-property>
    <xa-datasource-property name="User">jboss</xa-datasource-property>
    <xa-datasource-property name="Password">jboss</xa-datasource-property>
    <driver>mysql-xa</driver>
  </xa-datasource>
  <drivers>
    <driver name="mysql-xa" module="com.mysql">
      <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
    </driver>
  </drivers>
</datasources>
Datasource versus xa-datasource You should use an xa-datasource in cases where a single transaction spans multiple datasources, for example, if a method consumes a Java Message Service (JMS) message and updates a Java Persistence API (JPA) entity. Installing the driver as a deployment unit In the WildFly application server, every library is a module. Thus, simply deploying the JDBC driver to the application server will trigger its installation. If the JDBC driver consists of more than a single JAR file, you will not be able to install the driver as a deployment unit. In this case, you will have to install the driver as a core module. So, to install the database driver as a deployment unit, simply copy the mysql-connector-java-5.1.30-bin.jar driver into the JBOSS_HOME/standalone/deployments folder of your installation, as shown in the following image: Once you have deployed your JDBC driver, you still need to add the datasource to your server configuration file. The simplest way to do this is to paste the following datasource definition into the configuration file, as follows:
<datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool" enabled="true" jta="true" use-java-context="true" use-ccm="true">
  <connection-url>jdbc:mysql://localhost:3306/MyDB</connection-url>
  <driver>mysql-connector-java-5.1.30-bin.jar</driver>
  <pool />
  <security>
    <user-name>jboss</user-name>
    <password>jboss</password>
  </security>
</datasource>
Alternatively, you can use the command-line interface (CLI) or the web administration console to achieve the same result. What about domain deployment? In this article, we are discussing the configuration of standalone servers. The services can also be configured in the domain servers. Domain servers, however, don't have a specified folder scanned for deployment. Rather, the management interfaces are used to inject resources into the domain. Choosing the right driver deployment strategy At this point, you might wonder about a best practice for deploying the JDBC driver. Installing the driver as a deployment unit is a handy shortcut; however, it can limit its usage. Firstly, it requires a JDBC 4-compliant driver. Deploying a non-JDBC-4-compliant driver is possible, but it requires a simple patching procedure. To do this, create a META-INF/services structure containing the java.sql.Driver file. The content of the file will be the driver name. 
For example, let's suppose you have to patch a MySQL driver—the content will be com.mysql.jdbc.Driver. Once you have created your structure, you can package your JDBC driver with any zipping utility or the .jar command, jar -uf <your-jdbc-driver.jar> META-INF/services/java.sql.Driver. The most current JDBC drivers are compliant with JDBC 4 although, curiously, not all are recognized as such by the application server. The following list describes some of the most used drivers and their JDBC compliance:
MySQL, mysql-connector-java-5.1.30-bin.jar: JDBC 4 compliant (though not recognized as compliant by WildFly); contains java.sql.Driver.
PostgreSQL, postgresql-9.3-1101.jdbc4.jar: JDBC 4 compliant (though not recognized as compliant by WildFly); contains java.sql.Driver.
Oracle, ojdbc6.jar/ojdbc5.jar: JDBC 4 compliant; contains java.sql.Driver.
Oracle, ojdbc4.jar: not JDBC 4 compliant; does not contain java.sql.Driver.
As you can see, the most notable exception to the list of drivers is the older Oracle ojdbc4.jar, which is not compliant with JDBC 4 and does not contain the driver information in META-INF/services/java.sql.Driver. The second issue with driver deployment is related to the specific case of xa-datasources. Installing the driver as a deployment means that the application server by itself cannot deduce the information about the xa-datasource class used in the driver. Since this information is not contained inside META-INF/services, you are forced to specify information about the xa-datasource class for each xa-datasource you are going to create. When you install a driver as a module, the xa-datasource class information can be shared for all the installed datasources.
<driver name="mysql-xa" module="com.mysql">
  <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
</driver>
So, if you are not too limited by these issues, installing the driver as a deployment is a handy shortcut that can be used in your development environment. For a production environment, it is recommended that you install the driver as a static module. Configuring a datasource programmatically After installing your driver, you may want to limit the amount of application configuration in the server file. This can be done by configuring your datasource programmatically. This option requires zero modification to your configuration file, which means greater application portability. The support to configure a datasource programmatically is one of the cool features of Java EE that can be achieved by using the @DataSourceDefinition annotation, as follows:
@DataSourceDefinition(name = "java:/OracleDS",
    className = "oracle.jdbc.OracleDriver",
    portNumber = 1521,
    serverName = "192.168.1.1",
    databaseName = "OracleSID",
    user = "scott",
    password = "tiger",
    properties = {"createDatabase=create"})
@Singleton
public class DataSourceEJB {
    @Resource(lookup = "java:/OracleDS")
    private DataSource ds;
}
In this example, we defined a datasource for an Oracle database. It's important to note that, when configuring a datasource programmatically, you will actually bypass JCA, which proxies requests between the client and the connection pool. The obvious advantage of this approach is that you can move your application from one application server to another without the need for reconfiguring its datasources. On the other hand, by modifying the datasource within the configuration file, you will be able to utilize the full benefits of the application server, many of which are required for enterprise applications. 
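As a rough sketch of the CLI alternative mentioned earlier, the same MySQL datasource used throughout this section could be registered from a jboss-cli session connected to the running server; the command below assumes the mysql driver module configured previously, and the attribute names follow the standard datasources subsystem:

# from a jboss-cli session (./jboss-cli.sh --connect)
data-source add --name=MySqlDS --jndi-name=java:/MySqlDS --driver-name=mysql --connection-url=jdbc:mysql://localhost:3306/MyDB --user-name=jboss --password=jboss

# depending on the version, the datasource may need to be enabled explicitly
/subsystem=datasources/data-source=MySqlDS:enable

# verify the result
/subsystem=datasources/data-source=MySqlDS:read-resource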
Configuring the Enterprise JavaBeans container The Enterprise JavaBeans (EJB) container is a fundamental part of the Java Enterprise architecture. The EJB container provides the environment used to host and manage the EJB components deployed in the container. The container is responsible for providing a standard set of services, including caching, concurrency, persistence, security, transaction management, and locking services. The container also provides distributed access and lookup functions for hosted components, and it intercepts all method invocations on hosted components to enforce declarative security and transaction contexts. Take a look at the following figure: As depicted in this image, you will be able to deploy the full set of EJB components within WildFly: Stateless session bean (SLSB): SLSBs are objects whose instances have no conversational state. This means that all bean instances are equivalent when they are not servicing a client. Stateful session bean (SFSB): SFSBs support conversational services with tightly coupled clients. A stateful session bean accomplishes a task for a particular client. It maintains the state for the duration of a client session. After session completion, the state is not retained. Message-driven bean (MDB): MDBs are a kind of enterprise beans that are able to asynchronously process messages sent by any JMS producer. Singleton EJB: This is essentially similar to a stateless session bean; however, it uses a single instance to serve the client requests. Thus, you are guaranteed to use the same instance across invocations. Singletons can use a set of events with a richer life cycle and a stricter locking policy to control concurrent access to the instance. No-interface EJB: This is just another view of the standard session bean, except that local clients do not require a separate interface, that is, all public methods of the bean class are automatically exposed to the caller. Interfaces should only be used in EJB 3.x if you have multiple implementations. Asynchronous EJB: These are able to process client requests asynchronously just like MDBs, except that they expose a typed interface and follow a more complex approach to processing client requests, which are composed of: The fire-and-forget asynchronous void methods, which are invoked by the client The retrieve-result-later asynchronous methods having a Future<?> return type EJB components that don't keep conversational states (SLSB and MDB) can be optionally configured to emit timed notifications. Configuring the EJB components Now that we have briefly outlined the basic types of EJB, we will look at the specific details of the application server configuration. This comprises the following components: The SLSB configuration The SFSB configuration The MDB configuration The Timer service configuration Let's see them all in detail. Configuring the stateless session beans EJBs are configured within the ejb3.2.0 subsystem. By default, no stateless session bean instances exist in WildFly at startup time. As individual beans are invoked, the EJB container initializes new SLSB instances. These instances are then kept in a pool that will be used to service future EJB method calls. The EJB remains active for the duration of the client's method call. After the method call is complete, the EJB instance is returned to the pool. 
Because the EJB container unbinds stateless session beans from clients after each method call, the actual bean class instance that a client uses can be different from invocation to invocation. Have a look at the following diagram: If all instances of an EJB class are active and the pool's maximum pool size has been reached, new clients requesting the EJB class will be blocked until an active EJB completes a method call. Depending on how you have configured your stateless pool, an acquisition timeout can be triggered if you are not able to acquire an instance from the pool within a maximum time. You can either configure your session pool through your main configuration file or programmatically. Let's look at both approaches, starting with the main configuration file. In order to configure your pool, you can operate on two parameters: the maximum size of the pool (max-pool-size) and the instance acquisition timeout (instance-acquisition-timeout). Let's see an example: <subsystem > <session-bean> <stateless>    <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> ... </session-bean> ... <pools> <bean-instance-pools>    <strict-max-pool name="slsb-strict-max-pool" max-pool-size=      "25" instance-acquisition-timeout="5" instance-acquisition-      timeout-unit="MINUTES"/> </bean-instance-pools> </pools> ... </subsystem> In this example, we have configured the SLSB pool with a strict upper limit of 25 elements. The strict maximum pool is the only available pool instance implementation; it allows a fixed number of concurrent requests to run at one time. If there are more requests running than the pool's strict maximum size, those requests will get blocked until an instance becomes available. Within the pool configuration, we have also set an instance-acquisition-timeout value of 5 minutes, which will come into play if your requests are larger than the pool size. You can configure as many pools as you like. The pool used by the EJB container is indicated by the attribute pool-name on the bean-instance-pool-ref element. For example, here we have added one more pool configuration, largepool, and set it as the EJB container's pool implementation. Have a look at the following code: <subsystem > <session-bean>    <stateless>      <bean-instance-pool-ref pool-name="large-pool"/>    </stateless> </session-bean> <pools>    <bean-instance-pools>      <strict-max-pool name="large-pool" max-pool-size="100"        instance-acquisition-timeout="5"        instance-acquisition-timeout-unit="MINUTES"/>    <strict-max-pool name="slsb-strict-max-pool"      max-pool-size="25" instance-acquisition-timeout="5"      instance-acquisition-timeout-unit="MINUTES"/>    </bean-instance-pools> </pools> </subsystem> Using the CLI to configure the stateless pool size We have detailed the steps necessary to configure the SLSB pool size through the main configuration file. However, the suggested best practice is to use CLI to alter the server model. 
Here's how you can add a new pool named large-pool to your EJB 3 subsystem:
/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:add(max-pool-size=100)
Now, you can set this pool as the default to be used by the EJB container, as follows:
/subsystem=ejb3:write-attribute(name=default-slsb-instance-pool,value=large-pool)
Finally, you can, at any time, change the pool size property by operating on the max-pool-size attribute, as follows:
/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:write-attribute(name="max-pool-size",value=50)
Summary In this article, we continued the analysis of the application server configuration by looking at Java's enterprise services. We first learned how to configure datasources, which can be used to add database connectivity to your applications. Installing a datasource in WildFly 8 requires two simple steps: installing the JDBC driver and adding the datasource into the server configuration. We then looked at the Enterprise JavaBeans subsystem, which allows you to configure and tune your EJB container. We looked at the basic EJB component configuration of SLSB. Resources for Article: Further resources on this subject: Dart with JavaScript [article] Creating Java EE Applications [article] OpenShift for Java Developers [article]


How to Build a Recommender Server with Mahout and Solr

Pat Ferrel
24 Sep 2014
8 min read
In the last post, Mahout on Spark: Recommenders, we talked about creating a co-occurrence indicator matrix for a recommender using Mahout. The goals for a recommender are many, but first they must be fast and must make "good" personalized recommendations. We'll start with the basics and improve on the "good" part as we go. As we saw last time, co-occurrence or item-based recommenders are described by: rp = hp[AtA] Calculating [AtA] We needed some more interesting data first so I captured video preferences by mining the web. The target demo would be a Guide to Online Video. Input was collected for many users by simply logging their preferences to CSV text files:
906507914,dislike,mars_needs_moms
906507914,dislike,mars_needs_moms
906507914,like,moneyball
906507914,like,the_hunger_games
906535685,like,wolf_creek
906535685,like,inside
906535685,like,le_passe
576805712,like,persepolis
576805712,dislike,barbarella
576805712,like,gentlemans_agreement
576805712,like,europa_report
576805712,like,samsara
596511523,dislike,a_hijacking
596511523,like,the_kings_speech
…
The technique for actually using multiple actions hasn't been described yet so for now we'll use the dislikes in the application to filter out recommendations and use only the likes to calculate recommendations. That means we need to use only the "like" preferences. The Mahout 1.0 version of spark-itemsimilarity can read these files directly and filter out all but the lines with "like" in them:
# --filter1 like : filter out all but likes
# -fc 1 -ic 2    : indicate the columns holding the filter and the items
mahout spark-itemsimilarity -i root-input-dir -o output-dir --filter1 like -fc 1 -ic 2 --omitStrength
This will give us output like this:
the_hunger_games_catching_fire<tab>holiday_inn 2_guns superman_man_of_steel five_card_stud district_13 blue_jasmine this_is_the_end riddick ...
law_abiding_citizen<tab>stone ong_bak_2 the_grey american centurion edge_of_darkness orphan hausu buried ...
munich<tab>the_devil_wears_prada brothers marie_antoinette singer brothers_grimm apocalypto ghost_dog_the_way_of_the_samurai ...
private_parts<tab>puccini_for_beginners finding_forrester anger_management small_soldiers ice_age_2 karate_kid magicians ...
world-war-z<tab>the_wolverine the_hunger_games_catching_fire ghost_rider_spirit_of_vengeance holiday_inn the_hangover_part_iii ...
This is a tab delimited file with a video itemID followed by a string containing a list of similar videos that is space delimited. The similar video list may contain some surprises because here "similarity" means "liked by similar people". It doesn't mean the videos were similar in content or genre, so don't worry if they look odd. We'll use another technique to make "on subject" recommendations later. Anyone familiar with the Mahout first generation recommender will notice right away that we are using IDs that have meaning to the application, whereas before Mahout required its own integer IDs. A fast, scalable similarity engine In Mahout's first generation recommenders, all recs were calculated for all users. This meant that new users would have to wait for a long running batch job to happen before they saw recommendations. In the Guide demo app, we want to make good recommendations to new users and use new preferences in real time. We have already calculated [AtA] indicating item similarities so we need a real-time method for the final part of the equation rp = hp[AtA]. Capturing hp is the first task and in the demo we log all actions to a database in real time. This may have scaling issues but is fine for a demo. 
Now we will make use of the "multiply as similarity" idea we introduced in the first post. Multiplying hp[AtA] can be done with a fast similarity engine—otherwise known as a search engine. At their core, search engines are primarily similarity engines that index textual data (AKA a matrix of token vectors) and take text as the query (a token vector). Another way to look at this is that search engines find by example—they are optimized to find a collection of items by similarity to the query. We will use the search engine to find the most similar indicator vectors in [AtA] to our query hp, thereby producing rp. Using this method, rp will be the list of items returned from the search—row IDs in [AtA]. Spark-itemsimilarity is designed to create output that can be directly indexed by search engines. In the Guide demo we chose to create a catalog of items in a database and to use Solr to index columns in the database. Both Solr and Elasticsearch have highly scalable fast engines that can perform searches on database columns or a variety of text formats so you don't have to use a database to store the indicators. We loaded the indicators along with some metadata about the items into the database like this:
itemID   foreign-key        genres          indicators
123      world-war-z        sci-fi action   the_wolverine …
456      captain_phillips   action drama    pi when_com… …
So, the foreign-key is our video item ID from the indicator output and the indicator is the space-delimited list of similar video item IDs. We must now set the search engine to index the indicators. This integration is usually pretty simple and depends on what database you use or if you are storing the entire catalog in files (leaving the database out of the loop). Once you've triggered the indexing of your indicators, we are ready for the query. The query will be a preference history vector consisting of the same tokens/video item IDs you see in the indicators. For a known user these should be logged and available, perhaps in the database, but for a new user we'll have to find a way to encourage preference feedback. New users The demo site asks a new user to create an account and run through a trainer that collects important preferences from the user. We can probably leave the details of how to ask for "important" preferences for later. Suffice to say, we clustered items and took popular ones from each cluster so that the users were more likely to have seen them. From this we see that the user liked: argo django iron_man_3 pi looper … Whether you are responding to a new user or just accounting for the most recent preferences of returning users, recentness is very important. Using the previous IDs as a query on the indicator field of the database returns recommendations, even though the new user's data was not used to train the recommender. Here's what we get: The first line shows the result of the search engine query for the new user. The trainer on the demo site has several pages of examples to rate and the more you rate the better the recommendations become, as one would expect but these look pretty good given only 9 ratings. I can make a value judgment because they were rated by me. In a small sampling of 20 people using the site and after having them complete the entire 20 pages of training examples, we asked them to tell us how many of the recommendations on the first line were things they liked or would like to see. We got 53-90% right. Only a few people participated and your data will vary greatly but this was at least some validation. 
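To make the query form concrete, here is a rough sketch of what such a request could look like against a Solr index; the core name (videos), field names, and URL are illustrative assumptions based on the table above, not the demo's actual configuration. The second request anticipates the genre-boosted queries discussed next:

# plain history query: the most similar indicator vectors win
curl 'http://localhost:8983/solr/videos/select' \
  --data-urlencode 'q=indicators:(argo django iron_man_3 pi looper)' \
  --data-urlencode 'rows=10' \
  --data-urlencode 'fl=foreign-key,genres'

# same history, with results skewed toward the drama genre via a query-time boost
curl 'http://localhost:8983/solr/videos/select' \
  --data-urlencode 'q=indicators:(argo django iron_man_3 pi looper) OR genres:drama^5' \
  --data-urlencode 'rows=10' \
  --data-urlencode 'fl=foreign-key,genres'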
The second line of recommendations and several more below it are calculated using a genre and this begins to show the power of the search engine method. In the trainer I picked movies where the number 1 genre was “drama”. If you have the search engine index both indicators as well as genres you can combine indicator and genre preferences in the query. To produce line 1 the query was: Query: indicator field: “argo django iron_man_3 pi looper …” To produce line 2 the query was: Query: indicator field: “argo django iron_man_3 pi looper …” genre field: “drama”; boost: 5 The boost is used to skew results towards a field. In practice this will give you mostly matching genres but is not the same as a filter, which can also be used if you want a guarantee that the results will be from “drama”. Conclusion Combining a search engine with Mahout created a recommender that is extremely fast and scalable but also seamlessly blends results using collaborative filtering data and metadata. Using metadata boosting in the query allows us to skew results in a direction that makes sense.  Using multiple fields in a single query gives us even more power than this example shows. It allows us to mix in different actions. Remember the “dislike” action that we discarded? One simple and reasonable way to use that is to filter results by things the user disliked, and the demo site does just that. But we can go even further; we can use dislikes in a cross-action recommender. Certain of the user’s dislikes might even predict what they will like, but that requires us to go back to the original equation so we’ll leave it for another post.  About the author Pat is a serial entrepreneur, consultant, and Apache Mahout committer working on the next generation of Spark-based recommenders. He lives in Seattle and can be contacted through his site at https://finderbots.com or @occam on Twitter.
article-image-getting-ready-your-first-biztalk-services-solution
Packt
24 Mar 2014
5 min read
Save for later

Getting Ready for Your First BizTalk Services Solution

Packt
24 Mar 2014
5 min read
(For more resources related to this topic, see here.)

Deployment considerations

You will need to consider the BizTalk Services edition required for your production use as well as the environment for test and/or staging purposes. This depends on decision points such as:

Expected message load on the target system
Capabilities that are required now versus 6 months down the line
IT requirements around compliance, security, and DR

The list of capabilities across different editions is outlined in the Windows Azure documentation page at http://www.windowsazure.com/en-us/documentation/articles/biztalk-editions-feature-chart.

Note on BizTalk Services editions and signup

BizTalk Services is currently available in four editions: Developer, Basic, Standard, and Premium, each with varying capabilities and prices. You can sign up for BizTalk Services from the Azure portal. The Developer SKU contains all the features needed to try and evaluate the service without worrying about production readiness. We use the Developer edition for all examples.

Provisioning BizTalk Services

A BizTalk Services deployment can be created using the Windows Azure Management Portal or using PowerShell. We will use the former in this example.

Certificates and ACS

Certificates are required for communication using SSL, and the Access Control Service (ACS) is used to secure the endpoints of the BizTalk Services deployment. First, you need to know whether you need a custom domain for the BizTalk Services deployment. In the case of test or developer deployments, the answer is mostly no. A BizTalk Services deployment will autogenerate a self-signed certificate with an expiry of close to 5 years. The ACS required for the deployment will also be autocreated. Certificate and Access Control Service details are required for sending messages to bridges and agreements and can be retrieved from the Dashboard page post deployment.

Storage requirements

You need to create an Azure SQL database for tracking data. It is recommended to use the Business edition with the appropriate size; for test purposes, you can start with the 1 GB Web edition. You also need to pass the storage account credentials to archive message data. It is recommended that you create a new Azure SQL database and storage account for use with BizTalk Services only.

The BizTalk Services create wizard

Now that we have the security and storage details figured out, let us create a BizTalk Services deployment from the Azure Management Portal:

From the Management Portal, navigate to New | App Services | BizTalk Service | Custom Create.
Enter a unique name for the deployment, keeping the following values: EDITION: Developer, REGION: East US, TRACKING DATABASE: Create a new SQL Database instance.
In the next page, retain the default database name, choose the SQL server, and enter the server login name and password. There can be up to six SQL Server instances per Azure subscription.
In the next page, choose the storage account for archiving and monitoring information.
Deploy the solution.

The BizTalk Services create wizard from the Windows Azure Management Portal

The deployment takes roughly 30 minutes to complete. After completion, you will see the status of the deployment as Active. Navigate to the deployment dashboard page, click on CONNECTION INFORMATION, note down the ACS credentials, and download the deployment SSL certificate. The SSL certificate needs to be installed on the client machine where the Visual Studio SDK will be used.
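As a minimal sketch of that last step, assuming the downloaded certificate was saved to a hypothetical path such as C:\certs\mybiztalkdeployment.cer, it can be imported into the trusted root store from an elevated PowerShell prompt (on older systems, certutil -addstore Root <file> achieves the same):

# Import the deployment's self-signed SSL certificate into the Trusted Root store.
# The file path is a hypothetical example; use the certificate downloaded from your dashboard.
Import-Certificate -FilePath "C:\certs\mybiztalkdeployment.cer" -CertStoreLocation Cert:\LocalMachine\Root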
BizTalk portal registration

We have one step remaining, and that is to configure the BizTalk Services Management Portal to view agreements, bridges, and their tracking data. For this, perform the following steps:

Click on Manage from the Dashboard screen. This will launch <mydeployment>.portal.biztalk.windows.net, where the BizTalk Portal is hosted.
Some of the fields, such as the user's Live ID and the deployment name, will be auto-populated.
Enter the ACS Issuer name and ACS Issuer secret noted in the previous step and click on Register.

BizTalk Services Portal registration

Creating your first BizTalk Services solution

Let us put things into action and use the deployment created earlier to address a real-world multichannel sales scenario.

Scenario description

A trader, Northwind, manages an e-commerce website for online customer purchases. They also receive bulk orders from event firms and corporates for their goods. Northwind needs to develop a solution to validate an order and route the request to the right inventory location for delivery of the goods. The incoming request is an XML file with the order details. The request from event firms and corporates arrives over FTP, while e-commerce website requests arrive over HTTP. After the order is processed, if the customer location is inside the US, the request is forwarded to a relay service at a US address. For all other locations, the order needs to go to the central site and is sent to a Service Bus Queue at IntlAddress with the location as a promoted property.

Prerequisites

Before we start, we need to set up the client machine to connect to the deployment created earlier by performing the following steps:

Install the certificate downloaded from the deployment on your client box to the trusted root store. This authenticates any SSL traffic between your client and the integration solution on Azure.
Download and install the BizTalk Services SDK (https://go.microsoft.com/fwLink/?LinkID=313230) so the developer project experience lights up in Visual Studio 2012.
Download the BizTalk Services EAI tools' Message Sender and Message Receiver samples from the MSDN Code Gallery available at http://code.msdn.microsoft.com/windowsazure.

Realizing the solution

We will break down the implementation details into defining the incoming format and creating the bridge, including transports to process incoming messages and the creation of the target endpoints, relay, and Service Bus Queue.

Creating a BizTalk Services project

You can create a new BizTalk Services project in Visual Studio 2012.

BizTalk Services project in Visual Studio

Summary

This article discussed deployment considerations, provisioning BizTalk Services, BizTalk portal registration, and the prerequisites for creating your first BizTalk Services solution.

Resources for Article:

Further resources on this subject:
Using Azure BizTalk Features [Article]
BizTalk Application: Dynamics AX Message Outflow [Article]
Setting up a BizTalk Server Environment [Article]

article-image-achieving-site-resilience-mailbox-server
Packt
14 Feb 2014
14 min read
Save for later

Achieving site resilience for the Mailbox server

Packt
14 Feb 2014
14 min read
(For more resources related to this topic, see here.)

Now that we have high availability and failover at the namespace level between datacenters, we need to achieve the same for the Mailbox server role. This is accomplished in a similar way to Exchange 2010, by extending a DAG across two or more datacenters.

An organization's SLA covering failure and disaster recovery scenarios is what mostly influences a DAG's design. Every aspect needs to be considered: the number of DAGs to be deployed, the number of members in the DAG(s), the number of database copies, whether site resilience is to be used and whether it has to be used for all users or just a subset, whether the multiple-site solution will be active/active or active/passive, and so on. As to the latter, there are generally three main scenarios when considering a two-datacenter model.

Scenario 1 – active/passive

In an active/passive configuration, all users' databases are mounted in an active (primary) datacenter, with a passive (standby) datacenter used only in the event of a disaster affecting the active datacenter; this is shown in the following diagram:

There are several reasons why organizations might choose this model. Usually, it is because the passive datacenter is not as well equipped as the active one and, as such, is not capable of efficiently hosting all the services provided in the active datacenter. Sometimes, it is simply due to the fact that most or all users are closer to the active datacenter.

In this example, a copy of each database in the New York datacenter is replicated to the MBX4 server in the New Jersey datacenter so that it can be used in a disaster recovery scenario to provide messaging services to users. By also having database replicas in New York, we provide intrasite resilience. For example, if server MBX1 goes down, database DB1, which is currently mounted on server MBX1, will automatically fail over to server MBX2 or MBX3 without users even realizing or being affected.

In some failure scenarios where a server shutdown is initiated (for example, when an Uninterruptible Power Supply (UPS) issues a shutdown command to the server), Exchange tries to activate another copy of the database(s) that the server is hosting before the shutdown completes. In the case of a hard failure (for example, hardware failure), it is the other servers that detect the problem and automatically mount the affected database(s) on another server. In this scenario, we could lose up to two Mailbox servers in New York before having to perform a datacenter switchover.

As New York is considered the primary datacenter, the witness server is placed in New York. If, for some reason, the primary site is lost, the majority of the quorum voters is lost, so the entire DAG goes offline. At this stage, administrators have to perform a datacenter switchover, just like in Exchange 2010. However, because the recovery of a DAG is decoupled from the recovery of the namespace, it becomes much easier to perform the switchover, assuming a global namespace is being used.
All that the administrators need to do is run the following three cmdlets to get the DAG up and running again in New Jersey:

Mark the failed servers in the New York site as stopped:
Stop-DatabaseAvailabilityGroup <DAG_Name> -ActiveDirectorySite NewYork

On the remaining DAG members, stop the Cluster service:
Stop-Service clussvc

Activate the DAG members in New Jersey:
Restore-DatabaseAvailabilityGroup <DAG_Name> -ActiveDirectorySite NewJersey

It is true that placing the witness server in the passive datacenter (when a DAG has the same number of nodes in both datacenters) would allow Exchange to automatically fail over the DAG to the passive datacenter if the active site went down. However, there is a major disadvantage to doing this: if the passive site were to go down, even though it does not host any active databases, the entire DAG would go offline as the members in the active datacenter would not have quorum. This is why, in this scenario, it is recommended to always place the witness server in the active site.

In order to prevent databases in New Jersey from being automatically mounted by Exchange, the Set-MailboxServer cmdlet can be used together with the DatabaseCopyAutoActivationPolicy parameter to specify the type of automatic activation on selected Mailbox servers. This parameter can be configured to any of the following values:

Blocked: Prevents databases from being automatically activated on the selected server(s).
IntrasiteOnly: Only allows incoming mailbox database copies to be activated if the source server is in the same AD site, thus preventing cross-site activation or failover.
Unrestricted: Allows mailbox database copies on the selected server(s) to be activated independent of the location of the source database. This is the default value.

For the preceding example, we would run the following cmdlet in order to prevent database copies from being automatically activated on MBX4:

Set-MailboxServer MBX4 -DatabaseCopyAutoActivationPolicy Blocked

As New York and New Jersey are in different AD sites, setting the DatabaseCopyAutoActivationPolicy parameter to IntrasiteOnly would achieve the same result. In either case, when performing a database switchover, administrators first need to remove the restriction on the target server, as shown in the following code, otherwise they will not be able to mount any databases:

Set-MailboxServer MBX4 -DatabaseCopyAutoActivationPolicy Unrestricted

Scenario 2 – active/active

In this configuration, users' mailboxes are hosted across both datacenters. This is a very common scenario for deployments with a user population close to both locations. If Exchange fails for users in either of the datacenters, its services are activated in the other datacenter. Instead of simply having some active databases on the MBX4 server (refer to the preceding diagram), multiple DAG members are deployed in the New Jersey datacenter in order to provide protection against additional failures and additional capacity so it can support the entire user population in case the New York datacenter fails. By having more than one member in each datacenter, we are able to provide both intrasite and intersite resilience. Proper planning is crucial, especially capacity planning, so that each server is capable of hosting all workloads, including protocol request handling, processing, and data rendering from other servers, without impacting performance.
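Whichever layout is chosen, the cross-site database copies shown in these diagrams are created with the standard Add-MailboxDatabaseCopy cmdlet. A minimal sketch, assuming database DB1 from the diagrams and a New Jersey member named MBX4; the activation preference value is an assumption:

# Seed a New Jersey copy of DB1 on MBX4; lower ActivationPreference values are considered first for activation.
# Database, server, and preference values are illustrative assumptions.
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX4 -ActivationPreference 4

Progress of the initial seeding can then be checked with Get-MailboxDatabaseCopyStatus DB1\MBX4.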
In this example, the DAG is extended across both datacenters to provide site resilience for users on both sites. However, this particular scenario has a single point of failure: the network connection (most likely a WAN) between the datacenters. Remember that the majority of the voters must be active and able to talk to each other in order to maintain quorum. In the preceding diagram, the majority of voters are located in the New York datacenter, meaning that a WAN outage would cause a service failure for users whose mailboxes are mounted in New Jersey. This happens because, when the WAN connection fails, only the DAG members in the New York datacenter are able to maintain quorum. As such, servers in New Jersey will automatically bring their active database copies offline.

In order to overcome this single point of failure, multiple DAGs should be implemented, with each DAG having a majority of voters in a different datacenter, as shown in the following diagram:

In this example, we would configure DAG1 with its witness server in New York and DAG2 with its witness server in New Jersey. By doing so, if a WAN outage happens, replication will fail, but all users will still have messaging services as both DAGs continue to retain quorum and are able to provide services to their local user population.

Scenario 3 – third datacenter

The third scenario is the only one to provide automatic DAG failover between datacenters. It involves splitting a DAG across two datacenters, as in the previous scenarios, with the difference that the witness server is placed in a third location. This allows it to be arbitrated by members of the DAG in both datacenters independent of the network state between the two sites. As such, it is key to place the witness server in a location that is isolated from possible network failures that might affect either location containing the DAG members. This was fully supported in Exchange 2010, but the downside was that the solution itself would not fail over automatically, as the namespace would still need to be manually switched over. For this reason, it was not recommended. Going back to the advantage of the namespace in Exchange 2013 not needing to move with the DAG, this entire process now becomes automatic, as shown in the following diagram:

However, even though this scenario is now recommended, special consideration needs to be given to the case when the network link between the two datacenters hosting Exchange mailboxes fails. For example, even though the DAG will continue to be fully operational in both datacenters, CASs in New York will not be able to proxy requests to servers in New Jersey. As such, proper planning is necessary in order to minimize user impact in such an event. One way of doing this is to ensure that DNS servers that are local to users only resolve the namespace to the VIP on the same site. This would cancel the advantages of a single, global namespace, but as a workaround during an outage, it would reduce cross-site connections, ensuring users are not affected.

Windows Azure

Microsoft has been testing the possibility of placing a DAG's witness server in a Windows Azure IaaS (Infrastructure as a Service) environment. However, this infrastructure does not yet support the necessary network components to cater to this scenario.
At the time of writing this book, Azure supports two types of networks: a single site-to-site VPN (a network connecting two locations) and one or more point-to-site VPNs (a network connecting a single VPN client to a location). The issue is that, in order for a server to be placed in Azure and configured as a witness server, two site-to-site VPNs would be required, connecting each datacenter hosting Exchange servers to Azure, which is not possible today. As such, the use of Azure to place a DAG's witness server is not supported at this stage.

Using Datacenter Activation Coordination (DAC)

DAC is a DAG setting that is disabled by default. It is used to control the startup behavior of databases. Let us suppose the following scenario: a DAG split across two datacenters, like the one shown in the first diagram of the section Scenario 2 – active/active, and the primary datacenter suffers a complete power outage. All servers and the WAN link are down, so a decision is made to activate the standby datacenter. Usually in such scenarios, WAN connectivity is not instantly restored when the primary datacenter gets its power back. When this happens, members of the DAG in the primary datacenter are powered up but are not able to communicate with the other members in the standby datacenter that is currently active. As the primary datacenter contains most of the DAG quorum voters (or so it should), when the power is restored, the DAG members located in the primary datacenter have the majority, so they have quorum. The issue is that, with quorum, they can mount their databases (assuming everything required to do so is operational, such as storage), which causes a discrepancy with the actual active databases mounted in the standby datacenter. So now we have the exact same databases mounted simultaneously on separate servers. This is commonly known as split brain syndrome.

DAC was specifically created to prevent a split brain scenario. It does so through the Datacenter Activation Coordination Protocol (DACP). Once such a failure occurs, when the DAG is recovered, it will not automatically mount databases even if it has quorum. DACP is instead used by the Active Manager to evaluate the DAG's current state and decide whether databases should be mounted on each server. Active Manager stores a bit in memory (a 0 or a 1), so the DAG knows whether it can mount databases that are assigned as active on the local server. When DAC is enabled, every time the Active Manager is started, it sets the bit to 0, meaning it is not allowed to mount any databases. The server is then forced to establish communication with the other DAG members in order to get another server to tell it whether it is allowed to mount its local databases. The answer from the other members is simply their bit setting in the DAG. If a server replies that it has its bit set to 1, the server is permitted to mount databases and also sets its own bit to 1. On the other hand, when restoring the DAG in the preceding scenario, all the members of the DAG in the primary datacenter will have their DACP bit set to 0. As such, none of the servers powering up in the recovered primary datacenter are allowed to mount any databases, because none of them are able to communicate with a server that has a DACP bit set to 1.
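When bringing a failed datacenter back, it can also be useful to confirm what the DAG believes about its members and witness before touching any databases. A minimal sketch, assuming a DAG named DAG1:

# Show the runtime state of the DAG, including operational members and witness usage (the DAG name is an assumption).
Get-DatabaseAvailabilityGroup DAG1 -Status | Format-List Name,Servers,OperationalServers,WitnessServer,WitnessShareInUse,PrimaryActiveManager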
Besides dealing with split brain scenarios, enabling DAC mode allows administrators to use the built-in site resilience cmdlets to carry out datacenter switchovers:

Stop-DatabaseAvailabilityGroup
Restore-DatabaseAvailabilityGroup
Start-DatabaseAvailabilityGroup

When DAC mode is disabled, both Exchange and cluster management tools need to be used when performing datacenter switchovers.

Enabling the DAC mode

DAC can only be enabled or disabled using the Set-DatabaseAvailabilityGroup cmdlet together with the DatacenterActivationMode parameter. To enable DAC mode, this parameter is set to DagOnly; to disable it, it is set to Off:

Set-DatabaseAvailabilityGroup <DAG_Name> -DatacenterActivationMode DagOnly

Deciding where to place witness servers

When designing and configuring a DAG, it is important to consider the location of the witness server. As we have seen, this is very much dependent on the business requirements and what is available to the organization. As already discussed, Exchange 2013 allows scenarios that were not previously recommended, such as placing a witness server in a third location. The following summarizes the general recommendations for the placement of the witness server according to different deployment scenarios:

1 DAG, 1 datacenter: The datacenter where the DAG members are located.
1 DAG, 2 datacenters: The primary datacenter (refer to the diagrams in the sections Scenario 1 – active/passive and Scenario 2 – active/active), or a third location that is isolated from possible network failures that might affect either datacenter containing DAG members.
2+ DAGs, 1 datacenter: The datacenter where the DAG members are located. The same witness server can be used for multiple DAGs, and a DAG member can be used as a witness server for another DAG.
2+ DAGs, 2 datacenters: The datacenter where the DAG members are located (the same witness server can be used for multiple DAGs, and a DAG member can be used as a witness server for another DAG), or a third location that is isolated from possible network failures that might affect either datacenter containing DAG members.
1 or 2+ DAGs, 3+ datacenters: The datacenter where administrators want the majority of voters to be, or a third location that is isolated from possible network failures that might affect either datacenter containing DAG members.

Summary

Throughout this article, we explored all the great enhancements made to Exchange 2013 with regard to site resilience for the Mailbox server. As the recovery of a DAG is no longer tied to the recovery of the client access namespace, each of these components can easily be switched over between different datacenters without affecting the other. This also allows administrators to place a witness server in a third datacenter in order to provide automatic failover for a DAG in either of the datacenters it is split across, something not recommended in Exchange 2010.

Resources for Article:

Further resources on this subject:
Microsoft DAC 2012 [article]
Choosing Lync 2013 Clients [article]
Installing Microsoft Dynamics NAV [article]