Mastering Puppet

Chapter 1. Dealing with Load/Scale

A large deployment will have a large number of nodes. If you are growing your installation from scratch, you may have started with a single Puppet master running the built-in WEBrick server and moved up to a passenger installation. At a certain point in your deployment, a single Puppet master just won't cut it—the load will become too great. In my experience, this limit was around 600 nodes. Puppet agent runs begin to fail on the nodes, and catalogs fail to compile. There are two ways to deal with this problem: divide and conquer or conquer by dividing.

That is, we can either split up our Puppet master and divide the workload among several machines or we can make each of our nodes apply our code directly using Puppet agent (this is known as a masterless configuration). We'll examine each of these solutions separately.

Divide and conquer

When you start to think about dividing up your Puppet server, the main thing to realize is that many parts of Puppet are simply HTTP SSL transactions. If you treat those things as you would a web service, you can scale out to any size required using HTTP load balancing techniques.

The first step in splitting up the Puppet master is to configure the Puppet master to run under passenger. To ensure we all have the same infrastructure, we'll install a stock passenger configuration together and then start tweaking the configuration. We'll begin building on an x86_64 Enterprise 6 rpm-based Linux; the examples in this book were built using CentOS 6.5 and Springdale Linux 6.5 distributions. Once we have passenger running, we'll look at splitting up the workload.

Puppet with passenger

In our example installation, we will be using the name puppet.example.com for our Puppet server. Starting with a server installation of Enterprise Linux version 6, we install httpd and mod_ssl using the following code:

# yum install httpd mod_ssl
Installed:
  httpd-2.2.15-29.el6_4.x86_64
  mod_ssl-2.2.15-29.el6_4.x86_64

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Note

In each example, I will install the latest available version for Enterprise Linux 6.5 and display the version for the package requested (some packages may pull in dependencies—those versions are not shown).

To install mod_passenger, we pull in the Extra Packages for Enterprise Linux (EPEL) repository available at https://fedoraproject.org/wiki/EPEL. Install the EPEL repository by downloading the rpm file from http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html or use the following code:

# yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Installed:
  epel-release-6-8.noarch

Once EPEL is installed, we install mod_passenger from that repository using the following code:

# yum install mod_passenger
Installed:
  mod_passenger-3.0.21-5.el6.x86_64

Next, we will pull in Puppet from the puppetlabs repository available at http://docs.puppetlabs.com/guides/puppetlabs_package_repositories.html#for-red-hat-enterprise-linux-and-derivatives using the following code:

# yum install http://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm
Installed:
  puppetlabs-release-6-7.noarch

With the puppetlabs repository installed, we can then install Puppet using the following command:

# yum install puppet
Installed:
  puppet-3.3.2-1.el6.noarch

The Puppet rpm will create the /etc/puppet and /var/lib/puppet directories. In /etc/puppet, there will be a template puppet.conf; we begin by editing that file to set the name of our Puppet server (puppet.example.com) in the certname setting using the following code:

[main]
  logdir = /var/log/puppet
  rundir = /var/run/puppet
  vardir = /var/lib/puppet
  ssldir = $vardir/ssl
  certname = puppet.example.com
[agent]
  server = puppet.example.com
  classfile = $vardir/classes.txt
  localconfig = $vardir/localconfig

The other lines in this file are defaults. At this point, we expect puppet.example.com to resolve correctly via a DNS query; if you do not control DNS at your organization or cannot have this name resolved properly at this point, edit /etc/hosts and put in an entry for your host pointing to puppet.example.com. In all the examples, substitute your own domain name for example.com.

127.0.0.1   localhost localhost.localdomain puppet puppet.example.com

We now need to create certificates for our master; to ensure the Certificate Authority (CA) certificates are created, run puppet cert list using the following command:

# puppet cert list
Notice: Signed certificate request for ca

In your enterprise, you may have to answer requests from multiple DNS names, for example, puppet.example.com, puppet, and puppet.devel.example.com. To make sure our certificate is valid for all those DNS names, we will pass the dns-alt-names option to puppet certificate generate; we also need to specify that the certificates are to be signed by the local machine using the following command:

puppet# puppet certificate generate --ca-location local --dns-alt-names puppet,puppet.prod.example.com,puppet.dev.example.com puppet.example.com
Notice: puppet.example.com has a waiting certificate request
true

Now, to sign the certificate request, first verify the certificate list using the following commands:

puppet# puppet cert list
  "puppet.example.com" (SHA256) E5:F7:26:0A:6C:41:26:FA:80:02:E5:A6:A1:DB:F4:E0:9D:9C:5B:2D:A5:BF:EC:D1:FA:84:51:F4:8C:FD:9B:AF (alt names: "DNS:puppet", "DNS:puppet.dev.example.com", "DNS:puppet.example.com", "DNS:puppet.prod.example.com")
puppet# puppet cert sign puppet.example.com
Notice: Signed certificate request for puppet.example.com
Notice: Removing file Puppet::SSL::CertificateRequest puppet.example.com at '/var/lib/puppet/ssl/ca/requests/puppet.example.com.pem'

Tip

We specified the ssldir directive in our configuration. To determine where the certificates will be stored, use the following command line:

$ puppet config print ssldir

One last task is to copy the certificate that you just signed into the /var/lib/puppet/ssl/certs directory. You can use puppet certificate find to do this using the following command:

# puppet certificate find puppet.example.com --ca-location local
-----BEGIN CERTIFICATE-----
MIIF1TCCA72gAwIBAgIBAjANBgkqhkiG9w0BAQsFADAoMSYwJAYDVQQDDB1QdXBw
...
-----END CERTIFICATE-----

When you install Puppet from the puppetlabs repository, the rpm will include an example passenger Apache configuration file. Copy this file into your Apache configuration directory using the following command:

# cp /usr/share/puppet/ext/rack/example-passenger-vhost.conf /etc/httpd/conf.d/puppet.conf

We will now show the Apache config file and point out the important settings using the following configuration:

PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off

The preceding lines of code configure passenger for performance. PassengerHighPerformance turns off some compatibility that isn't required. The other options are tuning parameters. For more information on these settings, see http://www.modrails.com/documentation/Users%20guide%20Apache.html.

Next we will need to modify the file to ensure it points to the newly created certificates. We will need to edit the lines for SSLCertificateFile and SSLCertificateKeyFile. The other SSL file settings should point to the correct certificate, chain, and revocation list files as shown in the following code:

Listen 8140
<VirtualHost *:8140>
  ServerName puppet.example.com
  SSLEngine on
  SSLProtocol -ALL +SSLv3 +TLSv1
  SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

  SSLCertificateFile /var/lib/puppet/ssl/certs/puppet.example.com.pem
  SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.example.com.pem
  SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
  SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
  # If Apache complains about invalid signatures on the CRL, you can try disabling
  # CRL checking by commenting the next line, but this is not recommended.
  SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
  SSLVerifyClient optional
  SSLVerifyDepth 1
  # The `ExportCertData` option is needed for agent certificate expiration warnings
  SSLOptions +StdEnvVars +ExportCertData
  RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
  RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
  RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

  DocumentRoot /etc/puppet/rack/public/
  RackBaseURI /
<Directory /etc/puppet/rack/>
  Options None
  AllowOverride None
  Order allow,deny
  allow from all
</Directory>
</VirtualHost>

In this VirtualHost, we listen on 8140 and configure the SSL certificates in the SSL lines. The RequestHeader lines are used to pass certificate information to the Puppet process spawned by passenger. The DocumentRoot and RackBaseURI settings tell passenger where to find its configuration file, config.ru. We create /etc/puppet/rack and its subdirectories and then copy the example config.ru into that directory using the following commands:

# mkdir -p /etc/puppet/rack/{public,tmp}
# cp /usr/share/puppet/ext/rack/files/config.ru /etc/puppet/rack
# chown puppet:puppet /etc/puppet/rack/config.ru

We change the owner of config.ru to puppet:puppet as the passenger process will run as the owner of config.ru. Our config.ru will contain the following code:

$0 = "master"

# if you want debugging:
# ARGV << "--debug"

ARGV << "--rack"
ARGV << "--confdir" << "/etc/puppet"
ARGV << "--vardir"  << "/var/lib/puppet"

require 'puppet/util/command_line'
run Puppet::Util::CommandLine.new.execute

Tip

In this example, we have used the repository rpms supplied by Puppet and EPEL. In a production installation, you would use reposync to copy these repositories locally so that your Puppet machines do not need to access the Internet directly.

The config.ru file sets the command-line arguments for Puppet. The ARGV lines are used to set additional parameters to the puppet process. As noted in the puppet master man page, any valid configuration parameter from puppet.conf can be specified as an argument here. However, only the options that affect where Puppet looks for its files should be set here; once Puppet knows where to find puppet.conf, any other settings belong in that file, and duplicating them here could be confusing.

With this configuration in place, we are ready to start Apache as our Puppet master. Simply start Apache with service httpd start.

Tip

SELinux

Security Enhanced Linux (SELinux) is a system for Linux that provides support for mandatory access controls (MAC). If your servers are running with SELinux enabled, great! You will need to make some policy changes to allow Puppet to work within passenger. The easiest way to build up your policy is to use audit2allow, which is provided in policycoreutils-python. Put SELinux in permissive mode, rotate the audit logs to get a clean log file, and then start a Puppet run. After the Puppet run, have audit2allow build a policy module for you and insert it; then turn SELinux back on. Refer to https://bugzilla.redhat.com/show_bug.cgi?id=1051461 for more information.

# setenforce 0 
# service auditd rotate
# service httpd restart
(start a puppet run remotely)
# audit2allow -i /var/log/audit/audit.log -M puppet_passenger
# semodule -i puppet_passenger.pp
# setenforce 1

If necessary, repeat the process until everything runs cleanly. semodule will sometimes suggest enabling the allow_ypbind Boolean; this is a very bad idea. The allow_ypbind Boolean allows so many things that it is almost as bad as turning SELinux off.

Now that Puppet is running, you'll need to open the local firewall (iptables) on port 8140 to allow your nodes to connect. Then you'll need an example site.pp to get started. For testing we will create a basic site.pp that defines a default node with a single class attached to the default node as shown in the following code:

node default {
  include example
}

class example {
  notify {"This is an example": }
}

You can start a practice node or two and run their agent against the Puppet server, either by using --server puppet.example.com or by editing the agent's puppet.conf file to point at your server. By default, agents look for an unqualified host called puppet and resolve it using your DNS configuration (the search setting in /etc/resolv.conf); if you do not control DNS, you may have to edit the local /etc/hosts file to specify the IP address of your Puppet master. A sample run for a node called node1 should look something like the following commands:

[root@node1 ~]# puppet agent -t
Info: Creating a new SSL key for node1
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for node1
Info: Certificate Request fingerprint (SHA256): C4:0D:7A:54:ED:C8:E8:CC:68:D0:A6:13:C4:91:28:3D:B1:66:71:48:57:85:D8:99:AF:D0:81:54:B9:64:AB:F2
Exiting; no certificate found and waitforcert is disabled

Sign the certificate on the Puppet master and run again; the run should look like the following commands:

[root@puppet ~]# puppet cert sign node1
Notice: Signed certificate request for node1
Notice: Removing file Puppet::SSL::CertificateRequest node1 at '/var/lib/puppet/ssl/ca/requests/node1.pem'

[root@node1 ~]# puppet agent -t
Info: Caching certificate for node1
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for node1
Info: Applying configuration version '1386310193'
Notice: This is an example
Notice: /Stage[main]/Example/Notify[This is an example]/message: defined 'message' as 'This is an example'
Notice: Finished catalog run in 0.03 seconds

You now have a working passenger configuration. This configuration can handle a much larger load than the default WEBrick server provided with Puppet. Puppet Labs suggests the WEBrick server is appropriate for small installations; in my experience, that number is much less than 100 nodes, maybe even less than 50. You can tune the passenger configuration to handle a larger number of nodes, but to handle a very large installation (thousands of nodes), you'll need to start splitting up the workload.

Splitting up the workload

Puppet is a web service, but there are several different components supporting that web service, as shown in the following diagram:

Splitting up the workload

Each of the different components in your Puppet infrastructure (SSL CA, reporting, storeconfigs, and catalog compilation) can be split onto its own server or servers.

Certificate signing

Unless you are having issues with certificate signing consuming too many resources, it's simpler to keep the signing machine a single instance, possibly with a hot spare. Having multiple certificate signing machines means that you have to keep certificate revocation lists synchronized.

Reporting

Reporting should be done on a single instance if possible. Reporting options will be shown in Chapter 7, Reporting and Orchestration.

Storeconfigs

Storeconfigs should be run on a single server. Storeconfigs allows for exported resources and is optional. The recommended configuration for storeconfigs is puppetdb, which can handle several thousand nodes in a single installation.
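
Assuming puppetdb and its terminus package are installed on the master, the puppet.conf settings that enable this backend look like the following sketch:

```
[master]
  storeconfigs = true
  storeconfigs_backend = puppetdb
```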

Catalog compilation

Catalog compilation is the one task that can really bog down your Puppet installation. Splitting compilation among a pool of workers is the biggest win for scaling your deployment. The idea here is to have a primary point of contact for all your nodes—the Puppet master. Then, using proxying techniques, the master will direct requests to specific worker machines within your Puppet infrastructure. From the perspective of the nodes checking into the Puppet master, all the interaction appears to come from the main proxy machine.

To understand how we are going to achieve this load balancing, we first need to look at how the agents request data from our Puppet master. The request URL sent to our Puppet master has the format https://puppetserver:8140/environment/resource/key. The environment in the request URL is the Puppet environment in use by the node; it defaults to production but can take other values, as we will see in later chapters. The resource being requested can be any of the accepted REST API calls, such as catalog, certificate, resource, report, file_metadata, or file_content. A complete listing of the HTTP API is available at http://docs.puppetlabs.com/guides/rest_api.html.

Requests from nodes to the Puppet masters follow a pattern that we can use to configure our proxy machine. The pattern is as follows:

/environment/resource/key

For example, when node1.example.com requests its catalog in the production environment, it connects to the server and requests the following (using URL encoding):

https://puppet.example.com:8140/production/catalog/node1.example.com.
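
Expressed as code, composing such a request URL is simple string formatting (a sketch; the puppet_url helper is our own, not part of Puppet):

```shell
# Compose a Puppet REST request URL from its three components:
# environment, resource, and key (puppet_url is a hypothetical helper).
puppet_url() {
  printf 'https://puppet.example.com:8140/%s/%s/%s\n' "$1" "$2" "$3"
}

puppet_url production catalog node1.example.com
# -> https://puppet.example.com:8140/production/catalog/node1.example.com
```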

Knowing that there is a pattern to the requests, we can configure Apache to redirect requests based on regular expression matches to different machines in our Puppet infrastructure.

Our first step in splitting up our load will be to clone our Puppet master server twice to create two new worker machines, which we will call worker1.example.com and worker2.example.com. In this example, we will use 192.168.100.101 for worker1 and 192.168.100.102 for worker2. Create a private network for all the Puppet communication on 192.168.100.0/24. Our Puppet master will use the address 192.168.100.100. It is important to create a private network for the worker machines as our proxy configuration removes the SSL encryption, which means that communication between the workers and the master proxy machine is unencrypted.

Our new Puppet infrastructure is shown in the following diagram:

Catalog compilation

On our Puppet server, we will change the Apache puppet.conf as follows. Instead of listening on 8140, we will listen on 18140 and, importantly, only on our private network, as this traffic will be unencrypted. Next, we will not enable SSL on 18140. Finally, we will remove any header settings we were making in our original file, as shown in the following configuration:

PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off

Listen 127.0.0.1:18140
Listen 192.168.100.100:18140

<VirtualHost *:18140>
  ServerName puppet.example.com
  DocumentRoot /etc/puppet/rack/public/
  RackBaseURI /
  <Directory /etc/puppet/rack/>
    Options None
    AllowOverride None
    Order allow,deny
    allow from all
  </Directory>
</VirtualHost>

The configuration for this VirtualHost is much simpler. Now, on the worker machines, create /etc/httpd/conf.d/puppet.conf files that are identical to the previous files but have different Listen directives shown as follows:

  • On worker1:
    Listen 192.168.100.101:18140
    
  • On worker2:
    Listen 192.168.100.102:18140
    

Remember to open port 18140 on the worker machines' firewalls (iptables) and start httpd.

Returning to the Puppet master machine, create a proxy.conf file in the Apache conf.d directory (/etc/httpd/conf.d) to point at the workers. We will create two proxy pools. The first is for certificate signing, called puppetca, as shown in the following configuration:

<Proxy balancer://puppetca>
  BalancerMember http://127.0.0.1:18140
</Proxy>

A second proxy pool is for catalog compilation, called puppetworker, as shown in the following configuration:

<Proxy balancer://puppetworker>
  BalancerMember http://192.168.100.102:18140
  BalancerMember http://192.168.100.101:18140
</Proxy>

Next recreate the Puppet VirtualHost listener for 8140 with the SSL and certificate information used previously, as shown in the following configuration:

LoadModule ssl_module modules/mod_ssl.so

Listen 8140
<VirtualHost *:8140>
  ServerName puppet.example.com
  SSLEngine on
  SSLProtocol -ALL +SSLv3 +TLSv1
  SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

  SSLCertificateFile /var/lib/puppet/ssl/certs/puppet.example.com.pem
  SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.example.com.pem
  SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
  SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
  # If Apache complains about invalid signatures on the CRL, you can try disabling
  # CRL checking by commenting the next line, but this is not recommended.
  SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
  SSLVerifyClient optional
  SSLVerifyDepth 1
  # The `ExportCertData` option is needed for agent certificate expiration warnings
  SSLOptions +StdEnvVars +ExportCertData
  # This header needs to be set if using a loadbalancer or proxy
  RequestHeader unset X-Forwarded-For
  RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
  RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
  RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

Since we know that we want all certificate requests going to the puppetca balancer, we use ProxyPassMatch to match URLs whose second component (the one following the environment) begins with certificate, as shown in the next configuration. Our regular expression searches for a single path component followed by /certificate.*, and any match is sent to our puppetca balancer.

ProxyPassMatch ^/([^/]+/certificate.*)$ balancer://puppetca/$1

The only thing that remains is to send all noncertificate requests to our load balancing pair, worker1 and worker2, as shown in the following configuration:

ProxyPass / balancer://puppetworker/
ProxyPassReverse / balancer://puppetworker
</VirtualHost>
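
The routing decision these directives implement can be sketched as a small shell function (our own illustration; route is a hypothetical helper, and the case glob is a looser match than the regular expression above):

```shell
# Decide which pool a request path would be proxied to: paths whose
# second component starts with "certificate" go to the CA pool, and
# everything else goes to the catalog workers.
route() {
  case "$1" in
    /*/certificate*) echo puppetca ;;
    *)               echo puppetworker ;;
  esac
}

route /production/certificate/node1.example.com   # -> puppetca
route /production/catalog/node1.example.com       # -> puppetworker
```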

At this point, we can restart Apache on the Puppet master.

Tip

SELinux

You'll need to allow Puppet to bind to port 18140 at this point since the default puppet SELinux module allows for 8140 only. You will also need to allow Apache to connect to the worker instances; there is a Boolean for that, httpd_can_network_connect.

Now, when a node connects and requests a certificate, it will be redirected to the VirtualHost on port 18140 on the Puppet master. If the node requests a catalog, it will be redirected to one of the worker nodes. To convince yourself that this is the case, edit /etc/puppet/manifests/site.pp on your worker1 node and insert a notify as shown in the following configuration:

node default {
  include example
  notify {'Compiled on worker1': }
}

Do the same on worker2 with the message Compiled on worker2, run puppet agent again on your node, and see where the catalog is being compiled using the following commands:

[root@node1 ~]# puppet agent -t
Info: Retrieving plugin
Info: Caching catalog for node1
Info: Applying configuration version '1386312527'
Notice: Compiled on worker1
Notice: /Stage[main]//Node[default]/Notify[Compiled on worker1]/message: defined 'message' as 'Compiled on worker1'
Notice: This is an example
Notice: /Stage[main]/Example/Notify[This is an example]/message: defined 'message' as 'This is an example'
Notice: Finished catalog run in 0.10 seconds

Tip

You may see "Compiled on worker2", which is expected.

To verify that certificates are being handled properly, clean the certificate for your example node, remove it from the node, and restart the agent.

  • On the master:
    master# puppet cert clean node1
    
  • On the node:
    node1# \rm -r /var/lib/puppet/ssl/*
    node1# puppet agent -t
    

Tip

As an alternative to this configuration, you could use the puppetca setting in puppet.conf on your nodes to direct clients to a specific machine for signing requests.

Since this is an enterprise installation, we should have a dashboard of some kind running to collect reports from workers.

Tip

If your reports setting on the master is either HTTP or puppetdb, then this section won't affect you.

We'll clone our worker again to make a new server called reports (192.168.100.103), which will collect our reports. We then have to add another line to our Apache proxy.conf configuration file to use the new server, and we need to place this line directly after the certificate proxy line. Since reports must all be sent to the same machine to be useful, we won't use a balancer line as before, and we will simply set the proxy to the address of the reports machine directly.

ProxyPassMatch ^/([^/]+/certificate.*)$ balancer://puppetca/$1
ProxyPassMatch ^/([^/]+/report/.*)$ http://192.168.100.103/$1
ProxyPass / balancer://puppetworker/

With this change, /etc/httpd/conf.d/proxy.conf sends all report traffic to 192.168.100.103.

Again, restart Apache and make sure that report=true is set on the node in the [agent] section of puppet.conf. Run Puppet agent on the node, and verify that the report gets sent to 192.168.100.103 (look in /var/lib/puppet/reports/).

Tip

If you are still seeing problems with client catalog compilation timeouts after creating multiple catalog workers, it may be that your client is timing out the connection before the worker has a chance to compile the catalog. Try experimenting with the configtimeout parameter in the [agent] section of puppet.conf:

configtimeout=300

Setting this higher may resolve your issue. You will need to change the ProxyTimeout directive in the proxy.conf configuration for Apache as well. This will be revisited in Chapter 10, Troubleshooting.
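
For example, raising both timeouts to matching values (the numbers are illustrative) might look like this:

```
# In the [agent] section of puppet.conf on the node:
configtimeout = 300

# In /etc/httpd/conf.d/proxy.conf on the master, inside the VirtualHost:
ProxyTimeout 300
```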

Keeping the code consistent

At this point, we are able to scale out our catalog compilation to as many servers as we need, but we've neglected one important thing: we need to make sure that the Puppet code on all the workers remains in sync. There are a few ways we can do this, and when we cover integration with Git in Chapter 3, Git and Environments, we will see how to use Git to distribute the code.

Rsync

A simple way to distribute the code is with rsync. This isn't the best solution, but it serves as an example; you will need to run rsync whenever you change the code. It also requires changing the Puppet user's shell from /sbin/nologin to /bin/bash or /bin/rbash, which is a potential security risk.

Tip

If your puppet code is on a filesystem that supports ACLs, then creating an rsync user and giving that user rights to that filesystem is a better option. Using setfacl, it is possible to grant write access to the filesystem for a user other than Puppet.

First, we create an SSH key for rsync to use when connecting from the master to the worker nodes. We then copy the public key into the authorized_keys file of the Puppet user on each worker, as follows:

puppet# ssh-keygen -f puppet_rsync
(creates puppet_rsync.pub puppet_rsync)

worker1# mkdir /var/lib/puppet/.ssh
# cp puppet_rsync.pub /var/lib/puppet/.ssh/authorized_keys
# chown -R puppet:puppet /var/lib/puppet/.ssh
# chmod 700 /var/lib/puppet/.ssh
# chmod 600 /var/lib/puppet/.ssh/authorized_keys
# chsh -s /bin/bash puppet

puppet# rsync -e 'ssh -i puppet_rsync' -az /etc/puppet/ puppet@worker1:/etc/puppet

Tip

Creating SSH Keys and using rsync

The trailing slash on the first part /etc/puppet/ and the absence of the slash on the second part, puppet@worker1:/etc/puppet is by design. That way, we get the contents of /etc/puppet on the master placed into /etc/puppet on the worker.

Using rsync is not a good enterprise solution, but the concept of using SSH keys and transferring the files as the Puppet user is the important part of this method.
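
The trailing-slash behavior can be verified locally with two throwaway directories (an illustration only; no SSH involved):

```shell
# With a trailing slash on the source, rsync copies the *contents* of
# src_demo into dst_demo rather than the src_demo directory itself.
rm -rf /tmp/src_demo /tmp/dst_demo
mkdir -p /tmp/src_demo/manifests /tmp/dst_demo
touch /tmp/src_demo/manifests/site.pp

rsync -a /tmp/src_demo/ /tmp/dst_demo

ls /tmp/dst_demo    # -> manifests
```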

NFS

A second option to keep the code consistent is to use NFS. If you already have a NAS appliance, then using the NAS to share out the Puppet code may be the simplest solution. If not, using the Puppet master as an NFS server is another option, but this makes your Puppet master a big single point of failure. NFS is not the best solution to this sort of problem.

Clustered filesystem

Using a clustered filesystem such as gfs2 or glusterfs is a good way to maintain consistency between nodes. This also removes the problem of the single point of failure with NFS.

Git

A third option is to have your version control system keep the files in sync with a post-commit hook or scripts that call Git directly, such as r10k or puppet-sync. We will cover how to configure Git to do some housekeeping for us in a later chapter. Using Git to distribute the code is a popular solution, since it only updates the code when a commit is made (the continuous delivery model). If your organization would rather push code at certain points, then using the scripts mentioned earlier on a routine basis is the solution I would suggest.

One more split

Now that we have our Puppet infrastructure running on two workers and the master, you might notice that the main Apache proxy need not run on the same machine as the certificate-signing machine. At this point, there is no need to run passenger on that main gateway machine, and you are free to use whatever load balancing solution you see fit. In this example, I will be using nginx as the main proxy point.

Tip

Using nginx is not required, but you may wish to use nginx as the proxy machine. This is because nginx has more configuration options for its proxy module, such as redirecting based on client IP address.

The important thing to remember here is that we are just providing a web service. We'll intercept the SSL part of the communication with nginx and then forward it on to our worker and CA machines as necessary. Our configuration will now look like the following diagram:

One more split

We will start with a blank machine this time; we do not need to install passenger or Puppet on the machine. To make use of the latest SSL-handling routines, we will download nginx from the nginx repository.

# yum install http://nginx.org/packages/rhel/6/noarch/RPMS/nginx-release-rhel-6-0.el6.ngx.noarch.rpm
Installed:
  nginx-release-rhel.noarch 0:6-0.el6.ngx
# yum install nginx
Installed:
  nginx-1.4.4-1.el6.ngx.x86_64

Now we need to copy the SSL CA files from the Puppet master to this gateway using the following commands:

puppet# scp /var/lib/puppet/ssl/ca/ca_crl.pem gateway:/etc/nginx
puppet# scp /var/lib/puppet/ssl/ca/ca_crt.pem gateway:/etc/nginx
puppet# scp /var/lib/puppet/ssl/certs/puppet.example.com.pem gateway:/etc/nginx
puppet# scp /var/lib/puppet/ssl/private_keys/puppet.example.com.pem gateway:/etc/nginx/puppet.example.com.key

Now we need to create a gateway configuration for nginx, which we will place in /etc/nginx/conf.d/puppet-proxy.conf.

We will define the two proxy pools as we did before, but using nginx syntax this time.

upstream puppetca {
  server 192.168.100.100:18140;
}

upstream puppetworkers {
  server 192.168.100.101:8140;
  server 192.168.100.102:8140;
}

Next, we create a server stanza specifying that we handle the SSL connection; we also need to set some headers before passing the communication on to our proxied servers.

server {
  listen 8140 ssl;
  server_name puppet.example.com;

  default_type application/x-raw;

  ssl on;
  ssl_certificate    puppet.example.com.pem;
  ssl_certificate_key  puppet.example.com.key;
  ssl_trusted_certificate  ca_crt.pem;
  ssl_crl      ca_crl.pem;

  ssl_session_cache  shared:SSL:5m;
  ssl_session_timeout  5m;

  ssl_protocols    SSLv2 SSLv3 TLSv1;
  ssl_ciphers    ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
  ssl_prefer_server_ciphers on;
  ssl_verify_client optional_no_ca;

Setting ssl_verify_client to optional_no_ca is important, since on the first connection, the client will not have a signed certificate, so we need to accept all connections but mark a header with the verification status.

  proxy_set_header  Host      $host;
  proxy_set_header  X-Real-IP  $remote_addr;
  proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header  X-Client-Verify  $ssl_client_verify;
  proxy_set_header  X-Client-DN    $ssl_client_s_dn;
  proxy_set_header  X-SSL-Subject    $ssl_client_s_dn;
  proxy_set_header   X-SSL-Issuer    $ssl_client_i_dn;
  proxy_read_timeout  1000;

The X-Client-Verify header will hold the verification result (SUCCESS, NONE, or FAILED) at this point, so our Puppet master will know whether the certificate is valid. Now we need to look for certificate requests and hand those off to the puppetca pool:

location ~* ^/.*/certificate {
  proxy_pass http://puppetca;
  proxy_redirect off;
  proxy_read_timeout 1000;
}
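To see what this routing does, the following standalone sketch replays the location regex (case-insensitive, as ~* implies) against some typical Puppet request paths; the paths are illustrative.

```shell
#!/bin/sh
# Replay the gateway's routing decision outside nginx: any URI matching
# ^/.*/certificate goes to the CA pool, everything else to the worker pool.
route() {
  if printf '%s\n' "$1" | grep -qiE '^/.*/certificate'; then
    echo "$1 -> puppetca"
  else
    echo "$1 -> puppetworkers"
  fi
}
route /production/certificate/node1.example.com          # -> puppetca
route /production/certificate_request/node1.example.com  # -> puppetca
route /production/catalog/node1.example.com              # -> puppetworkers
```

Note that certificate_request and certificate_revocation_list URIs also begin with /certificate, so the one pattern catches all the CA traffic.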

Then we can send all other requests to our worker pool and close the server stanza:

location / {
  proxy_pass http://puppetworkers;
  proxy_redirect off;
  proxy_read_timeout 1000;
}
}

Now we need to start nginx on the gateway machine, open up port 8140 on its firewall, and open up port 18140 on the Puppet master's firewall (the gateway now needs to communicate with that port).

Running puppet again on your node will now produce the same results as before, but you are now able to leverage the load balancing of nginx over that of Apache.

Tip

You will need to synchronize the SSL CA Certificate Revocation List (CRL) from the Puppet master to the gateway machine. Without synchronization, certificates revoked on the Puppet master will not be rejected by the gateway machine.
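A simple way to keep the CRL current is a cron entry on the gateway along these lines (paths as used above; the half-hourly schedule and the reload step are assumptions, since nginx only rereads the CRL on reload):

```
# /etc/cron.d/sync-puppet-crl (illustrative)
*/30 * * * * root rsync -a puppet.example.com:/var/lib/puppet/ssl/ca/ca_crl.pem /etc/nginx/ca_crl.pem && service nginx reload
```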

One last split or maybe a few more

We have already split our workload into a certificate-signing machine (the master or puppetca), a pool of catalog machines, and a report-gathering machine. What is interesting as an exercise at this point is that we can also serve files up using our gateway machine.

Based on what we know about the Puppet HTTP API, requests for file buckets and files have specific URIs that we can serve directly from nginx without using passenger or Apache, or even Puppet. To test the configuration, alter the definition of the example class to include a file as follows:

class example {
  notify { 'This is an example': }
  file { '/tmp/example':
    mode   => '0644',
    owner  => '100',
    group  => '100',
    source => 'puppet:///modules/example/example',
  }
}

Create the example file in /etc/puppet/modules/example/files/example on the workers with the contents "This file lives on the workers". On the gateway machine, rsync your Puppet module code from the workers into /var/lib/nginx/puppet. Then, to prove that the file is being served from the gateway, edit the gateway's copy (/var/lib/nginx/puppet/modules/example/files/example) after you run the rsync so that it reads "This file lives on the gateway".

At this point, we can start serving up files from nginx by putting in location stanzas, one for module-provided files and another for files outside modules, at /etc/nginx/conf.d/gateway.conf.

location ~* ^/.*/file_content/modules {
  rewrite ^/([^/]+)/file_content/modules/([^/]+)/(.*) /$2/files/$3;
  break;
  root /var/lib/nginx/puppet/modules/;
}
location ~* ^/.*/file_content/ {
  rewrite ^/([^/]+)/file_content/([^/]+)/(.*) /$2/files/$3;
  break;
  root /var/lib/nginx/puppet/;
}
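The effect of the module rewrite can be checked outside nginx as well; this sketch applies the same capture groups to a sample URI and prints the on-disk path nginx would serve (the root plus the rewritten URI).

```shell
#!/bin/sh
# Apply the module file_content rewrite by hand: capture group 1 is the
# environment, group 2 the module name, group 3 the path under files/.
map() {
  printf '%s\n' "$1" |
    sed -E 's|^/([^/]+)/file_content/modules/([^/]+)/(.*)$|/var/lib/nginx/puppet/modules/\2/files/\3|'
}
map /production/file_content/modules/example/example
# -> /var/lib/nginx/puppet/modules/example/files/example
```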

Restart nginx on the gateway machine, and then run Puppet on the node using the following command:

[root@node1 ~]# puppet agent -t

Notice: /Stage[main]/Example/File[/tmp/example]/ensure: defined content as '{md5}c83849f23a139c41edfbcd8473a81ac1'

Notice: Finished catalog run in 0.16 seconds
[root@node1 ~]# cat /tmp/example
This file lives on the gateway

As we can see, although the file living on the workers has the contents "This file lives on the workers," our node is getting the file directly from nginx on the gateway.

Tip

Our node will keep rewriting /tmp/example on every run because the catalog (and the file's checksum) is compiled on the worker machine, whose copy of the file differs from the copy the gateway actually serves. In a production environment, all copies of the files would need to be kept synchronized.

One important thing to consider is security, as any configured client can retrieve files from our gateway machine. In production, you would want to add ACLs to the file location.
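For instance, nginx's access module can restrict the file locations to your node networks; a minimal sketch, with an illustrative address range added to the stanza above:

```nginx
location ~* ^/.*/file_content/ {
  # Only nodes on the management network may fetch files
  # (the range is illustrative).
  allow 192.168.100.0/24;
  deny  all;
  rewrite ^/([^/]+)/file_content/([^/]+)/(.*) /$2/files/$3;
  break;
  root /var/lib/nginx/puppet/;
}
```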

As we have seen, once the basic proxying is configured, further splitting up of the workload becomes a routine task. We can split the workload to scale to handle as many nodes as we require.

Conquer by dividing

Depending on the size of your deployment and the way you connect to all your nodes, a masterless solution may be a good fit. In a masterless configuration, you don't run the Puppet agent; rather, you push the Puppet code to a node, and then run Puppet apply. There are a few benefits to this method and a few drawbacks.

Benefits:

  • No single point of failure
  • Simpler configuration
  • Finer-grained control over where code is deployed
  • Multiple simultaneous runs do not affect each other (reduces contention)
  • No connection to a Puppet master is required (offline operation is possible)
  • No certificate management

Drawbacks:

  • Built-in reporting tools such as Dashboard cannot be used
  • Exported resources require every node to have write access to the database
  • Each node has access to all the code
  • It is more difficult to know when a node is failing to apply its catalog correctly
  • No certificate management

The idea with a masterless configuration is that you distribute the Puppet code to each node individually and then kick off a Puppet run to apply that code. One of the benefits of Puppet is that it keeps your system in a known good state, so when going masterless, it is important to build your solution with this in mind. A cron job, configured by your deployment mechanism, that applies Puppet to the node on a routine schedule will suffice.

The key parts of a masterless configuration are: distributing the code, pushing updates to the code, and ensuring the code is applied routinely to the nodes. Pushing a bunch of files to a machine is best done with some sort of package management.

Tip

Many masterless configurations use Git to distribute the files and have clients pull them; this has the advantage that clients fetch only the changes on each update.

For Linux systems, the big players are rpm and dpkg, whereas for MacOS, Installer package files can be used. It is also possible to configure the nodes to download the code themselves from a web location. Some large installations use Git to update the code as well.
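As a sketch of the Git pull model, the following stands up a local repository in place of a central Git server (the repository and its contents are illustrative) and performs the clone that a node's update job would do; a real node would follow the clone or pull with puppet apply.

```shell
#!/bin/sh
# Git pull model sketch: a local repository stands in for the central Git
# server. A real node would follow this with something like:
#   puppet apply --modulepath=$CODE/modules $CODE/manifests/site.pp
set -e
ORIGIN=$(mktemp -d)
git init --quiet "$ORIGIN"
mkdir -p "$ORIGIN/manifests"
echo "node default { include example }" > "$ORIGIN/manifests/site.pp"
git -C "$ORIGIN" add -A
git -C "$ORIGIN" -c user.email=builder@example.com -c user.name=builder \
  commit --quiet -m "initial Puppet code"
CODE=$(mktemp -d)/puppet
git clone --quiet "$ORIGIN" "$CODE"   # first run clones; later runs pull
cat "$CODE/manifests/site.pp"
```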

The solution I will outline is that of using an rpm deployed through yum to install and run Puppet on a node. Once deployed, we can have the nodes pull updated code from a central repository rather than rebuild the rpm for every change.

Creating an rpm

To start our rpm, we will make an rpm spec file. We can make this anywhere, since we don't have a master in this example. Start by installing rpm-build, which will allow us to build the rpm.

# yum install rpm-build
Installing
  rpm-build-4.8.0-37.el6.x86_64

It will be important later to have a user to manage the repository, so create a user called builder at this point. We'll do this on the Puppet master machine we built earlier. Create an rpmbuild directory with the appropriate subdirectories, and then create our example module, along with the site.pp manifest that the %post scriptlet will apply, in this location.

# sudo -iu builder
$ mkdir -p rpmbuild/{SPECS,SOURCES}
$ cd rpmbuild/SOURCES
$ mkdir -p modules/example/manifests
$ cat <<EOF >modules/example/manifests/init.pp
class example {
  notify { "This is an example.": }
  file { '/tmp/example':
    mode    => '0644',
    owner   => '0',
    group   => '0',
    content => 'This is also an example.',
  }
}
EOF
$ mkdir manifests
$ cat <<EOF >manifests/site.pp
node default {
  include example
}
EOF
$ tar cjf example.com-puppet-1.0.tar.bz2 modules manifests
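Before building, it is worth listing the tarball to confirm it unpacks with relative paths, since %setup -c extracts it inside the build directory. A quick sanity check, using throwaway stand-in files:

```shell
#!/bin/sh
# Build a throwaway source tarball the same way and list its contents;
# every entry should be a relative path under modules/.
set -e
d=$(mktemp -d); cd "$d"
mkdir -p modules/example/manifests
echo "class example {}" > modules/example/manifests/init.pp
tar cjf example.com-puppet-1.0.tar.bz2 modules
tar tjf example.com-puppet-1.0.tar.bz2
```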

Next, create a spec file for our rpm in rpmbuild/SPECS as follows:

Name:           example.com-puppet
Version:        1.0
Release:        1%{?dist}
Summary:        Puppet Apply for example.com

Group:          System/Utilities
License:        GNU
Source0:        example.com-puppet-%{version}.tar.bz2
BuildRoot:      %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)

Requires:       puppet
BuildArch:      noarch

%description
This package installs example.com's puppet configuration
and applies that configuration on the machine.


%prep

%setup -q -c

%install
mkdir -p $RPM_BUILD_ROOT/%{_localstatedir}/local/puppet
cp -a . $RPM_BUILD_ROOT/%{_localstatedir}/local/puppet

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
%{_localstatedir}/local/puppet

%post
# run puppet apply
/bin/env puppet apply --logdest syslog --modulepath=%{_localstatedir}/local/puppet/modules %{_localstatedir}/local/puppet/manifests/site.pp 

%changelog
* Fri Dec 6 2013 Thomas Uphill <thomas@narrabilis.com> - 1.0-1
- initial build

Then use rpmbuild to build the rpm based on this spec, as shown in the following command:

$ rpmbuild -ba example.com-puppet.spec

Wrote: /home/builder/rpmbuild/SRPMS/example.com-puppet-1.0-1.el6.src.rpm
Wrote: /home/builder/rpmbuild/RPMS/noarch/example.com-puppet-1.0-1.el6.noarch.rpm

Now, deploy a node and copy the rpm onto that node. Verify that the node installs Puppet and then does a Puppet apply run.

# yum install example.com-puppet-1.0-1.el6.noarch.rpm 
Loaded plugins: downloadonly

Installed:
  example.com-puppet.noarch 0:1.0-1.el6 
Dependency Installed:
  augeas-libs.x86_64 0:1.0.0-5.el6
...
  puppet-3.3.2-1.el6.noarch

Complete!

Verify that the file we specified in our package has been created by using the following command:

# cat /tmp/example
This is also an example.

Now, if we are going to rely on this system of pushing Puppet to nodes, we have to make sure we can update the rpm on the clients and we have to ensure that the nodes still run Puppet regularly so as to avoid configuration drift (the whole point of Puppet). There are many ways to accomplish these two tasks. We can put the cron definition into the post section of our rpm:

%post
# install cron job
/bin/env puppet resource cron 'example.com-puppet' command='/bin/env puppet apply --logdest syslog --modulepath=%{_localstatedir}/local/puppet/modules %{_localstatedir}/local/puppet/manifests/site.pp' minute='*/30' ensure='present'

We could instead have the cron job be part of our site.pp, as follows:

cron { 'example.com-puppet':
  ensure  => 'present',
  command => '/bin/env puppet apply --logdest syslog --modulepath=/var/local/puppet/modules /var/local/puppet/manifests/site.pp',
  minute  => ['*/30'],
  target  => 'root',
  user    => 'root',
}

To ensure that the nodes always have the latest version of the code, we can define our package in site.pp:

package { 'example.com-puppet': ensure => 'latest' }

In order for that to work as expected, we need to have a yum repository for the package and have the nodes looking at that repository for packages.

Creating the YUM repository

Creating a YUM repository is a very straightforward task. Install the createrepo rpm and then run createrepo on each directory you wish to make into a repository.

# mkdir /var/www/html/puppet
# yum install createrepo

Installed:
 createrepo.noarch 0:0.9.9-18.el6   
# chown builder /var/www/html/puppet
# sudo -iu builder
$ mkdir /var/www/html/puppet/{noarch,SRPMS}
$ cp /home/builder/rpmbuild/RPMS/noarch/example.com-puppet-1.0-1.el6.noarch.rpm /var/www/html/puppet/noarch
$ cp rpmbuild/SRPMS/example.com-puppet-1.0-1.el6.src.rpm /var/www/html/puppet/SRPMS
$ cd /var/www/html/puppet
$ createrepo noarch
$ createrepo SRPMS

Our repository is ready, but we need to serve it with the web server to make it available to our nodes. This rpm contains all our Puppet code, so we need to ensure that only the clients we intend get access to the files. We'll create a simple listener on port 80 for our Puppet repository:

Listen 80
<VirtualHost *:80>
  DocumentRoot /var/www/html/puppet
</VirtualHost>

Now, the nodes need to have the repository defined on them so they can download the updates when they are made available via the repository. The idea here is that we push the rpm to the nodes and have them install the rpm. Once the rpm is installed, the yum repository pointing to updates is defined and the nodes continue updating themselves.

yumrepo { 'example.com-puppet':
  baseurl  => 'http://puppet.example.com/noarch',
  descr    => 'example.com Puppet Code Repository',
  enabled  => '1',
  gpgcheck => '0',
}
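On a node, the yumrepo resource above results in a repository file equivalent to the following sketch (Puppet writes the descr parameter as the name field; the exact file layout may vary by Puppet version):

```ini
# /etc/yum.repos.d/example.com-puppet.repo
[example.com-puppet]
name=example.com Puppet Code Repository
baseurl=http://puppet.example.com/noarch
enabled=1
gpgcheck=0
```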

So to ensure that our nodes operate properly, we have to make sure of the following things:

  • Install the code
  • Define the repository
  • Define a cron job to run Puppet apply routinely
  • Define the package with ensure set to latest so that it is kept up to date

A default node in our masterless configuration requires that the cron task and the repository be defined. If you wish to segregate your nodes into different production zones (such as development, production, and sandbox), I would use a repository management system like Pulp. Pulp allows you to define repositories based on other repositories and keeps all your repositories consistent.
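Putting the pieces together, a default node definition covering these requirements might look like the following sketch (the resources are as defined earlier; the require relationship is an addition to ensure the repository exists before the package is installed):

```puppet
node default {
  yumrepo { 'example.com-puppet':
    baseurl  => 'http://puppet.example.com/noarch',
    descr    => 'example.com Puppet Code Repository',
    enabled  => '1',
    gpgcheck => '0',
  }
  package { 'example.com-puppet':
    ensure  => 'latest',
    require => Yumrepo['example.com-puppet'],
  }
  cron { 'example.com-puppet':
    ensure  => 'present',
    command => '/bin/env puppet apply --logdest syslog --modulepath=/var/local/puppet/modules /var/local/puppet/manifests/site.pp',
    minute  => ['*/30'],
    user    => 'root',
  }
}
```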

Tip

You should also set up a GPG key on the builder account to sign the packages it creates. You would then distribute the GPG public key to all your nodes and enable gpgcheck in the repository definition.

Summary

Dealing with scale is a very important task in enterprise deployments. As your number of nodes increases beyond the proof-of-concept stage (> 50 nodes), the simple WEBrick server cannot be used. In the first section, we configured a Puppet master with passenger to handle a larger load. We then expanded that configuration with load balancing and proxying techniques realizing that Puppet is simply a web service. Understanding how nodes request files, catalogs, and certificates allows you to modify the configuration and bypass or alleviate bottlenecks.


In the last section, we explored masterless configuration, wherein instead of checking in with a Puppet master to retrieve new code, the nodes pull the code first and then apply it on a schedule.

Now that we have dealt with the load issue, we need to turn our attention to managing the modules to be applied to nodes. We will cover organizing the nodes in the next chapter.

Left arrow icon Right arrow icon

Description

Presented in an easy-to-follow, step-by-step tutorial format and packed with examples, this book will lead you through making the best out of Puppet in an enterprise environment. If you are a system administrator or developer who has used Puppet in production and are looking for ways to easily use Puppet in an enterprise environment, this book is for you. This book assumes an intermediate knowledge of Puppet and is intended for those writing modules or deploying Puppet in an enterprise environment.

What you will learn

  • Scale out your Puppet masters using proxy techniques
  • Automate Puppet master deployment using Git Hooks, r10k, and librarianpuppet
  • Access public modules from Git Forge and use them to solve realworld problems
  • Use Hiera and ENC to automatically assign modules to nodes
  • Create custom modules, facts, and types
  • Use exported resources to orchestrate changes across the enterprise
Estimated delivery fee Deliver to United States

Economy delivery 10 - 13 business days

Free $6.95

Premium delivery 6 - 9 business days

$21.95
(Includes tracking information)

Product Details

Country selected
Publication date, Length, Edition, Language, ISBN-13
Publication date : Jul 16, 2014
Length: 280 pages
Edition : 1st
Language : English
ISBN-13 : 9781783982189
Vendor :
Puppet
Tools :

What do you get with Print?

Product feature icon Instant access to your digital eBook copy whilst your Print order is Shipped
Product feature icon Paperback book shipped to your preferred address
Product feature icon Download this book in EPUB and PDF formats
Product feature icon Access this title in our online reader with advanced features
Product feature icon DRM FREE - Read whenever, wherever and however you want
OR
Modal Close icon
Payment Processing...
tick Completed

Shipping Address

Billing Address

Shipping Methods
Estimated delivery fee Deliver to United States

Economy delivery 10 - 13 business days

Free $6.95

Premium delivery 6 - 9 business days

$21.95
(Includes tracking information)

Product Details

Publication date : Jul 16, 2014
Length: 280 pages
Edition : 1st
Language : English
ISBN-13 : 9781783982189
Vendor :
Puppet
Tools :

Packt Subscriptions

See our plans and pricing
Modal Close icon
$19.99 billed monthly
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Simple pricing, no contract
$199.99 billed annually
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
Feature tick icon Exclusive print discounts
$279.99 billed in 18 months
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
Feature tick icon Exclusive print discounts

Frequently bought together


Stars icon
Total $ 110.97
Puppet Reporting and Monitoring
$32.99
Mastering Puppet
$47.99
Extending Puppet
$29.99
Total $ 110.97 Stars icon
Banner background image

Table of Contents

11 Chapters
1. Dealing with Load/Scale Chevron down icon Chevron up icon
2. Organizing Your Nodes and Data Chevron down icon Chevron up icon
3. Git and Environments Chevron down icon Chevron up icon
4. Public Modules Chevron down icon Chevron up icon
5. Custom Facts and Modules Chevron down icon Chevron up icon
6. Custom Types Chevron down icon Chevron up icon
7. Reporting and Orchestration Chevron down icon Chevron up icon
8. Exported Resources Chevron down icon Chevron up icon
9. Roles and Profiles Chevron down icon Chevron up icon
10. Troubleshooting Chevron down icon Chevron up icon
Index Chevron down icon Chevron up icon

Customer reviews

Top Reviews
Rating distribution
Full star icon Full star icon Full star icon Half star icon Empty star icon 3.9
(13 Ratings)
5 star 30.8%
4 star 46.2%
3 star 7.7%
2 star 15.4%
1 star 0%
Filter icon Filter
Top Reviews

Filter reviews by




Amazon Customer Sep 21, 2014
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This is the book that I wish I could've had 3 years ago when I was first setting up puppet. It walks you through setup of not only puppet, but all of the extras which go along with it. This is an excellent resource to hand to both new puppet users, as well as those who have been around the block a few times.Some of the goodies in this book: passenger, hiera, foreman, puppet dashboard, environments, puppetdb, git, popular puppet modules, reports and roles-profiles pattern, as well as a fairly extensive troubleshooting section with detailed information on what went wrong and how to fix it.This is definitely the book I'll be handing to new hires who need a ramp-up on puppet!
Amazon Verified review Amazon
James Jan 19, 2016
Full star icon Full star icon Full star icon Full star icon Full star icon 5
I use it. Well written and just enough over my head that I can learn something.
Amazon Verified review Amazon
David Oct 20, 2014
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Well written, has lots of good ideas/techniques for more advanced puppet users
Amazon Verified review Amazon
Jascha Casadio Jan 11, 2016
Full star icon Full star icon Full star icon Full star icon Full star icon 5
A very few people know that Puppet was released more than 10 years ago. Only recently, with the explosion of the cloud and the Internet of Things, having infrastructures able to scale out, deploying machines that self-configure themselves, that feeble buzz became a powerful roar and configuration management tools emerged as an indispensable tool in the belt of any DevOps populating planet Earth. Among the many flavors we can pick from, Puppet, which recently reached version 4, is a mature and solid choice. Still, the shelves of the book stores only provide a limited amount of titles to feed the hunger of knowledge of the many Puppetteers out there, who are often forced to spend the day either on the IRC support channel or browsing Stack Overflow. Mastering Puppet, which covers the previous version of Puppet, is an excellent companion for any experienced Puppetteer looking for a discussion on advanced topics.Before discussing the content of the book, as stated above, it is very important to make it crystal clear that Mastering Puppet does not cover Puppet 4, but the previous version of the software, that is 3. Significant non backward compatible changes were made to both the language and its configuration. While, as I will discuss in a moment, the book is still very valuable, it does require the reader to be aware of it and, mainly, to be already aware of what changed and thus, what, of the topics discussed by the author, no longer apply to the current stable version of Puppet. On the other hand, it is also true that many companies are waiting a bit longer before switching to Puppet 4, mostly because their code need to be refactored and also because tools, such as Foreman, are still being ported.As mentioned when introducing this review, Mastering Puppet, as the title suggests, does cover advanced topics. The reader is expected to know how to properly configure Puppet and write his own classes and modules. 
Concepts such as types and providers are supposed to be known. So, rather than introducing the language features and the basic commands to get started, the author focuses on topics such as deploying Puppet either as a master/slaves or masterless.The first chapter, for example, does exactly this. Not only does the author show both approaches, with their pros and cons; he also discusses how the scenario changes when the number of nodes significantly increases, making it impossible for a single master to take care of the whole infrastructure. While the differences of the different approaches can be already known to the reader, what I find interesting here is the approach of the author: presenting different solutions to a problem, taking into account scalability. What are the options? When is this solution better than the other? Why?As stated several times already, the book covers Puppet 4. This new version of Puppet strongly relies on Hiera as an external source of data. This means that, for example, chapter 2, which covers different strategies to organize the data, is somehow outdated, now. Still, the chapter is worth the read, not only because there are many infrastructure still relying on Puppet 3, but also because it is very informative to see how the author presents and compares different solutions to that common problem. Something similar happens in chapter 3, which is about environments. Puppet 4 enforces environments, but these pages are still very worth a read. Here the author presents different approaches to exploit environments to organize the data: a single hierarchy with the environment as a hierarchy item; and multiple hierarchies where the path to the hieradata comes from the environment setting itself.Among the other topics covered are reporting, where the author presents Syslog, IRC, Foreman and the Puppet Dashboard; and exported resources. 
Exported resources and puppetdb, which are part of chapter 8, are one of the concepts that I have particularly enjoyed reading. The examples presented by the author are clear and easy to follow and the concepts are concisely and exhaustively discussed.On top of all of this, throughout the book we find plenty of small boxes with tips to get the most out of a concept just discussed or to avoid common pitfalls. If I have to find something negative about this book, well, I could complain that chapter 9, which is about design patterns and roles, was too short. That is a very complex and important topic that, probably, deserves a book on its own.Overall, an excellent book. I am very happy with it. Despite being outdated, it still delivers much to any Puppetteers looking for material covering advanced topics. Definitely a suggested read.
Amazon Verified review Amazon
Szasz Tamas Sep 28, 2014
Full star icon Full star icon Full star icon Full star icon Empty star icon 4
I just received 3 weeks ago a copy of this book from Packt publishing. After reading over the book, I would say it's something what worth read if somebody is working with puppet in an enterprise and not only with 5-10 servers. I'm working with puppet since 3 years and had a lot of problem with scalability and dynamic structure in the past. Many of this problems are well described in this book with possible solutions over examples. I definitely recommend this book for professionals, who are working with number of servers more then 25-50, and on multiple environment / location. I only give 4 stars because the programming part (types and providers) is not well enough described compared with other books focused on implementing types and providers. The chapters scaling, organizing, reporting are especially interesting and worth to read *before* starting to use puppet in an enterprise with hundred of nodes.
Amazon Verified review Amazon
Get free access to Packt library with over 7500+ books and video courses for 7 days!
Start Free Trial

FAQs

What is the delivery time and cost of print book? Chevron down icon Chevron up icon

Shipping Details

USA:

'

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-bissau
  9. Iran
  10. Lebanon
  11. Libiya Arab Jamahriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge? Chevron down icon Chevron up icon

Customs duty are charges levied on goods when they cross international borders. It is a tax that is imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order? Chevron down icon Chevron up icon

The orders shipped to the countries that are listed under EU27 will not bear custom charges. They are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

A custom duty or localized taxes may be applicable on the shipment and would be charged by the recipient country outside of the EU27 which should be paid by the customer and these duties are not included in the shipping charges been charged on the order.

How do I know my custom duty charges? Chevron down icon Chevron up icon

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $ 50, for you to receive a package, you will have to pay additional import tax of 19% which will be $ 9.50 to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over € 22, for you to receive a package, you will have to pay additional import tax of 18% which will be € 3.96 to the courier service.
How can I cancel my order? Chevron down icon Chevron up icon

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e. Packt Publishing agrees to replace your printed book because it arrives damaged or material defect in book), Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact our Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace or refund the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while it is being made available to you (i.e., during download), contact our Customer Relations Team at customercare@packt.com within 14 days of purchase and we will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms your refund, you should receive it within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

If your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on applicable laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Videos, and subscriptions. GST is charged to Indian customers for eBook and Video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal
What is the delivery time and cost of print books?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days.
Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Delivers to P.O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT, and QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P.O. Boxes and private residences in Australia within 4-5 business days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Americas:

Premium: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM UK time begin printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM UK time (in our internal systems) on a business day, or at any time on the weekend, begin printing two business days later. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela