Chapter 8. Internode Coordination
 | "Rest is not idleness, and to lie sometimes on the grass under trees on a summer's day, listening to the murmur of the water, or watching the clouds float across the sky, is by no means a waste of time." |  |
 | --John Lubbock |
In this chapter, we will cover the following recipes:
- Managing firewalls with iptables
- Building high-availability services using Heartbeat
- Managing NFS servers and file shares
- Using HAProxy to load-balance multiple web servers
- Managing Docker with Puppet
Introduction
As powerful as Puppet is to manage the configuration of a single server, it's even more useful when coordinating many machines. In this chapter, we'll explore ways to use Puppet to help you create high-availability clusters, share files across your network, set up automated firewalls, and use load-balancing to get more out of the machines you have. We'll use exported resources as the communication between nodes.
Managing firewalls with iptables
In this chapter, we will begin to configure services that require communication between hosts over the network. Most Linux distributions will default to running a host-based firewall, iptables. If you want your hosts to communicate with each other, you have two options: turn off iptables or configure iptables to allow the communication.
I prefer to leave iptables turned on and configure access. Keeping iptables enabled adds another layer of defense across the network. iptables isn't a magic bullet that will make your system secure, but it will block access to services you didn't intend to expose to the network.
Configuring iptables properly is a complicated task that requires deep knowledge of networking. The example presented here is a simplification. If you are unfamiliar with iptables, I suggest you research it before continuing. More information can be found at http://wiki.centos.org/HowTos/Network/IPTables or https://help.ubuntu.com/community/IptablesHowTo.
Getting ready
In the following examples, we'll be using the Puppet Labs Firewall module to configure iptables. Prepare by installing the module into your Git repository with puppet module install:
t@mylaptop ~ $ puppet module install -i ~/puppet/modules puppetlabs-firewall
Notice: Preparing to install into /home/thomas/puppet/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
/home/thomas/puppet/modules
└── puppetlabs-firewall (v1.2.0)
How to do it...
To configure the firewall module, we need to create a set of rules that will be applied before all other rules. As a simple example, we'll create the following rules:
- Allow all traffic on the loopback (lo) interface
- Allow all ICMP traffic
- Allow all traffic that is part of an established connection (ESTABLISHED, RELATED)
- Allow all TCP traffic to port 22 (ssh)
We will create a myfw (my firewall) class to configure the firewall module. We will then apply the myfw class to a node to have iptables configured on that node:
- Create a class to contain these rules and call it myfw::pre:

class myfw::pre {
  Firewall {
    require => undef,
  }
  firewall { '0000 Allow all traffic on loopback':
    proto   => 'all',
    iniface => 'lo',
    action  => 'accept',
  }
  firewall { '0001 Allow all ICMP':
    proto  => 'icmp',
    action => 'accept',
  }
  firewall { '0002 Allow all established traffic':
    proto  => 'all',
    state  => ['RELATED', 'ESTABLISHED'],
    action => 'accept',
  }
  firewall { '0022 Allow all TCP on port 22 (ssh)':
    proto  => 'tcp',
    port   => '22',
    action => 'accept',
  }
}
- When traffic doesn't match any of the previous rules, we want a final rule that will drop the traffic. Create the class myfw::post to contain the default drop rule:

class myfw::post {
  firewall { '9999 Drop all other traffic':
    proto  => 'all',
    action => 'drop',
    before => undef,
  }
}
- Create a myfw class, which will include myfw::pre and myfw::post to configure the firewall:

class myfw {
  include firewall
  # our rulesets
  include myfw::post
  include myfw::pre
  # clear all the rules
  resources { "firewall":
    purge => true
  }
  # resource defaults
  Firewall {
    before  => Class['myfw::post'],
    require => Class['myfw::pre'],
  }
}
- Attach the myfw class to a node definition; I'll do this to my cookbook node:

node cookbook {
  include myfw
}
- Run Puppet on cookbook to see whether the firewall rules have been applied:

[root@cookbook ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for cookbook.example.com
Info: Applying configuration version '1415512948'
Notice: /Stage[main]/Myfw::Pre/Firewall[0000 Allow all traffic on loopback]/ensure: created
Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'
Notice: /Stage[main]/Myfw::Pre/Firewall[0001 Allow all ICMP]/ensure: created
Notice: /Stage[main]/Myfw::Pre/Firewall[0022 Allow all TCP on port 22 (ssh)]/ensure: created
Notice: /Stage[main]/Myfw::Pre/Firewall[0002 Allow all established traffic]/ensure: created
Notice: /Stage[main]/Myfw::Post/Firewall[9999 Drop all other traffic]/ensure: created
Notice: /Stage[main]/Myfw/Firewall[9003 49bcd611c61bdd18b235cea46ef04fae]/ensure: removed
Notice: Finished catalog run in 15.65 seconds
- Verify the new rules with iptables-save:

# Generated by iptables-save v1.4.7 on Sun Nov 9 01:18:30 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [74:35767]
-A INPUT -i lo -m comment --comment "0000 Allow all traffic on loopback" -j ACCEPT
-A INPUT -p icmp -m comment --comment "0001 Allow all ICMP" -j ACCEPT
-A INPUT -m comment --comment "0002 Allow all established traffic" -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --ports 22 -m comment --comment "0022 Allow all TCP on port 22 (ssh)" -j ACCEPT
-A INPUT -m comment --comment "9999 Drop all other traffic" -j DROP
COMMIT
# Completed on Sun Nov 9 01:18:30 2014
How it works...
This is a great example of how to use metaparameters to achieve a complex ordering with little effort. Our myfw module achieves the following configuration:
All the rules in the myfw::pre class are guaranteed to come before any other firewall rules we define. The rules in myfw::post are guaranteed to come after any other firewall rules. So, we have the rules in myfw::pre first, then any other rules, followed by the rules in myfw::post.
Our definition for the myfw class sets up this dependency with resource defaults:
# resource defaults
Firewall {
before => Class['myfw::post'],
require => Class['myfw::pre'],
}
These defaults first tell Puppet that any firewall resource should be executed before anything in the myfw::post class. Second, they tell Puppet that any firewall resource should require that the resources in myfw::pre have already been executed.
When we defined the myfw::pre class, we reset the require metaparameter with a resource default for Firewall resources. This ensures that the resources within the myfw::pre class don't require themselves before executing (otherwise, Puppet would complain that we created a cyclic dependency):
Firewall {
require => undef,
}
We use the same trick in our myfw::post definition. In this case, we only have a single rule in the post class, so we simply remove the before requirement:
firewall { '9999 Drop all other traffic':
proto => 'all',
action => 'drop',
before => undef,
}
Finally, we use the resources metatype to purge all the existing iptables rules on the system. We do this to ensure we have a consistent set of rules; only rules defined in Puppet will persist:
# clear all the rules
resources { "firewall":
purge => true
}
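If you need to preserve a few externally managed rules while still purging everything else, the module also provides a firewallchain type that can purge per chain with exceptions. The following is a minimal sketch based on the puppetlabs-firewall documentation; the ignore patterns shown are hypothetical examples, not rules from this recipe:

# Purge unmanaged rules from the IPv4 INPUT chain only, keeping any
# rule that matches one of the ignore patterns (regular expressions).
firewallchain { 'INPUT:filter:IPv4':
  purge  => true,
  ignore => [
    '-j fail2ban-ssh',     # hypothetical: keep rules added by fail2ban
    '--comment "manual"',  # hypothetical: keep rules tagged as manual
  ],
}

Note that you would use either this per-chain purge or the global resources purge shown above, not both, since the global purge removes any rule Puppet doesn't manage regardless of chain.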
There's more...
As we hinted, we can now define firewall resources in our manifests and have them applied to the iptables configuration after the initialization rules (myfw::pre) but before the final drop rule (myfw::post). For example, to allow HTTP traffic on our cookbook machine, modify the node definition as follows:
include myfw
firewall {'0080 Allow HTTP':
proto => 'tcp',
action => 'accept',
port => 80,
}
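Rules declared this way pick up the defaults from the myfw class automatically, so they always land between the pre and post rules. As a further sketch, the firewall type's source parameter restricts a rule to a particular network; the rule title and address range here are hypothetical examples:

firewall { '0443 Allow HTTPS from the local network':
  proto  => 'tcp',
  port   => 443,
  source => '192.168.122.0/24', # hypothetical LAN range
  action => 'accept',
}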
Run Puppet on cookbook:
[root@cookbook ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for cookbook.example.com
Info: Applying configuration version '1415515392'
Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'
Notice: /Stage[main]/Main/Node[cookbook]/Firewall[0080 Allow HTTP]/ensure: created
Notice: Finished catalog run in 2.74 seconds
Verify that the new rule has been added after the last myfw::pre rule (port 22, ssh):
[root@cookbook ~]# iptables-save
# Generated by iptables-save v1.4.7 on Sun Nov 9 01:46:38 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [41:26340]
-A INPUT -i lo -m comment --comment "0000 Allow all traffic on loopback" -j ACCEPT
-A INPUT -p icmp -m comment --comment "0001 Allow all ICMP" -j ACCEPT
-A INPUT -m comment --comment "0002 Allow all established traffic" -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --ports 22 -m comment --comment "0022 Allow all TCP on port 22 (ssh)" -j ACCEPT
-A INPUT -p tcp -m multiport --ports 80 -m comment --comment "0080 Allow HTTP" -j ACCEPT
-A INPUT -m comment --comment "9999 Drop all other traffic" -j DROP
COMMIT
# Completed on Sun Nov 9 01:46:38 2014
Tip
The Puppet Labs Firewall module has a built-in notion of order; this is why all our firewall resource titles begin with a number. This is a requirement: the module orders resources based on their titles. You should keep this in mind when naming your firewall resources.
In the next section, we'll use our firewall module to ensure that two nodes can communicate as required.
Building high-availability services using Heartbeat
High-availability services are those that can survive the failure of an individual machine or network connection. The primary technique for high availability is redundancy, otherwise known as throwing hardware at the problem. Although the eventual failure of an individual server is certain, the simultaneous failure of two servers is unlikely enough that this provides a good level of redundancy for most applications.
One of the simplest ways to build a redundant pair of servers is to have them share an IP address using Heartbeat. Heartbeat is a daemon that runs on both machines and exchanges regular messages, called heartbeats, between the two. One server is the primary and normally holds the resource; in this case, an IP address (known as a virtual IP, or VIP). If the secondary server fails to detect a heartbeat from the primary server, it can take over the address, ensuring continuity of service. In real-world scenarios, you may want more machines involved in the VIP, but for this example, two machines work well enough.
In this recipe, we'll set up two machines in this configuration using Puppet, and I'll explain how to use it to provide a high-availability service.
Getting ready
You'll need two machines, of course, and an extra IP address to use as the VIP. You can usually request this from your ISP, if necessary. In this example, I'll be using machines named cookbook and cookbook2, with cookbook being the primary. We'll add the hosts to the heartbeat configuration.
How to do it…
Follow these steps to build the example:
- Create the file modules/heartbeat/manifests/init.pp with the following contents:

# Manage Heartbeat
class heartbeat {
  package { 'heartbeat':
    ensure => installed,
  }
  service { 'heartbeat':
    ensure  => running,
    enable  => true,
    require => Package['heartbeat'],
  }
  file { '/etc/ha.d/authkeys':
    content => "auth 1\n1 sha1 TopSecret",
    mode    => '0600',
    require => Package['heartbeat'],
    notify  => Service['heartbeat'],
  }
  include myfw
  firewall { '0694 Allow UDP ha-cluster':
    proto  => 'udp',
    port   => 694,
    action => 'accept',
  }
}
- Create the file modules/heartbeat/manifests/vip.pp with the following contents:

# Manage a specific VIP with Heartbeat
class heartbeat::vip($node1,$node2,$ip1,$ip2,$vip,$interface='eth0:1') {
  include heartbeat
  file { '/etc/ha.d/haresources':
    content => "${node1} IPaddr::${vip}/${interface}\n",
    require => Package['heartbeat'],
    notify  => Service['heartbeat'],
  }
  file { '/etc/ha.d/ha.cf':
    content => template('heartbeat/vip.ha.cf.erb'),
    require => Package['heartbeat'],
    notify  => Service['heartbeat'],
  }
}
- Create the file modules/heartbeat/templates/vip.ha.cf.erb with the following contents:

  use_logd yes
  udpport 694
  autojoin none
  ucast eth0 <%= @ip1 %>
  ucast eth0 <%= @ip2 %>
  keepalive 1
  deadtime 10
  warntime 5
  auto_failback off
  node <%= @node1 %>
  node <%= @node2 %>
- Modify your site.pp file as follows. Replace the ip1 and ip2 addresses with the primary IP addresses of your two nodes, vip with the virtual IP address you'll be using, and node1 and node2 with the hostnames of the two nodes. (Heartbeat uses the fully-qualified domain name of a node to determine whether it's a member of the cluster, so the values for node1 and node2 should match what's given by facter fqdn on each machine.):

  node cookbook,cookbook2 {
    class { 'heartbeat::vip':
      ip1   => '192.168.122.132',
      ip2   => '192.168.122.133',
      node1 => 'cookbook.example.com',
      node2 => 'cookbook2.example.com',
      vip   => '192.168.122.200/24',
    }
  }
- Run Puppet on each of the two servers:

  [root@cookbook2 ~]# puppet agent -t
  Info: Retrieving pluginfacts
  Info: Retrieving plugin
  Info: Loading facts
  Info: Caching catalog for cookbook2.example.com
  Info: Applying configuration version '1415517914'
  Notice: /Stage[main]/Heartbeat/Package[heartbeat]/ensure: created
  Notice: /Stage[main]/Myfw::Pre/Firewall[0000 Allow all traffic on loopback]/ensure: created
  Notice: /Stage[main]/Myfw::Pre/Firewall[0001 Allow all ICMP]/ensure: created
  Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'
  Notice: /Stage[main]/Myfw::Pre/Firewall[0022 Allow all TCP on port 22 (ssh)]/ensure: created
  Notice: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/haresources]/ensure: defined content as '{md5}fb9f5d9d2b26e3bddf681676d8b2129c'
  Info: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/haresources]: Scheduling refresh of Service[heartbeat]
  Notice: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/ha.cf]/ensure: defined content as '{md5}84da22f7ac1a3629f69dcf29ccfd8592'
  Info: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/ha.cf]: Scheduling refresh of Service[heartbeat]
  Notice: /Stage[main]/Heartbeat/Service[heartbeat]/ensure: ensure changed 'stopped' to 'running'
  Info: /Stage[main]/Heartbeat/Service[heartbeat]: Unscheduling refresh on Service[heartbeat]
  Notice: /Stage[main]/Myfw::Pre/Firewall[0002 Allow all established traffic]/ensure: created
  Notice: /Stage[main]/Myfw::Post/Firewall[9999 Drop all other traffic]/ensure: created
  Notice: /Stage[main]/Heartbeat/Firewall[0694 Allow UDP ha-cluster]/ensure: created
  Notice: Finished catalog run in 12.64 seconds
- Verify that the VIP is running on one of the nodes (it should be on cookbook at this point; note that you will need to use the ip command, as ifconfig will not show the address):

  [root@cookbook ~]# ip addr show dev eth0
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 52:54:00:c9:d5:63 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.132/24 brd 192.168.122.255 scope global eth0
      inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1
      inet6 fe80::5054:ff:fec9:d563/64 scope link
         valid_lft forever preferred_lft forever
- As we can see, cookbook has the eth0:1 interface active. If you stop heartbeat on cookbook, cookbook2 will create eth0:1 and take over:

  [root@cookbook2 ~]# ip a show dev eth0
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 52:54:00:ee:9c:fa brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.133/24 brd 192.168.122.255 scope global eth0
      inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1
      inet6 fe80::5054:ff:feee:9cfa/64 scope link
         valid_lft forever preferred_lft forever
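To check failover from a script rather than by eye, you can scan the output of ip -o addr for the VIP. The helper below is a hypothetical illustration (not part of Heartbeat or this recipe), shown here against hardcoded sample output rather than a live command:

```python
# Hypothetical helper: report whether this host currently holds the VIP
# by scanning "ip -o addr"-style output. The VIP and the sample output
# follow the example values used in the recipe.
def holds_vip(ip_output, vip="192.168.122.200"):
    return any(f"inet {vip}/" in line for line in ip_output.splitlines())

# On a real node you would capture the output of "ip -o addr" instead;
# here we embed sample output so the sketch is self-contained.
sample = (
    "2: eth0    inet 192.168.122.133/24 brd 192.168.122.255 scope global eth0\n"
    "2: eth0    inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1\n"
)
print(holds_vip(sample))
```

Running this check on both nodes tells you which one currently owns the address, which is handy when testing auto_failback behavior.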
How it works…
We need to install Heartbeat first of all, using the heartbeat class:
# Manage Heartbeat
class heartbeat {
package { 'heartbeat':
ensure => installed,
}
...
}
Next, we use the heartbeat::vip class to manage a specific virtual IP:
# Manage a specific VIP with Heartbeat
class heartbeat::vip($node1, $node2, $ip1, $ip2, $vip, $interface='eth0:1') {
include heartbeat
As you can see, the class includes an interface parameter; by default, the VIP will be configured on eth0:1, but if you need to use a different interface, you can pass it in using this parameter.
Each pair of servers that we configure with a virtual IP will use the heartbeat::vip class with the same parameters. These will be used to build the haresources file:
file { '/etc/ha.d/haresources':
content => "${node1} IPaddr::${vip}/${interface}\n",
notify => Service['heartbeat'],
require => Package['heartbeat'],
}
This tells Heartbeat about the resource it should manage (that's a Heartbeat resource, such as an IP address or a service, not a Puppet resource). The resulting haresources file might look as follows:
cookbook.example.com IPaddr::192.168.122.200/24/eth0:1
The file is interpreted by Heartbeat as follows:
- cookbook.example.com: This is the name of the primary node, which should be the default owner of the resource
- IPaddr: This is the type of resource to manage; in this case, an IP address
- 192.168.122.200/24: This is the value for the IP address
- eth0:1: This is the virtual interface to configure with the managed IP address
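As a sanity check on the format, the example line can be pulled apart programmatically. This small parser is purely illustrative (it is not part of Heartbeat or Puppet), and assumes the single-resource IPaddr form shown above:

```python
# Illustrative parser for a one-resource haresources line of the form:
#   <node> IPaddr::<ip>/<prefix>/<interface>
def parse_haresources(line):
    node, resource = line.split(None, 1)
    rtype, _, args = resource.partition("::")
    parts = args.split("/")
    ip_cidr = "/".join(parts[:2])            # e.g. 192.168.122.200/24
    iface = parts[2] if len(parts) > 2 else None  # e.g. eth0:1, if given
    return {"node": node, "type": rtype, "ip": ip_cidr, "interface": iface}

print(parse_haresources("cookbook.example.com IPaddr::192.168.122.200/24/eth0:1"))
```

The fields it extracts correspond one-to-one with the bullet list above.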
For more information on how Heartbeat is configured, please visit the Linux-HA site at http://linux-ha.org/wiki/Heartbeat.
We will also build the ha.cf file that tells Heartbeat how to communicate between cluster nodes:
file { '/etc/ha.d/ha.cf':
  content => template('heartbeat/vip.ha.cf.erb'),
  notify  => Service['heartbeat'],
  require => Package['heartbeat'],
}
To do this, we use the template file:
use_logd yes
udpport 694
autojoin none
ucast eth0 <%= @ip1 %>
ucast eth0 <%= @ip2 %>
keepalive 1
deadtime 10
warntime 5
auto_failback off
node <%= @node1 %>
node <%= @node2 %>
The interesting values here are the IP addresses of the two nodes (ip1 and ip2), and the names of the two nodes (node1 and node2).
Finally, we create an instance of heartbeat::vip on both machines and pass it an identical set of parameters as follows:
class { 'heartbeat::vip':
ip1 => '192.168.122.132',
ip2 => '192.168.122.133',
node1 => 'cookbook.example.com',
node2 => 'cookbook2.example.com',
vip => '192.168.122.200/24',
}
There's more...
With Heartbeat set up as described in the example, the virtual IP address will be configured on cookbook by default. If something happens to interfere with this (for example, if you halt or reboot cookbook, stop the heartbeat service, or the machine loses network connectivity), cookbook2 will immediately take over the virtual IP.
The auto_failback setting in ha.cf governs what happens next. If auto_failback is set to on, when cookbook becomes available once more, it will automatically take back the IP address. Without auto_failback, the IP will stay where it is until you fail it over manually (by stopping heartbeat on cookbook2, for example).
One common use for a Heartbeat-managed virtual IP is to provide a highly available website or service. To do this, you need to set the DNS name for the service (for example, cat-pictures.com) to point to the virtual IP. Requests for the service will be routed to whichever of the two servers currently holds the virtual IP. If that server goes down, requests will go to the other, with no visible interruption in service to users.
Heartbeat works well for the previous example, but it is not in widespread use in this form. Heartbeat only works in two-node clusters; for n-node clusters, the newer Pacemaker project should be used. More information on Heartbeat, Pacemaker, Corosync, and other clustering packages can be found at http://www.linux-ha.org/wiki/Main_Page.
Managing cluster configuration is one area where exported resources are useful. Each node in a cluster would export information about itself, which could then be collected by the other members of the cluster. Using the puppetlabs-concat module, you can build up a configuration file using exported concat fragments from all the nodes in the cluster.
Remember to look at the Forge before starting your own module. If nothing else, you'll get some ideas that you can use in your own module. Corosync can be managed with the Puppet Labs module at https://forge.puppetlabs.com/puppetlabs/corosync.
cookbook
and cookbook2
, with cookbook
being the primary. We'll add the hosts to the heartbeat configuration.
How to do it…
Follow these steps to build the example:
- Create the file
modules/heartbeat/manifests/init.pp
with the following contents:# Manage Heartbeat class heartbeat { package { 'heartbeat': ensure => installed, } service { 'heartbeat': ensure => running, enable => true, require => Package['heartbeat'], } file { '/etc/ha.d/authkeys': content => "auth 1\n1 sha1 TopSecret", mode => '0600', require => Package['heartbeat'], notify => Service['heartbeat'], } include myfw firewall {'0694 Allow UDP ha-cluster': proto => 'udp', port => 694, action => 'accept', } }
- Create the file
modules/heartbeat/manifests/vip.pp
with the following contents:# Manage a specific VIP with Heartbeat class heartbeat::vip($node1,$node2,$ip1,$ip2,$vip,$interface='eth0:1') { include heartbeat file { '/etc/ha.d/haresources': content => "${node1} IPaddr::${vip}/${interface}\n", require => Package['heartbeat'], notify => Service['heartbeat'], } file { '/etc/ha.d/ha.cf': content => template('heartbeat/vip.ha.cf.erb'), require => Package['heartbeat'], notify => Service['heartbeat'], } }
- Create the file
modules/heartbeat/templates/vip.ha.cf.erb
with the following contents:use_logd yes udpport 694 autojoin none ucast eth0 <%= @ip1 %> ucast eth0 <%= @ip2 %> keepalive 1 deadtime 10 warntime 5 auto_failback off node <%= @node1 %> node <%= @node2 %>
- Modify your
site.pp
file as follows. Replace theip1
andip2
addresses with the primary IP addresses of your two nodes,vip
with the virtual IP address you'll be using, andnode1
andnode2
with the hostnames of the two nodes. (Heartbeat uses the fully-qualified domain name of a node to determine whether it's a member of the cluster, so the values fornode1
andnode2
should match what's given byfacter fqdn
on each machine.):node cookbook,cookbook2 { class { 'heartbeat::vip': ip1 => '192.168.122.132', ip2 => '192.168.122.133', node1 => 'cookbook.example.com', node2 => 'cookbook2.example.com', vip => '192.168.122.200/24', } }
- Run Puppet on each of the two servers:
[root@cookbook2 ~]# puppet agent -t Info: Retrieving pluginfacts Info: Retrieving plugin Info: Loading facts Info: Caching catalog for cookbook2.example.com Info: Applying configuration version '1415517914' Notice: /Stage[main]/Heartbeat/Package[heartbeat]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0000 Allow all traffic on loopback]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0001 Allow all ICMP]/ensure: created Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u' Notice: /Stage[main]/Myfw::Pre/Firewall[0022 Allow all TCP on port 22 (ssh)]/ensure: created Notice: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/haresources]/ensure: defined content as '{md5}fb9f5d9d2b26e3bddf681676d8b2129c' Info: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/haresources]: Scheduling refresh of Service[heartbeat] Notice: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/ha.cf]/ensure: defined content as '{md5}84da22f7ac1a3629f69dcf29ccfd8592' Info: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/ha.cf]: Scheduling refresh of Service[heartbeat] Notice: /Stage[main]/Heartbeat/Service[heartbeat]/ensure: ensure changed 'stopped' to 'running' Info: /Stage[main]/Heartbeat/Service[heartbeat]: Unscheduling refresh on Service[heartbeat] Notice: /Stage[main]/Myfw::Pre/Firewall[0002 Allow all established traffic]/ensure: created Notice: /Stage[main]/Myfw::Post/Firewall[9999 Drop all other traffic]/ensure: created Notice: /Stage[main]/Heartbeat/Firewall[0694 Allow UDP ha-cluster]/ensure: created Notice: Finished catalog run in 12.64 seconds
- Verify that the VIP is running on one of the nodes (it should be on cookbook at this point; note that you will need to use the
ip
command,ifconfig
will not show the address):[root@cookbook ~]# ip addr show dev eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:c9:d5:63 brd ff:ff:ff:ff:ff:ff inet 192.168.122.132/24 brd 192.168.122.255 scope global eth0 inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1 inet6 fe80::5054:ff:fec9:d563/64 scope link valid_lft forever preferred_lft forever
- As we can see, cookbook has the
eth0:1
interface active. If you stop heartbeat oncookbook
,cookbook2
will createeth0:1
and take over:[root@cookbook2 ~]# ip a show dev eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:ee:9c:fa brd ff:ff:ff:ff:ff:ff inet 192.168.122.133/24 brd 192.168.122.255 scope global eth0 inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1 inet6 fe80::5054:ff:feee:9cfa/64 scope link valid_lft forever preferred_lft forever
How it works…
We need to install Heartbeat first of all, using the heartbeat
class:
# Manage Heartbeat
class heartbeat {
package { 'heartbeat':
ensure => installed,
}
...
}
Next, we use the heartbeat::vip
class to manage a specific virtual IP:
# Manage a specific VIP with Heartbeat
class
heartbeat::vip($node1,$node2,$ip1,$ip2,$vip,$interface='eth0:1') {
include heartbeat
As you can see, the class includes an interface
parameter; by default, the VIP will be configured on eth0:1
, but if you need to use a different interface, you can pass it in using this parameter.
Each pair of servers that we configure with a virtual IP will use the heartbeat::vip
class with the same parameters. These will be used to build the haresources
file:
file { '/etc/ha.d/haresources':
content => "${node1} IPaddr::${vip}/${interface}\n",
notify => Service['heartbeat'],
require => Package['heartbeat'],
}
This tells Heartbeat about the resource it should manage (that's a Heartbeat resource, such as an IP address or a service, not a Puppet resource). The resulting haresources
file might look as follows:
cookbook.example.com IPaddr::192.168.122.200/24/eth0:1
The file is interpreted by Heartbeat as follows:
cookbook.example.com
: This is the name of the primary node, which should be the default owner of the resourceIPaddr
: This is the type of resource to manage; in this case, an IP address192.168.122.200/24
: This is the value for the IP addresseth0:1
: This is the virtual interface to configure with the managed IP address
For more information on how heartbeat is configured, please visit the high-availability site at http://linux-ha.org/wiki/Heartbeat.
We will also build the ha.cf
file that tells Heartbeat how to communicate between cluster nodes:
file { '/etc/ha.d/ha.cf': content => template('heartbeat/vip.ha.cf.erb'), notify => Service['heartbeat'], require => Package['heartbeat'], }
To do this, we use the template file:
use_logd yes udpport 694 autojoin none ucast eth0 <%= @ip1 %> ucast eth0 <%= @ip2 %> keepalive 1 deadtime 10 warntime 5 auto_failback off node <%= @node1 %> node <%= @node2 %>
The interesting values here are the IP addresses of the two nodes (ip1
and ip2
), and the names of the two nodes (node1
and node2
).
Finally, we create an instance of heartbeat::vip
on both machines and pass it an identical set of parameters as follows:
class { 'heartbeat::vip':
ip1 => '192.168.122.132',
ip2 => '192.168.122.133',
node1 => 'cookbook.example.com',
node2 => 'cookbook2.example.com',
vip => '192.168.122.200/24',
}
There's more...
With Heartbeat set up as described in the example, the virtual IP address will be configured on cookbook
by default. If something happens to interfere with this (for example, if you halt or reboot cookbook
, or stop the heartbeat
service, or the machine loses network connectivity), cookbook2
will immediately take over the virtual IP.
The auto_failback
setting in ha.cf
governs what happens next. If auto_failback
is set to on
, when cookbook
becomes available once more, it will automatically take over the IP address. Without auto_failback
, the IP will stay where it is until you manually fail it again (by stopping heartbeart
on cookbook2
, for example).
One common use for a Heartbeat-managed virtual IP is to provide a highly available website or service. To do this, you need to set the DNS name for the service (for example, cat-pictures.com
) to point to the virtual IP. Requests for the service will be routed to whichever of the two servers currently has the virtual IP. If this server should go down, requests will go to the other, with no visible interruption in service to users.
Heartbeat works great for the previous example but is not in widespread use in this form. Heartbeat only works in two node clusters; for n-node clusters, the newer pacemaker project should be used. More information on Heartbeat, pacemaker, corosync, and other clustering packages can be found at http://www.linux-ha.org/wiki/Main_Page.
Managing cluster configuration is one area where exported resources are useful. Each node in a cluster would export information about itself, which could then be collected by the other members of the cluster. Using the puppetlabs-concat module, you can build up a configuration file using exported concat fragments from all the nodes in the cluster.
Remember to look at the Forge before starting your own module. If nothing else, you'll get some ideas that you can use in your own module. Corosync can be managed with the Puppet labs module at https://forge.puppetlabs.com/puppetlabs/corosync.
modules/heartbeat/manifests/init.pp
with the following contents:# Manage Heartbeat class heartbeat { package { 'heartbeat': ensure => installed, } service { 'heartbeat': ensure => running, enable => true, require => Package['heartbeat'], } file { '/etc/ha.d/authkeys': content => "auth 1\n1 sha1 TopSecret", mode => '0600', require => Package['heartbeat'], notify => Service['heartbeat'], } include myfw firewall {'0694 Allow UDP ha-cluster': proto => 'udp', port => 694, action => 'accept', } }
- file
modules/heartbeat/manifests/vip.pp
with the following contents:# Manage a specific VIP with Heartbeat class heartbeat::vip($node1,$node2,$ip1,$ip2,$vip,$interface='eth0:1') { include heartbeat file { '/etc/ha.d/haresources': content => "${node1} IPaddr::${vip}/${interface}\n", require => Package['heartbeat'], notify => Service['heartbeat'], } file { '/etc/ha.d/ha.cf': content => template('heartbeat/vip.ha.cf.erb'), require => Package['heartbeat'], notify => Service['heartbeat'], } }
- Create the file
modules/heartbeat/templates/vip.ha.cf.erb
with the following contents:use_logd yes udpport 694 autojoin none ucast eth0 <%= @ip1 %> ucast eth0 <%= @ip2 %> keepalive 1 deadtime 10 warntime 5 auto_failback off node <%= @node1 %> node <%= @node2 %>
- Modify your
site.pp
file as follows. Replace theip1
andip2
addresses with the primary IP addresses of your two nodes,vip
with the virtual IP address you'll be using, andnode1
andnode2
with the hostnames of the two nodes. (Heartbeat uses the fully-qualified domain name of a node to determine whether it's a member of the cluster, so the values fornode1
andnode2
should match what's given byfacter fqdn
on each machine.):node cookbook,cookbook2 { class { 'heartbeat::vip': ip1 => '192.168.122.132', ip2 => '192.168.122.133', node1 => 'cookbook.example.com', node2 => 'cookbook2.example.com', vip => '192.168.122.200/24', } }
- Run Puppet on each of the two servers:
[root@cookbook2 ~]# puppet agent -t Info: Retrieving pluginfacts Info: Retrieving plugin Info: Loading facts Info: Caching catalog for cookbook2.example.com Info: Applying configuration version '1415517914' Notice: /Stage[main]/Heartbeat/Package[heartbeat]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0000 Allow all traffic on loopback]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0001 Allow all ICMP]/ensure: created Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u' Notice: /Stage[main]/Myfw::Pre/Firewall[0022 Allow all TCP on port 22 (ssh)]/ensure: created Notice: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/haresources]/ensure: defined content as '{md5}fb9f5d9d2b26e3bddf681676d8b2129c' Info: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/haresources]: Scheduling refresh of Service[heartbeat] Notice: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/ha.cf]/ensure: defined content as '{md5}84da22f7ac1a3629f69dcf29ccfd8592' Info: /Stage[main]/Heartbeat::Vip/File[/etc/ha.d/ha.cf]: Scheduling refresh of Service[heartbeat] Notice: /Stage[main]/Heartbeat/Service[heartbeat]/ensure: ensure changed 'stopped' to 'running' Info: /Stage[main]/Heartbeat/Service[heartbeat]: Unscheduling refresh on Service[heartbeat] Notice: /Stage[main]/Myfw::Pre/Firewall[0002 Allow all established traffic]/ensure: created Notice: /Stage[main]/Myfw::Post/Firewall[9999 Drop all other traffic]/ensure: created Notice: /Stage[main]/Heartbeat/Firewall[0694 Allow UDP ha-cluster]/ensure: created Notice: Finished catalog run in 12.64 seconds
- Verify that the VIP is running on one of the nodes (it should be on cookbook at this point; note that you will need to use the
ip
command,ifconfig
will not show the address):[root@cookbook ~]# ip addr show dev eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:c9:d5:63 brd ff:ff:ff:ff:ff:ff inet 192.168.122.132/24 brd 192.168.122.255 scope global eth0 inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1 inet6 fe80::5054:ff:fec9:d563/64 scope link valid_lft forever preferred_lft forever
- As we can see, cookbook has the
eth0:1
interface active. If you stop heartbeat oncookbook
,cookbook2
will createeth0:1
and take over:[root@cookbook2 ~]# ip a show dev eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:ee:9c:fa brd ff:ff:ff:ff:ff:ff inet 192.168.122.133/24 brd 192.168.122.255 scope global eth0 inet 192.168.122.200/24 brd 192.168.122.255 scope global secondary eth0:1 inet6 fe80::5054:ff:feee:9cfa/64 scope link valid_lft forever preferred_lft forever
How it works…
We need to install Heartbeat first of all, using the heartbeat
class:
# Manage Heartbeat
class heartbeat {
package { 'heartbeat':
ensure => installed,
}
...
}
Next, we use the heartbeat::vip
class to manage a specific virtual IP:
# Manage a specific VIP with Heartbeat
class
heartbeat::vip($node1,$node2,$ip1,$ip2,$vip,$interface='eth0:1') {
include heartbeat
As you can see, the class includes an interface
parameter; by default, the VIP will be configured on eth0:1
, but if you need to use a different interface, you can pass it in using this parameter.
Each pair of servers that we configure with a virtual IP will use the heartbeat::vip
class with the same parameters. These will be used to build the haresources
file:
file { '/etc/ha.d/haresources':
content => "${node1} IPaddr::${vip}/${interface}\n",
notify => Service['heartbeat'],
require => Package['heartbeat'],
}
This tells Heartbeat about the resource it should manage (that's a Heartbeat resource, such as an IP address or a service, not a Puppet resource). The resulting haresources
file might look as follows:
cookbook.example.com IPaddr::192.168.122.200/24/eth0:1
The file is interpreted by Heartbeat as follows:
cookbook.example.com
: This is the name of the primary node, which should be the default owner of the resourceIPaddr
: This is the type of resource to manage; in this case, an IP address192.168.122.200/24
: This is the value for the IP addresseth0:1
: This is the virtual interface to configure with the managed IP address
For more information on how heartbeat is configured, please visit the high-availability site at http://linux-ha.org/wiki/Heartbeat.
We will also build the ha.cf
file that tells Heartbeat how to communicate between cluster nodes:
file { '/etc/ha.d/ha.cf': content => template('heartbeat/vip.ha.cf.erb'), notify => Service['heartbeat'], require => Package['heartbeat'], }
To do this, we use the template file:
use_logd yes udpport 694 autojoin none ucast eth0 <%= @ip1 %> ucast eth0 <%= @ip2 %> keepalive 1 deadtime 10 warntime 5 auto_failback off node <%= @node1 %> node <%= @node2 %>
The interesting values here are the IP addresses of the two nodes (ip1
and ip2
), and the names of the two nodes (node1
and node2
).
Finally, we create an instance of heartbeat::vip
on both machines and pass it an identical set of parameters as follows:
class { 'heartbeat::vip':
ip1 => '192.168.122.132',
ip2 => '192.168.122.133',
node1 => 'cookbook.example.com',
node2 => 'cookbook2.example.com',
vip => '192.168.122.200/24',
}
There's more...
With Heartbeat set up as described in the example, the virtual IP address will be configured on cookbook
by default. If something happens to interfere with this (for example, if you halt or reboot cookbook
, or stop the heartbeat
service, or the machine loses network connectivity), cookbook2
will immediately take over the virtual IP.
The auto_failback
setting in ha.cf
governs what happens next. If auto_failback
is set to on
, when cookbook
becomes available once more, it will automatically take over the IP address. Without auto_failback
, the IP will stay where it is until you manually fail it again (by stopping heartbeart
on cookbook2
, for example).
One common use for a Heartbeat-managed virtual IP is to provide a highly available website or service. To do this, you need to set the DNS name for the service (for example, cat-pictures.com
) to point to the virtual IP. Requests for the service will be routed to whichever of the two servers currently has the virtual IP. If this server should go down, requests will go to the other, with no visible interruption in service to users.
Heartbeat works great for the previous example but is not in widespread use in this form. Heartbeat only works in two node clusters; for n-node clusters, the newer pacemaker project should be used. More information on Heartbeat, pacemaker, corosync, and other clustering packages can be found at http://www.linux-ha.org/wiki/Main_Page.
Managing cluster configuration is one area where exported resources are useful. Each node in a cluster would export information about itself, which could then be collected by the other members of the cluster. Using the puppetlabs-concat module, you can build up a configuration file using exported concat fragments from all the nodes in the cluster.
Remember to look at the Forge before starting your own module. If nothing else, you'll get some ideas that you can use in your own module. Corosync can be managed with the Puppet labs module at https://forge.puppetlabs.com/puppetlabs/corosync.
Heartbeat first of all, using the heartbeat
class:
# Manage Heartbeat
class heartbeat {
package { 'heartbeat':
ensure => installed,
}
...
}
Next, we use the heartbeat::vip
class to manage a specific virtual IP:
# Manage a specific VIP with Heartbeat
class
heartbeat::vip($node1,$node2,$ip1,$ip2,$vip,$interface='eth0:1') {
include heartbeat
As you can see, the class includes an interface
parameter; by default, the VIP will be configured on eth0:1
, but if you need to use a different interface, you can pass it in using this parameter.
Each pair of servers that we configure with a virtual IP will use the heartbeat::vip
class with the same parameters. These will be used to build the haresources
file:
file { '/etc/ha.d/haresources':
content => "${node1} IPaddr::${vip}/${interface}\n",
notify => Service['heartbeat'],
require => Package['heartbeat'],
}
This tells Heartbeat about the resource it should manage (that's a Heartbeat resource, such as an IP address or a service, not a Puppet resource). The resulting haresources
file might look as follows:
cookbook.example.com IPaddr::192.168.122.200/24/eth0:1
The file is interpreted by Heartbeat as follows:
cookbook.example.com
: This is the name of the primary node, which should be the default owner of the resourceIPaddr
: This is the type of resource to manage; in this case, an IP address192.168.122.200/24
: This is the value for the IP addresseth0:1
: This is the virtual interface to configure with the managed IP address
For more information on how heartbeat is configured, please visit the high-availability site at http://linux-ha.org/wiki/Heartbeat.
We will also build the ha.cf
file that tells Heartbeat how to communicate between cluster nodes:
file { '/etc/ha.d/ha.cf': content => template('heartbeat/vip.ha.cf.erb'), notify => Service['heartbeat'], require => Package['heartbeat'], }
To do this, we use the template file:
use_logd yes udpport 694 autojoin none ucast eth0 <%= @ip1 %> ucast eth0 <%= @ip2 %> keepalive 1 deadtime 10 warntime 5 auto_failback off node <%= @node1 %> node <%= @node2 %>
The interesting values here are the IP addresses of the two nodes (ip1
and ip2
), and the names of the two nodes (node1
and node2
).
Finally, we create an instance of heartbeat::vip
on both machines and pass it an identical set of parameters as follows:
class { 'heartbeat::vip':
ip1 => '192.168.122.132',
ip2 => '192.168.122.133',
node1 => 'cookbook.example.com',
node2 => 'cookbook2.example.com',
vip => '192.168.122.200/24',
}
There's more...
With Heartbeat set up as described in the example, the virtual IP address will be configured on cookbook
by default. If something happens to interfere with this (for example, if you halt or reboot cookbook
, or stop the heartbeat
service, or the machine loses network connectivity), cookbook2
will immediately take over the virtual IP.
The auto_failback
setting in ha.cf
governs what happens next. If auto_failback
is set to on
, when cookbook
becomes available once more, it will automatically take over the IP address. Without auto_failback
, the IP will stay where it is until you manually fail it again (by stopping heartbeart
on cookbook2
, for example).
One common use for a Heartbeat-managed virtual IP is to provide a highly available website or service. To do this, you need to set the DNS name for the service (for example, cat-pictures.com
) to point to the virtual IP. Requests for the service will be routed to whichever of the two servers currently has the virtual IP. If this server should go down, requests will go to the other, with no visible interruption in service to users.
Heartbeat works great for the previous example but is not in widespread use in this form. Heartbeat only works in two node clusters; for n-node clusters, the newer pacemaker project should be used. More information on Heartbeat, pacemaker, corosync, and other clustering packages can be found at http://www.linux-ha.org/wiki/Main_Page.
Managing cluster configuration is one area where exported resources are useful. Each node in a cluster would export information about itself, which could then be collected by the other members of the cluster. Using the puppetlabs-concat module, you can build up a configuration file using exported concat fragments from all the nodes in the cluster.
Remember to look at the Forge before starting your own module. If nothing else, you'll get some ideas that you can use in your own module. Corosync can be managed with the Puppet labs module at https://forge.puppetlabs.com/puppetlabs/corosync.
cookbook
by default. If something happens to interfere with this (for example, if you halt or reboot cookbook
, or stop the heartbeat
service, or the machine loses network connectivity), cookbook2
will immediately take over the virtual IP.
auto_failback
setting
in ha.cf
governs what happens next. If auto_failback
is set to on
, when cookbook
becomes available once more, it will automatically take over the IP address. Without auto_failback
, the IP will stay where it is until you manually fail it again (by stopping heartbeart
on cookbook2
, for example).
One common use for a Heartbeat-managed virtual IP is to provide a highly available website or service. To do this, you need to set the DNS name for the service (for example, cat-pictures.com) to point to the virtual IP. Requests for the service will be routed to whichever of the two servers currently has the virtual IP. If this server should go down, requests will go to the other, with no visible interruption in service to users.
Heartbeat works great for the previous example but is not in widespread use in this form. Heartbeat only works in two-node clusters; for n-node clusters, the newer pacemaker project should be used. More information on Heartbeat, pacemaker, corosync, and other clustering packages can be found at http://www.linux-ha.org/wiki/Main_Page.
Managing cluster configuration is one area where exported resources are useful. Each node in a cluster would export information about itself, which could then be collected by the other members of the cluster. Using the puppetlabs-concat module, you can build up a configuration file using exported concat fragments from all the nodes in the cluster.
Remember to look at the Forge before starting your own module. If nothing else, you'll get some ideas that you can use in your own module. Corosync can be managed with the Puppet Labs module at https://forge.puppetlabs.com/puppetlabs/corosync.
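The exported-fragment pattern described above boils down to this: each node exports an (order, content) fragment, and the collecting node sorts and concatenates them into one file. A rough Python model (not the puppetlabs-concat implementation; the fragment data is made up):

```python
def assemble(fragments):
    """Join concat-style fragments into one file body, sorted by order key."""
    return ''.join(f['content'] for f in sorted(fragments, key=lambda f: f['order']))

# Fragments as they might arrive from PuppetDB, in arbitrary order.
frags = [
    {'order': '0010', 'content': 'node2.example.com\n'},
    {'order': '0001', 'content': '# cluster members\n'},
    {'order': '0005', 'content': 'node1.example.com\n'},
]

# Header fragment first, then member entries in order.
assert assemble(frags) == '# cluster members\nnode1.example.com\nnode2.example.com\n'
```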
Managing NFS servers and file shares
NFS (Network File System) is a protocol to mount a shared directory from a remote server. For example, a pool of web servers might all mount the same NFS share to serve static assets such as images and stylesheets. Although NFS is generally slower and less secure than local storage or a clustered filesystem, the ease with which it can be used makes it a common choice in the datacenter. We'll use our myfw module from before to ensure the local firewall permits nfs communication. We'll also use the puppetlabs-concat module to edit the list of exported filesystems on our nfs server.
How to do it...
In this example, we'll configure an nfs server to share (export) a filesystem via NFS.
- Create an nfs module with the following nfs::exports class, which defines a concat resource:

  class nfs::exports {
    exec {'nfs::exportfs':
      command     => 'exportfs -a',
      refreshonly => true,
      path        => '/usr/bin:/bin:/sbin:/usr/sbin',
    }
    concat {'/etc/exports':
      notify => Exec['nfs::exportfs'],
    }
  }
- Create the nfs::export defined type; we'll use this definition for any nfs exports we create:

  define nfs::export (
    $where = $title,
    $who = '*',
    $options = 'async,ro',
    $mount_options = 'defaults',
    $tag = 'nfs'
  ) {
    # make sure the directory exists
    # export the entry locally, then export a resource to be picked up later.
    file {"$where":
      ensure => 'directory',
    }
    include nfs::exports
    concat::fragment { "nfs::export::$where":
      content => "${where} ${who}(${options})\n",
      target  => '/etc/exports',
    }
    @@mount { "nfs::export::${where}::${::ipaddress}":
      name    => "$where",
      ensure  => 'mounted',
      fstype  => 'nfs',
      options => "$mount_options",
      device  => "${::ipaddress}:${where}",
      tag     => "$tag",
    }
  }
- Now create the nfs::server class, which will include the OS-specific configuration for the server:

  class nfs::server {
    # ensure nfs server is running
    # firewall should allow nfs communication
    include nfs::exports
    case $::osfamily {
      'RedHat': { include nfs::server::redhat }
      'Debian': { include nfs::server::debian }
    }
    include myfw
    firewall {'2049 NFS TCP communication':
      proto  => 'tcp',
      port   => '2049',
      action => 'accept',
    }
    firewall {'2049 UDP NFS communication':
      proto  => 'udp',
      port   => '2049',
      action => 'accept',
    }
    firewall {'0111 TCP PORTMAP':
      proto  => 'tcp',
      port   => '111',
      action => 'accept',
    }
    firewall {'0111 UDP PORTMAP':
      proto  => 'udp',
      port   => '111',
      action => 'accept',
    }
    firewall {'4000 TCP STAT':
      proto  => 'tcp',
      port   => '4000-4010',
      action => 'accept',
    }
    firewall {'4000 UDP STAT':
      proto  => 'udp',
      port   => '4000-4010',
      action => 'accept',
    }
  }
- Next, create the nfs::server::redhat class:

  class nfs::server::redhat {
    package {'nfs-utils':
      ensure => 'installed',
    }
    service {'nfs':
      ensure => 'running',
      enable => true,
    }
    file {'/etc/sysconfig/nfs':
      source => 'puppet:///modules/nfs/nfs',
      mode   => '0644',
      notify => Service['nfs'],
    }
  }
- Create the /etc/sysconfig/nfs support file for RedHat systems in the files directory of our nfs module (modules/nfs/files/nfs):

  STATD_PORT=4000
  STATD_OUTGOING_PORT=4001
  RQUOTAD_PORT=4002
  LOCKD_TCPPORT=4003
  LOCKD_UDPPORT=4003
  MOUNTD_PORT=4004
- Now create the support class for Debian systems, nfs::server::debian:

  class nfs::server::debian {
    # install the package
    package {'nfs':
      name   => 'nfs-kernel-server',
      ensure => 'installed',
    }
    # config
    file {'/etc/default/nfs-common':
      source => 'puppet:///modules/nfs/nfs-common',
      mode   => '0644',
      notify => Service['nfs-common'],
    }
    # services
    service {'nfs-common':
      ensure => 'running',
      enable => true,
    }
    service {'nfs':
      name    => 'nfs-kernel-server',
      ensure  => 'running',
      enable  => true,
      require => Package['nfs-kernel-server'],
    }
  }
- Create the nfs-common configuration for Debian (which will be placed in modules/nfs/files/nfs-common):

  STATDOPTS="--port 4000 --outgoing-port 4001"
- Apply the nfs::server class to a node and then create an export on that node:

  node debian {
    include nfs::server
    nfs::export {'/srv/home':
      tag => 'srv_home',
    }
  }
- Create a collector for the resource exported by the nfs::export defined type in the preceding code snippet:

  node cookbook {
    Mount <<| tag == "srv_home" |>> {
      name => '/mnt',
    }
  }
- Finally, run Puppet on the debian node to create the exported resource. Then, run Puppet on the cookbook node to mount that resource:

  root@debian:~# puppet agent -t
  Info: Caching catalog for debian.example.com
  Info: Applying configuration version '1415602532'
  Notice: Finished catalog run in 0.78 seconds
  [root@cookbook ~]# puppet agent -t
  Info: Caching catalog for cookbook.example.com
  Info: Applying configuration version '1415603580'
  Notice: /Stage[main]/Main/Node[cookbook]/Mount[nfs::export::/srv/home::192.168.122.148]/ensure: ensure changed 'ghost' to 'mounted'
  Info: Computing checksum on file /etc/fstab
  Info: /Stage[main]/Main/Node[cookbook]/Mount[nfs::export::/srv/home::192.168.122.148]: Scheduling refresh of Mount[nfs::export::/srv/home::192.168.122.148]
  Info: Mount[nfs::export::/srv/home::192.168.122.148](provider=parsed): Remounting
  Notice: /Stage[main]/Main/Node[cookbook]/Mount[nfs::export::/srv/home::192.168.122.148]: Triggered 'refresh' from 1 events
  Info: /Stage[main]/Main/Node[cookbook]/Mount[nfs::export::/srv/home::192.168.122.148]: Scheduling refresh of Mount[nfs::export::/srv/home::192.168.122.148]
  Notice: Finished catalog run in 0.34 seconds
- Verify the mount with mount:

  [root@cookbook ~]# mount -t nfs
  192.168.122.148:/srv/home on /mnt type nfs (rw)
How it works…
The nfs::exports class defines an exec, which runs exportfs -a to export all filesystems defined in /etc/exports. Next, we define a concat resource to contain concat::fragment resources, which we will define next in our nfs::export defined type. Concat resources specify the file that the fragments are to be placed into, /etc/exports in this case. Our concat resource has a notify for the previous exec. This has the effect that whenever /etc/exports is updated, we run exportfs -a again to export the new entries:
class nfs::exports {
  exec {'nfs::exportfs':
    command     => 'exportfs -a',
    refreshonly => true,
    path        => '/usr/bin:/bin:/sbin:/usr/sbin',
  }
  concat {'/etc/exports':
    notify => Exec['nfs::exportfs'],
  }
}
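This notify/refreshonly pairing is worth internalizing: the exec runs only when the file content actually changes. A toy Python model of that relationship (an illustration of the behavior, not how Puppet is implemented):

```python
class ExportsFile:
    """Model of a managed /etc/exports with a refreshonly exec attached."""

    def __init__(self):
        self.content = ''
        self.exportfs_runs = 0  # times 'exportfs -a' would have run

    def apply(self, new_content):
        """Write the file; trigger the refreshonly exec only on change."""
        if new_content != self.content:
            self.content = new_content
            self.exportfs_runs += 1  # notify => Exec['nfs::exportfs']

f = ExportsFile()
f.apply('/srv/home *(async,ro)\n')   # content changed -> exportfs runs
f.apply('/srv/home *(async,ro)\n')   # no change -> exec is not refreshed
assert f.exportfs_runs == 1
f.apply('/srv/home *(rw,sync)\n')    # changed again -> runs once more
assert f.exportfs_runs == 2
```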
We then created an nfs::export defined type, which does all the work. The defined type adds an entry to /etc/exports via a concat::fragment resource:
define nfs::export (
  $where = $title,
  $who = '*',
  $options = 'async,ro',
  $mount_options = 'defaults',
  $tag = 'nfs'
) {
  # make sure the directory exists
  # export the entry locally, then export a resource to be picked up later.
  file {"$where":
    ensure => 'directory',
  }
  include nfs::exports
  concat::fragment { "nfs::export::$where":
    content => "${where} ${who}(${options})\n",
    target  => '/etc/exports',
  }
In the definition, we use the attribute $where to define what filesystem we are exporting. We use $who to specify who can mount the filesystem. The attribute $options contains the exporting options, such as rw (read-write) and ro (read-only). Next, we have the options that will be placed in /etc/fstab on the client machine; these mount options are stored in $mount_options. The nfs::exports class is included here so that concat::fragment has a concat target defined.
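The fragment content is plain string interpolation, so it's easy to check the /etc/exports line format it produces. A quick Python sketch using the defined type's defaults:

```python
def exports_line(where, who='*', options='async,ro'):
    """Render one /etc/exports entry the way nfs::export's fragment does."""
    return "{} {}({})\n".format(where, who, options)

# With the defaults, exporting /srv/home yields:
assert exports_line('/srv/home') == '/srv/home *(async,ro)\n'

# Restricting the client network and allowing writes (hypothetical values):
assert exports_line('/srv/data', who='192.168.122.0/24', options='rw,sync') == \
    '/srv/data 192.168.122.0/24(rw,sync)\n'
```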
Next, the exported mount resource is created; this is done on the server, so the ${::ipaddress} variable holds the IP address of the server. We use this to define the device for the mount. The device is the IP address of the server, a colon, and then the filesystem being exported. In this example, it is 192.168.122.148:/srv/home:
@@mount { "nfs::export::${where}::${::ipaddress}":
  name    => "$where",
  ensure  => 'mounted',
  fstype  => 'nfs',
  options => "$mount_options",
  device  => "${::ipaddress}:${where}",
  tag     => "$tag",
}
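On the collecting node, Puppet's mount provider writes an /etc/fstab entry from these attributes. As a rough sketch of how the pieces map across (the tab-separated field layout is an assumption of a typical fstab line, not the provider's exact output):

```python
def fstab_line(server_ip, where, mountpoint, mount_options='defaults'):
    """Build the fstab-style entry a collected nfs mount resource produces."""
    device = "{}:{}".format(server_ip, where)  # e.g. 192.168.122.148:/srv/home
    return "{}\t{}\tnfs\t{}\t0 0".format(device, mountpoint, mount_options)

# The example export, collected on the cookbook node and mounted at /mnt:
line = fstab_line('192.168.122.148', '/srv/home', '/mnt')
assert line == '192.168.122.148:/srv/home\t/mnt\tnfs\tdefaults\t0 0'
```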
We reuse our myfw module and include it in the nfs::server class. This class illustrates one of the things to consider when writing your modules. Not all Linux distributions are created equal. Debian and RedHat deal with NFS server configuration quite differently. The nfs::server module deals with this by including OS-specific subclasses:
class nfs::server {
  # ensure nfs server is running
  # firewall should allow nfs communication
  include nfs::exports
  case $::osfamily {
    'RedHat': { include nfs::server::redhat }
    'Debian': { include nfs::server::debian }
  }
  include myfw
  firewall {'2049 NFS TCP communication':
    proto  => 'tcp',
    port   => '2049',
    action => 'accept',
  }
  firewall {'2049 UDP NFS communication':
    proto  => 'udp',
    port   => '2049',
    action => 'accept',
  }
  firewall {'0111 TCP PORTMAP':
    proto  => 'tcp',
    port   => '111',
    action => 'accept',
  }
  firewall {'0111 UDP PORTMAP':
    proto  => 'udp',
    port   => '111',
    action => 'accept',
  }
  firewall {'4000 TCP STAT':
    proto  => 'tcp',
    port   => '4000-4010',
    action => 'accept',
  }
  firewall {'4000 UDP STAT':
    proto  => 'udp',
    port   => '4000-4010',
    action => 'accept',
  }
}
The nfs::server class opens several firewall ports for NFS communication. NFS traffic is always carried over port 2049, but ancillary systems, such as locking, quota, and file status daemons, use ephemeral ports chosen by the portmapper by default. The portmapper itself uses port 111, so our module needs to allow 2049, 111, and a few other ports. We attempt to configure the ancillary services to use ports 4000 through 4010.
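The port plan is small enough to sanity-check in a few lines. A Python sketch mirroring the six firewall resources above (the purpose labels are informal annotations, not part of the module):

```python
# (port_or_range, protocols, purpose) for the NFS server firewall plan
nfs_ports = [
    ('2049',      ('tcp', 'udp'), 'nfsd'),
    ('111',       ('tcp', 'udp'), 'portmapper'),
    ('4000-4010', ('tcp', 'udp'), 'statd/lockd/mountd/rquotad'),
]

def rule_count(ports):
    """One firewall resource per (port, protocol) pair."""
    return sum(len(protos) for _, protos, _ in ports)

# Matches the six firewall resources declared in nfs::server.
assert rule_count(nfs_ports) == 6
```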
In the nfs::server::redhat class, we modify /etc/sysconfig/nfs to use the ports specified. Also, we install the nfs-utils package and start the nfs service:
class nfs::server::redhat {
  package {'nfs-utils':
    ensure => 'installed',
  }
  service {'nfs':
    ensure => 'running',
    enable => true,
  }
  file {'/etc/sysconfig/nfs':
    source => 'puppet:///modules/nfs/nfs',
    mode   => '0644',
    notify => Service['nfs'],
  }
}
We do the same for Debian-based systems in the nfs::server::debian class. The packages and services have different names, but overall the process is similar:
class nfs::server::debian {
  # install the package
  package {'nfs':
    name   => 'nfs-kernel-server',
    ensure => 'installed',
  }
  # config
  file {'/etc/default/nfs-common':
    source => 'puppet:///modules/nfs/nfs-common',
    mode   => '0644',
    notify => Service['nfs-common'],
  }
  # services
  service {'nfs-common':
    ensure => 'running',
    enable => true,
  }
  service {'nfs':
    name    => 'nfs-kernel-server',
    ensure  => 'running',
    enable  => true,
    require => Package['nfs-kernel-server'],
  }
}
With everything in place, we include the server class to configure the NFS server and then define an export:

include nfs::server
nfs::export {'/srv/home':
  tag => 'srv_home',
}
What's important here is that we defined the tag attribute, which will be used in the exported resource we collect in the following code snippet:

Mount <<| tag == "srv_home" |>> {
  name => '/mnt',
}
We use the spaceship syntax (<<| |>>) to collect all the exported mount resources that have the tag we defined earlier (srv_home). We then use a syntax called "override on collect" to modify the name attribute of the mount to specify where to mount the filesystem.
Using this design pattern with exported resources, we can change the server exporting the filesystem and have any nodes that mount the resource updated automatically. We can have many different nodes collecting the exported mount resource.
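The collect-and-override step can be pictured as filter-then-merge over the exported resources (a simplified model of what the compiler does; the resource data below is hypothetical):

```python
def collect(exported, tag, overrides):
    """Select exported resources by tag and apply attribute overrides."""
    return [dict(r, **overrides) for r in exported if r['tag'] == tag]

exported = [
    {'title': 'nfs::export::/srv/home::192.168.122.148',
     'device': '192.168.122.148:/srv/home', 'name': '/srv/home', 'tag': 'srv_home'},
    {'title': 'nfs::export::/srv/other::192.168.122.99',
     'device': '192.168.122.99:/srv/other', 'name': '/srv/other', 'tag': 'other'},
]

# Mount <<| tag == "srv_home" |>> { name => '/mnt' }
mounts = collect(exported, 'srv_home', {'name': '/mnt'})
assert len(mounts) == 1
assert mounts[0]['name'] == '/mnt'                          # overridden on collect
assert mounts[0]['device'] == '192.168.122.148:/srv/home'   # untouched
```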
Using HAProxy to load-balance multiple web servers
Load balancers are used to spread a load among a number of servers. Hardware load balancers are still somewhat expensive, whereas software balancers can achieve most of the benefits of a hardware solution.
HAProxy is the software load balancer of choice for most people: fast, powerful, and highly configurable.
How to do it…
In this recipe, I'll show you how to build an HAProxy server to load-balance web requests across web servers. We'll use exported resources to build the haproxy configuration file, just like we did for the NFS example.
- Create the file modules/haproxy/manifests/master.pp with the following contents:

  class haproxy::master ($app = 'myapp') {
    # The HAProxy master server
    # will collect haproxy::slave resources and add to its balancer
    package { 'haproxy': ensure => installed }
    service { 'haproxy':
      ensure  => running,
      enable  => true,
      require => Package['haproxy'],
    }
    include haproxy::config
    concat::fragment { 'haproxy.cfg header':
      target  => 'haproxy.cfg',
      source  => 'puppet:///modules/haproxy/haproxy.cfg',
      order   => '001',
      require => Package['haproxy'],
      notify  => Service['haproxy'],
    }
    # pull in the exported entries
    Concat::Fragment <<| tag == "$app" |>> {
      target => 'haproxy.cfg',
      notify => Service['haproxy'],
    }
  }
- Create the file modules/haproxy/files/haproxy.cfg with the following contents:

  global
      daemon
      user haproxy
      group haproxy
      pidfile /var/run/haproxy.pid
  defaults
      log global
      stats enable
      mode http
      option httplog
      option dontlognull
      option dontlog-normal
      retries 3
      option redispatch
      timeout connect 4000
      timeout client 60000
      timeout server 30000

  listen stats :8080
      mode http
      stats uri /
      stats auth haproxy:topsecret

  listen myapp 0.0.0.0:80
      balance leastconn
- Modify your manifests/nodes.pp file as follows:

  node 'cookbook' {
    include haproxy
  }
- Create the slave server configuration in the haproxy::slave class:

  class haproxy::slave ($app = "myapp", $localport = 8000) {
    # haproxy slave, export haproxy.cfg fragment
    # configure simple web server on different port
    @@concat::fragment { "haproxy.cfg $::fqdn":
      content => "\t\tserver ${::hostname} ${::ipaddress}:${localport} check maxconn 100\n",
      order   => '0010',
      tag     => "$app",
    }
    include myfw
    firewall {"${localport} Allow HTTP to haproxy::slave":
      proto  => 'tcp',
      port   => $localport,
      action => 'accept',
    }
    class {'apache': }
    apache::vhost { 'haproxy.example.com':
      port    => '8000',
      docroot => '/var/www/haproxy',
    }
    file {'/var/www/haproxy':
      ensure  => 'directory',
      mode    => '0755',
      require => Class['apache'],
    }
    file {'/var/www/haproxy/index.html':
      mode    => '0644',
      content => "<html><body><h1>${::fqdn} haproxy::slave</h1>\n</body></html>\n",
      require => File['/var/www/haproxy'],
    }
  }
- Create the concat container resource in the haproxy::config class as follows:

  class haproxy::config {
    concat {'haproxy.cfg':
      path  => '/etc/haproxy/haproxy.cfg',
      order => 'numeric',
      mode  => '0644',
    }
  }
- Modify site.pp to define the master and slave nodes:

  node master {
    class {'haproxy::master':
      app => 'cookbook',
    }
  }
  node slave1,slave2 {
    class {'haproxy::slave':
      app => 'cookbook',
    }
  }
- Run Puppet on each of the slave servers:

  root@slave1:~# puppet agent -t
  Info: Caching catalog for slave1
  Info: Applying configuration version '1415646194'
  Notice: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf]/ensure: created
  Info: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf]: Scheduling refresh of Service[httpd]
  Notice: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf symlink]/ensure: created
  Info: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf symlink]: Scheduling refresh of Service[httpd]
  Notice: /Stage[main]/Apache::Service/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
  Info: /Stage[main]/Apache::Service/Service[httpd]: Unscheduling refresh on Service[httpd]
  Notice: Finished catalog run in 1.71 seconds
- Run Puppet on the master node to configure and run
haproxy
:[root@master ~]# puppet agent -t Info: Caching catalog for master.example.com Info: Applying configuration version '1415647075' Notice: /Stage[main]/Haproxy::Master/Package[haproxy]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0000 Allow all traffic on loopback]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0001 Allow all ICMP]/ensure: created Notice: /Stage[main]/Haproxy::Master/Firewall[8080 haproxy statistics]/ensure: created Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u' Notice: /Stage[main]/Myfw::Pre/Firewall[0022 Allow all TCP on port 22 (ssh)]/ensure: created Notice: /Stage[main]/Haproxy::Master/Firewall[0080 http haproxy]/ensure: created Notice: /Stage[main]/Myfw::Pre/Firewall[0002 Allow all established traffic]/ensure: created Notice: /Stage[main]/Myfw::Post/Firewall[9999 Drop all other traffic]/ensure: created Notice: /Stage[main]/Haproxy::Config/Concat[haproxy.cfg]/File[haproxy.cfg]/content: ... +listen myapp 0.0.0.0:80 + balance leastconn + server slave1 192.168.122.148:8000 check maxconn 100 + server slave2 192.168.122.133:8000 check maxconn 100 Info: Computing checksum on file /etc/haproxy/haproxy.cfg Info: /Stage[main]/Haproxy::Config/Concat[haproxy.cfg]/File[haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum 1f337186b0e1ba5ee82760cb437fb810 Notice: /Stage[main]/Haproxy::Config/Concat[haproxy.cfg]/File[haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}b070f076e1e691e053d6853f7d966394' Notice: /Stage[main]/Haproxy::Master/Service[haproxy]/ensure: ensure changed 'stopped' to 'running' Info: /Stage[main]/Haproxy::Master/Service[haproxy]: Unscheduling refresh on Service[haproxy] Notice: Finished catalog run in 33.48 seconds
- Check the HAProxy stats interface on master port
8080
in your web browser (http://master.example.com:8080
) to make sure everything is okay (The username and password are inhaproxy.cfg
,haproxy
, andtopsecret
). Try going to the proxied service as well. Notice that the page changes on each reload as the service is redirected from slave1 to slave2 (http://master.example.com
).
How it works…
We built a complex configuration from various components of the previous sections. This type of deployment becomes easier the more you do it. At a top level, we configured the master to collect exported resources from slaves. The slaves exported their configuration information to allow haproxy to use them in the load balancer. As slaves are added to the system, they can export their resources and be added to the balancer automatically.
We used our myfw
module to configure the firewall on the slaves and the master to allow communication.
We used the Forge Apache module to configure the listening web server on the slaves. We were able to generate a fully functioning website with five lines of code (10 more to place index.html
on the website).
There are several things going on here. We have the firewall configuration and the Apache configuration in addition to the haproxy
configuration. We'll focus on how the exported resources and the haproxy
configuration fit together.
In the haproxy::config
class, we created the concat container for the haproxy
configuration:
class haproxy::config {
concat {'haproxy.cfg':
path => '/etc/haproxy/haproxy.cfg',
order => 'numeric',
mode => 0644,
}
}
We reference this in haproxy::slave
:
class haproxy::slave ($app = "myapp", $localport = 8000) {
# haproxy slave, export haproxy.cfg fragment
# configure simple web server on different port
@@concat::fragment { "haproxy.cfg $::fqdn":
content => "\t\tserver ${::hostname} ${::ipaddress}:${localport} check maxconn 100\n",
order => '0010',
tag => "$app",
}
We are doing a little trick here with concat; we don't define the target in the exported resource. If we did, the slaves would try and create a /etc/haproxy/haproxy.cfg
file, but the slaves do not have haproxy
installed so we would get catalog failures. What we do is modify the resource when we collect it in haproxy::master
:
# pull in the exported entries
Concat::Fragment <<| tag == "$app" |>> {
target => 'haproxy.cfg',
notify => Service['haproxy'],
}
In addition to adding the target when we collect the resource, we also add a notify so that the haproxy
service is restarted when we add a new host to the configuration. Another important point here is that we set the order attribute of the slave configurations to 0010, when we define the header for the haproxy.cfg
file; we use an order value of 0001 to ensure that the header is placed at the beginning of the file:
concat::fragment { 'haproxy.cfg header':
target => 'haproxy.cfg',
source => 'puppet:///modules/haproxy/haproxy.cfg',
order => '001',
require => Package['haproxy'],
notify => Service['haproxy'],
}
The rest of the haproxy::master
class is concerned with configuring the firewall as we did in previous examples.
There's more...
HAProxy has a vast range of configuration parameters, which you can explore; see the HAProxy website at http://haproxy.1wt.eu/#docs.
Although it's most often used as a web server, HAProxy can proxy a lot more than just HTTP. It can handle any kind of TCP traffic, so you can use it to balance the load of MySQL servers, SMTP, video servers, or anything you like.
You can use the design we showed to attack many problems of coordination of services between multiple servers. This type of interaction is very common; you can apply it to many configurations for load balancing or distributed systems. You can use the same workflow described previously to have nodes export firewall resources (@@firewall
) to permit their own access.
Using HAProxy to load-balance multiple web servers

HAProxy is a TCP/HTTP load balancer: it sits in front of a pool of backend servers and distributes incoming requests among them. This recipe will show you how to build an HAProxy server to load-balance web requests across web servers. We'll use exported resources to build the haproxy configuration file, just as we did for the NFS example.

How to do it...
- Create the file modules/haproxy/manifests/master.pp with the following contents:
class haproxy::master ($app = 'myapp') {
  # The HAProxy master server
  # will collect haproxy::slave resources and add to its balancer
  package { 'haproxy': ensure => installed }
  service { 'haproxy':
    ensure  => running,
    enable  => true,
    require => Package['haproxy'],
  }
  include haproxy::config
  concat::fragment { 'haproxy.cfg header':
    target  => 'haproxy.cfg',
    source  => 'puppet:///modules/haproxy/haproxy.cfg',
    order   => '001',
    require => Package['haproxy'],
    notify  => Service['haproxy'],
  }
  # pull in the exported entries
  Concat::Fragment <<| tag == "$app" |>> {
    target => 'haproxy.cfg',
    notify => Service['haproxy'],
  }
}
- Create the file modules/haproxy/files/haproxy.cfg with the following contents:
global
    daemon
    user haproxy
    group haproxy
    pidfile /var/run/haproxy.pid

defaults
    log global
    stats enable
    mode http
    option httplog
    option dontlognull
    option dontlog-normal
    retries 3
    option redispatch
    timeout connect 4000
    timeout client 60000
    timeout server 30000

listen stats :8080
    mode http
    stats uri /
    stats auth haproxy:topsecret

listen myapp 0.0.0.0:80
    balance leastconn
- Create the slave server configuration in the haproxy::slave class (modules/haproxy/manifests/slave.pp):
class haproxy::slave ($app = "myapp", $localport = 8000) {
  # haproxy slave, export haproxy.cfg fragment
  # configure simple web server on different port
  @@concat::fragment { "haproxy.cfg $::fqdn":
    content => "\t\tserver ${::hostname} ${::ipaddress}:${localport} check maxconn 100\n",
    order   => '0010',
    tag     => "$app",
  }
  include myfw
  firewall {"${localport} Allow HTTP to haproxy::slave":
    proto  => 'tcp',
    port   => $localport,
    action => 'accept',
  }
  class {'apache': }
  apache::vhost { 'haproxy.example.com':
    port    => '8000',
    docroot => '/var/www/haproxy',
  }
  file {'/var/www/haproxy':
    ensure  => 'directory',
    mode    => '0755',
    require => Class['apache'],
  }
  file {'/var/www/haproxy/index.html':
    mode    => '0644',
    content => "<html><body><h1>${::fqdn} haproxy::slave</h1></body></html>\n",
    require => File['/var/www/haproxy'],
  }
}
- Create the concat container resource in the haproxy::config class (modules/haproxy/manifests/config.pp) as follows:
class haproxy::config {
  concat {'haproxy.cfg':
    path  => '/etc/haproxy/haproxy.cfg',
    order => 'numeric',
    mode  => '0644',
  }
}
- Modify site.pp to define the master and slave nodes:
node 'master' {
  class {'haproxy::master':
    app => 'cookbook',
  }
}
node 'slave1', 'slave2' {
  class {'haproxy::slave':
    app => 'cookbook',
  }
}
- Run Puppet on each of the slave servers:
root@slave1:~# puppet agent -t
Info: Caching catalog for slave1
Info: Applying configuration version '1415646194'
Notice: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf]/ensure: created
Info: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf]: Scheduling refresh of Service[httpd]
Notice: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf symlink]/ensure: created
Info: /Stage[main]/Haproxy::Slave/Apache::Vhost[haproxy.example.com]/File[25-haproxy.example.com.conf symlink]: Scheduling refresh of Service[httpd]
Notice: /Stage[main]/Apache::Service/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Apache::Service/Service[httpd]: Unscheduling refresh on Service[httpd]
Notice: Finished catalog run in 1.71 seconds
- Run Puppet on the master node to configure and run haproxy:
[root@master ~]# puppet agent -t
Info: Caching catalog for master.example.com
Info: Applying configuration version '1415647075'
Notice: /Stage[main]/Haproxy::Master/Package[haproxy]/ensure: created
Notice: /Stage[main]/Myfw::Pre/Firewall[0000 Allow all traffic on loopback]/ensure: created
Notice: /Stage[main]/Myfw::Pre/Firewall[0001 Allow all ICMP]/ensure: created
Notice: /Stage[main]/Haproxy::Master/Firewall[8080 haproxy statistics]/ensure: created
Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'
Notice: /Stage[main]/Myfw::Pre/Firewall[0022 Allow all TCP on port 22 (ssh)]/ensure: created
Notice: /Stage[main]/Haproxy::Master/Firewall[0080 http haproxy]/ensure: created
Notice: /Stage[main]/Myfw::Pre/Firewall[0002 Allow all established traffic]/ensure: created
Notice: /Stage[main]/Myfw::Post/Firewall[9999 Drop all other traffic]/ensure: created
Notice: /Stage[main]/Haproxy::Config/Concat[haproxy.cfg]/File[haproxy.cfg]/content:
...
+listen myapp 0.0.0.0:80
+ balance leastconn
+ server slave1 192.168.122.148:8000 check maxconn 100
+ server slave2 192.168.122.133:8000 check maxconn 100
Info: Computing checksum on file /etc/haproxy/haproxy.cfg
Info: /Stage[main]/Haproxy::Config/Concat[haproxy.cfg]/File[haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum 1f337186b0e1ba5ee82760cb437fb810
Notice: /Stage[main]/Haproxy::Config/Concat[haproxy.cfg]/File[haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}b070f076e1e691e053d6853f7d966394'
Notice: /Stage[main]/Haproxy::Master/Service[haproxy]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Haproxy::Master/Service[haproxy]: Unscheduling refresh on Service[haproxy]
Notice: Finished catalog run in 33.48 seconds
- Check the HAProxy stats interface on the master's port 8080 in your web browser (http://master.example.com:8080) to make sure everything is okay. (The username and password are set in haproxy.cfg: haproxy and topsecret.) Try the proxied service as well; notice that the page changes on each reload as requests are redirected from slave1 to slave2 (http://master.example.com).
How it works…
We built a complex configuration from various components of the previous sections. This type of deployment becomes easier the more you do it. At a top level, we configured the master to collect exported resources from slaves. The slaves exported their configuration information to allow haproxy to use them in the load balancer. As slaves are added to the system, they can export their resources and be added to the balancer automatically.
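To make that last point concrete, here is a sketch of what adding a third backend would look like (slave3 is hypothetical and not part of the recipe): only a node definition is needed. After slave3 runs Puppet and exports its fragment, the master's next run collects it into haproxy.cfg and notifies Service['haproxy'].

```puppet
# Hypothetical extra backend: no changes are needed on the master.
node 'slave3' {
  class {'haproxy::slave':
    app => 'cookbook',
  }
}
```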
We used our myfw
module to configure the firewall on the slaves and the master to allow communication.
We used the Forge Apache module to configure the listening web server on the slaves. We were able to generate a fully functioning website with five lines of code (10 more to place index.html
on the website).
There are several things going on here. We have the firewall configuration and the Apache configuration in addition to the haproxy
configuration. We'll focus on how the exported resources and the haproxy
configuration fit together.
In the haproxy::config class, we created the concat container for the haproxy configuration:
class haproxy::config {
  concat {'haproxy.cfg':
    path  => '/etc/haproxy/haproxy.cfg',
    order => 'numeric',
    mode  => '0644',
  }
}
We reference this in haproxy::slave:
class haproxy::slave ($app = "myapp", $localport = 8000) {
  # haproxy slave, export haproxy.cfg fragment
  # configure simple web server on different port
  @@concat::fragment { "haproxy.cfg $::fqdn":
    content => "\t\tserver ${::hostname} ${::ipaddress}:${localport} check maxconn 100\n",
    order   => '0010',
    tag     => "$app",
  }
We are doing a little trick here with concat: we don't define the target in the exported resource. If we did, the slaves would try to create the /etc/haproxy/haproxy.cfg file, but the slaves do not have haproxy installed, so we would get catalog failures. Instead, we modify the resource when we collect it in haproxy::master:
# pull in the exported entries
Concat::Fragment <<| tag == "$app" |>> {
  target => 'haproxy.cfg',
  notify => Service['haproxy'],
}
In addition to adding the target when we collect the resource, we also add a notify so that the haproxy service is restarted whenever a new host is added to the configuration. Another important point is ordering: we set the order attribute of the slave configurations to 0010, and when we define the header for the haproxy.cfg file, we use an order value of 001. Because the concat container uses order => 'numeric', 1 sorts before 10, which ensures that the header is placed at the beginning of the file:
concat::fragment { 'haproxy.cfg header':
  target  => 'haproxy.cfg',
  source  => 'puppet:///modules/haproxy/haproxy.cfg',
  order   => '001',
  require => Package['haproxy'],
  notify  => Service['haproxy'],
}
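To see the numeric ordering at work, consider a hypothetical footer fragment (not part of the recipe, shown only to illustrate the sort): with order => 'numeric', '001' (header) sorts before '0010' (slaves), and a large value such as '9999' always lands at the end of the file.

```puppet
# Hypothetical footer fragment, illustrating numeric ordering: 1 < 10 < 9999.
concat::fragment { 'haproxy.cfg footer':
  target  => 'haproxy.cfg',
  content => "# end of Puppet-managed haproxy.cfg\n",
  order   => '9999',
}
```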
The rest of the haproxy::master
class is concerned with configuring the firewall as we did in previous examples.
There's more...
HAProxy has a vast range of configuration parameters that you can explore; see the HAProxy documentation at http://haproxy.1wt.eu/#docs.
Although it's most often used in front of web servers, HAProxy can proxy much more than HTTP. It can handle any kind of TCP traffic, so you can use it to balance the load of MySQL servers, SMTP, video servers, or anything you like.
You can use the design we showed here to tackle many problems of coordinating services between multiple servers. This type of interaction is very common; you can apply it to many load-balancing or distributed-system configurations. You can use the same workflow described previously to have nodes export firewall resources (@@firewall) to permit their own access.
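Sketching that last idea (the rule title, port, and tag here are illustrative assumptions, not part of the recipe): each node could export a firewall rule permitting its own address, which the server then collects, so only registered nodes may reach the service.

```puppet
# On each client node: export a rule permitting this host (illustrative only).
@@firewall { "0100 Allow ${::hostname} to haproxy":
  proto  => 'tcp',
  port   => '80',
  source => $::ipaddress,
  action => 'accept',
  tag    => 'haproxy_clients',
}

# On the server: collect all the exported rules.
Firewall <<| tag == 'haproxy_clients' |>>
```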
Managing Docker with Puppet
Docker is a platform for the rapid deployment of containers. A container is like a lightweight virtual machine that might run only a single process. Containers are started from images, which are built using files called Dockerfiles. Puppet can be used not only to configure a node to run Docker, but also to configure and start several containers on it. You can then use Puppet to ensure that your containers are running and consistently configured.
Getting ready
Download and install the Puppet Docker module from the Forge (https://forge.puppetlabs.com/garethr/docker):
t@mylaptop ~ $ cd puppet
t@mylaptop ~/puppet $ puppet module install -i modules garethr-docker
Notice: Preparing to install into /home/thomas/puppet/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/home/thomas/puppet/modules
└─┬ garethr-docker (v3.3.0)
  ├── puppetlabs-apt (v1.7.0)
  ├── puppetlabs-stdlib (v4.3.2)
  └── stahnma-epel (v1.0.2)
Add these modules to your Puppet repository. The stahnma-epel
module is required for Enterprise Linux-based distributions; it contains the Extra Packages for Enterprise Linux YUM repository.
How to do it...
Perform the following steps to manage Docker with Puppet:
- To install Docker on a node, we just need to include the docker class. We'll do more than install Docker; we'll also download an image and start an application on our test node. In this example, we'll create a new machine called shipyard. Add the following node definition to site.pp:
node shipyard {
  class {'docker': }
  docker::image {'phusion/baseimage': }
  docker::run {'cookbook':
    image   => 'phusion/baseimage',
    expose  => '8080',
    ports   => '8080',
    command => 'nc -k -l 8080',
  }
}
- Run Puppet on your shipyard node to install Docker. This will also download the phusion/baseimage Docker image:
[root@shipyard ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for shipyard
Info: Applying configuration version '1421049252'
Notice: /Stage[main]/Epel/File[/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6]/ensure: defined content as '{md5}d865e6b948a74cb03bc3401c0b01b785'
Notice: /Stage[main]/Epel/Epel::Rpm_gpg_key[EPEL-6]/Exec[import-EPEL-6]/returns: executed successfully
...
Notice: /Stage[main]/Docker::Install/Package[docker]/ensure: created
...
Notice: /Stage[main]/Main/Node[shipyard]/Docker::Run[cookbook]/File[/etc/init.d/docker-cookbook]/ensure: created
Info: /Stage[main]/Main/Node[shipyard]/Docker::Run[cookbook]/File[/etc/init.d/docker-cookbook]: Scheduling refresh of Service[docker-cookbook]
Notice: /Stage[main]/Main/Node[shipyard]/Docker::Run[cookbook]/Service[docker-cookbook]: Triggered 'refresh' from 1 events
- Verify that your container is running on shipyard using docker ps:
[root@shipyard ~]# docker ps
CONTAINER ID  IMAGE                     COMMAND            CREATED             STATUS             PORTS                    NAMES
f6f5b799a598  phusion/baseimage:0.9.15  "/bin/nc -l 8080"  About a minute ago  Up About a minute  0.0.0.0:49157->8080/tcp  suspicious_hawking
- Verify that the container is running netcat on port 8080 by connecting to the mapped host port listed previously (49157):
[root@shipyard ~]# nc -v localhost 49157
Connection to localhost 49157 port [tcp/*] succeeded!
How it works...
We began by installing the docker module from the Forge. This module installs the docker-io
package on our node, along with any required dependencies.
We then defined a docker::image
resource. This instructs Puppet to ensure that the named image is downloaded and available to docker. On our first run, Puppet will make docker download the image. We used phusion/baseimage
as our example because it is quite small, well-known, and includes the netcat daemon we used in the example. More information on baseimage
can be found at http://phusion.github.io/baseimage-docker/.
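As a hedged sketch, the module also lets you pin a specific image tag rather than tracking the latest image; the image_tag parameter is taken from the garethr-docker module (check your module version), and 0.9.15 matches the tag visible in the docker ps output shown earlier.

```puppet
# Pin the image to a specific tag (image_tag is a garethr-docker
# parameter; verify against your installed module version).
docker::image {'phusion/baseimage':
  image_tag => '0.9.15',
}
```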
We then went on to define a docker::run
resource. This example isn't terribly useful; it simply starts netcat in listen mode on port 8080. We need to expose that port to our machine, so we define the expose attribute of our docker::run
resource. There are many other options available for the docker::run
resource. Refer to the source code for more details.
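For instance, here is a hedged sketch of a few commonly used docker::run parameters; volumes and env come from the garethr-docker module, while the resource name, paths, and values are purely illustrative.

```puppet
# Illustrative only: mount a host directory into the container and set
# an environment variable for the containerized process.
docker::run {'webapp':
  image   => 'phusion/baseimage',
  ports   => '80',
  volumes => ['/srv/webapp:/var/www'],
  env     => ['APP_ENV=production'],
}
```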
We then used docker ps to list the running containers on our shipyard machine. We parsed out the listening port on our local machine and verified that netcat was listening.
There's more...
Docker is a great tool for rapid deployment and development. You can spin up as many containers as you need on even the most modest hardware. One great use for Docker is having containers act as test nodes for your modules: you can create a Docker image that includes Puppet, and then have Puppet run within the container. For more information on Docker, visit http://www.docker.com/.