Wednesday, June 12, 2013

CentOS:: BIND:: Forcing a refresh of a BIND DNS Slave

A quick one, before I forget.

When running a pair of BIND DNS servers in a master/slave configuration, you will sometimes need to force a refresh on the slave if you don't want to wait for the automatic zone transfer to happen. To do this, simply run the following command,

# rndc refresh zone_name

Where zone_name is the name of the zone to refresh. If you want to force a transfer without checking the serial, run,

# rndc retransfer zone_name
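To check whether the slave actually needs a transfer before forcing anything, you can compare SOA serials. A minimal sketch, assuming `dig` is installed and using placeholder hostnames; note the simple numeric compare below ignores RFC 1982 serial wrap-around:

```shell
#!/bin/sh
# Sketch: compare master and slave SOA serials before forcing a transfer.

# Pull the serial (3rd field) out of a `dig +short SOA` answer line,
# e.g. "ns1.example.com. admin.example.com. 2013061201 3600 900 604800 86400"
serial_from_soa() {
  echo "$1" | awk '{print $3}'
}

# True (exit 0) when the slave serial is lower than the master serial.
slave_is_behind() {
  master=$1
  slave=$2
  [ "$slave" -lt "$master" ]
}

# Real usage would look something like (hostnames are examples):
#   m=$(serial_from_soa "$(dig +short SOA zone_name @master.example.com)")
#   s=$(serial_from_soa "$(dig +short SOA zone_name @slave.example.com)")
#   slave_is_behind "$m" "$s" && rndc refresh zone_name
```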

Tuesday, June 11, 2013

Forcing a restart of VM on XenCenter


  • So one of our VMs was running at 100% CPU with no available memory, and the OOM killer kicked in... 

On XenCenter you have two options: restart the machine gracefully, or force it. I first tried to do it gracefully, but ran into an issue, as the restart command was taking nearly 20 minutes just to start. Then I tried right-click force shutdown, but this did nothing. It seems that when XenCenter wants to gracefully restart a machine, it connects to the machine to initiate the shutdown/restart command. If the VM is hung, then obviously this process will just sit and wait.

One option would be to forcefully kill the tasks associated with this. This can be done from the command line on the XenServer host,

# xe task-list
uuid ( RO)                : d4c42a68-7a97-f774-6fe6-ec8b1b1d03b9
          name-label ( RO): Connection to VM console
    name-description ( RO): 
              status ( RO): pending
            progress ( RO): 0.000
# xe task-cancel force=true uuid=d4c42a68-7a97-f774-6fe6-ec8b1b1d03b9
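If several tasks are stuck, you can cancel every pending one in a loop. A small sketch; the awk parsing assumes the `xe task-list` output format shown above, so verify it against your XenServer version first:

```shell
#!/bin/sh
# Sketch: extract the uuid of every task whose status is "pending".
# Expects `xe task-list` style output on stdin.
pending_task_uuids() {
  awk '/^uuid/ {uuid=$NF} /status.*pending/ {print uuid}'
}

# Real usage on the XenServer host:
#   xe task-list | pending_task_uuids | while read -r uuid; do
#     xe task-cancel force=true uuid="$uuid"
#   done
```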

If this doesn't work, then you can always restart the xe toolstack with the following command,

# xe-toolstack-restart
Stopping xapi: ..                                          [  OK  ]
Stopping the v6 licensing daemon:                          [  OK  ]
Stopping the memory ballooning daemon:                     [  OK  ]
Stopping perfmon:                                          [  OK  ]
Stopping the xenopsd daemon:                               [  OK  ]
Stopping the XCP RRDD daemon:                              [  OK  ]
Stopping the XCP networking daemon:                        [  OK  ]
Stopping the fork/exec daemon:                             [  OK  ]
Starting the fork/exec daemon:                             [  OK  ]
Starting the XCP networking daemon: .                      [  OK  ]
Starting the XCP RRDD daemon:                              [  OK  ]
Starting the xenopsd daemon:                               [  OK  ]
Starting perfmon:                                          [  OK  ]
Starting the memory ballooning daemon:                     [  OK  ]
Starting the v6 licensing daemon:                          [  OK  ]
Starting xapi: OK                                          [  OK  ]

After I ran this, the status in the GUI still showed the VM as running. I then proceeded to forcefully shut down the VM, which did work. Doing it this way you might end up losing data, but at least you get the VM back :)


Tuesday, June 4, 2013

[how to]:: puppet master setup (basic setup)

Now let's set up an initial puppet master server. This entry will only show a basic setup to get you started. I'll add more advanced tutorials after this :)

Let's get started!

The initial file structure under /etc/puppet

[root@puppetmaster puppet]# ls -l
-rw-r--r--  1 root root 4133 May 22 19:29 auth.conf
-rw-r--r--  1 root root 1462 May 22 19:27 fileserver.conf
drwxr-xr-x  2 root root 4096 May 22 19:29 manifests
drwxr-xr-x. 2 root root 4096 May 22 19:29 modules
-rw-r--r--  1 root root  853 Jun  4 09:40 puppet.conf

Now let's start the puppet master and confirm that it's running.
[root@puppetmaster test]# /etc/init.d/puppetmaster start
Starting puppetmaster:                                     [  OK  ]
[root@puppetmaster test]# ps aux | grep puppet
puppet   15012  0.4  2.2 141272 42484 ?        Ssl  12:58   0:00 /usr/bin/ruby /usr/bin/puppet master
root     15017  0.0  0.0 103244   828 pts/0    S+   12:58   0:00 grep puppet

For this tutorial we will run the server and agent on the same server. Next, we will be invoking the puppet agent manually with the following command,
[root@puppetmaster test]# puppet agent --test --server
Info: Retrieving plugin
Info: Caching catalog for
Info: Applying configuration version '1370343823'
Notice: Finished catalog run in 0.07 seconds

From this you can see that nothing has been applied to this server. We need to set up the manifests and create a test module to see any action.

Now we will focus on the manifests and modules folders. The rest of the configs can be left at their defaults.

The Manifests

The manifests directory contains the site, included modules, and node (client) configurations. These configurations can live in one file or be split up into multiple files. The approach we are going to follow is splitting the files, which keeps the directory structure neater and cleaner.

Under the manifests directory the following files must be created,


# First file read, contains site configuration for puppet master

#import modules and nodes 
import "modules"
import "nodes"

filebucket { main: server => puppetmaster}

#global defaults

File {backup => main}
Exec {path => "/usr/bin:/usr/sbin:/bin:/sbin"}

This file imports the two files below, modules and nodes, sets a default for backing up files, and finally sets the default path environment used when we execute Linux commands from puppet.


#import test modules
import "test"

In this file you import modules that have been created under the modules folder; we will create the test module a little later.


# This file contains all the configurations for nodes
# Future node definition.

This file is used to assign modules to the servers we specify later. We will leave it blank for now. Let's continue and create our first puppet module.

The Modules

Firstly we need to create the module structure. We can do it manually or have puppet create the structure for us. In this example we will create the necessary directories.

[root@puppetmaster modules]# mkdir -p test/{manifests,templates,files}
[root@puppetmaster modules]# ls -l test/
total 12
drwxr-xr-x 2 root root 4096 Jun  4 12:08 files
drwxr-xr-x 2 root root 4096 Jun  4 12:08 manifests
drwxr-xr-x 2 root root 4096 Jun  4 12:08 templates

The files directory contains files associated with this module: static configuration files, scripts that need to ship with your module, anything your module requires to work. For our example we will add a small script.

The module will not work without a manifest; the manifest tells puppet what to do. So in our manifest we will have puppet deploy the script and execute it for us.


class test {

 file { "/tmp/":
  owner  => root,
  group  => root,
  mode   => 755,  # executable, since the exec below runs it
  source => "puppet:///modules/test/",
  ensure => file,
 } ->
 exec { "/tmp/":
  cwd => "/tmp",
 }

 # the "->" above means the exec must only run once the file exists.
 # The same ordering can also be written with an explicit require:
 # exec { "/tmp/":
 #  cwd     => "/tmp",
 #  require => File["/tmp/"],
 # }
 # Test both. This is just a small introduction to resource ordering in puppet.

 $hn      = $fqdn
 $ip      = $ipaddress_eth0
 $virtual = $is_virtual
 $dist    = $lsbdistdescription

 file { "/tmp/info.txt":
  owner   => nobody,
  group   => root,
  mode    => 644,
  content => template("test/info_txt.erb"),
 }
}

The last folder, templates, is used to keep configuration templates. Sometimes you want different configurations depending on the server or setup. A good example of template usage is managing many virtual hosts under apache or nginx. We will visit this at a later stage.

For now let's create a small template under templates that generates a file containing the server name and IP address: info_txt.erb.
Templates are called from the manifests. Review init.pp and look for the file resource /tmp/info.txt to see how a template is invoked.
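Based on the catalog run output later in this post, info_txt.erb could look something like the sketch below (the variable names match those set in init.pp; treat this as an assumption, not the exact original file):

```erb
#Deployed by puppet

fqdn is <%= hn %>
my ip is <%= ip %>

<% if virtual == "true" %>I am a Virtual Machine<% end %>

Linux Distro is <%= dist %>
```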

So finally we have a test module. The directory structure should finally look like this,
[root@host modules]# ls -l */*
total 4
-rw-r--r-- 1 root root 83 Jun  4 12:49

total 4
-rw-r--r-- 1 root root 772 Jun  4 12:49 init.pp

total 4
-rw-r--r-- 1 root root 75 Jun  4 12:49 info_txt.erb

At this point if you run the puppet agent again, nothing will be applied to the current host. We now need to set up nodes.pp to tell puppet that this node needs the test module applied to it. Let's proceed!


# This file contains all the configurations for nodes
node "" {
 class{test:} # apply the test module to this node
}

Now let's run the puppet agent manually again and check the results.

[root@puppetmaster tmp]# puppet agent --test --server
Info: Retrieving plugin
Info: Caching catalog for
Info: Applying configuration version '1370345145'
Notice: /Stage[main]/Test/File[/tmp/]/ensure: defined content as '{md5}1de0c2752e957cafb6380ee53240850b'
Notice: /Stage[main]/Test/Exec[/tmp/]/returns: executed successfully
Notice: /Stage[main]/Test/File[/tmp/info.txt]/ensure: defined content as '{md5}8e344aa44044ed692b510f5f23c672f3'
Notice: Finished catalog run in 0.30 seconds
[root@puppetmaster tmp]# cat info.txt 
#Deployed by puppet

fqdn is
my ip is

I am a Virtual Machine

Linux Distro is CentOS release 6.4 (Final)
[root@puppetmaster tmp]# ls -l 
total 12
-rw-r--r-- 1 nobody root 159 Jun  4 13:31 info.txt
-rwxr-xr-x 1 root   root  83 Jun  4 13:31
[root@puppetmaster tmp]#

From the top, the Notice entries show that the script was deployed and executed successfully, and that the template file was created with the content we defined. Catting info.txt confirms that the template filled in the correct information, and the final ls -l shows the files that were created.

And that's it for the initial setup of a puppet master. Remember that the puppet master and client are on the same server here. Next time I will show you how to add a puppet client to a puppet master.


Monday, June 3, 2013

[how to]:: Install puppet

Ok, so my first puppet post was a little advanced, so I thought maybe I should start with the basics of puppet. Puppet starts getting pretty awesome as soon as you need to manage more than 5 servers and over 60 virtual hosts spread across various servers. But before we proceed, let's install puppet and run our first puppet script!


Assuming you have CentOS, you'll need to add the correct puppetlabs yum repositories. This can be done by using the following command,

# rpm -ivh

Once installed, you need to use yum to install puppet and puppet server.

# yum install puppet puppet-server

Confirm that puppet is installed by running,
# puppet --version

Let's start with a basic puppet script. Create a file called test.pp and add the following text to it,

## test.pp
file {"/tmp/test":
 ensure => file,
 owner => root,
 group => root,
 content => "hello im a file",
 mode => 644,
}

To apply this script you will need to run puppet with the following command,

# puppet apply test.pp 
Notice: /Stage[main]//File[/tmp/test]/ensure: defined content as '{md5}e32d2cadd86f222aa80e3fd11d22d0cd'
Notice: Finished catalog run in 0.08 seconds

Great, from the Notice you can see that a file was created at /tmp/test. You can cat the file to confirm the content "hello im a file".

Next post we will move this test.pp to the puppet master, add a server to the puppet master, and then create the /tmp/test file on the client server.



Production Class Puppet Master Server


Running puppetmaster with the built-in webserver from Ruby is not exactly scalable, and puppet will suffer from performance issues. After researching, I've come across the following solution,
  • nginx, with ruby passenger.
This is more robust and can scale easily to a couple of hundred hosts, although I have not personally run it in an environment of more than 40 servers. So let's start with the setup.


  1. Using the latest CentOS 6.4
  2. Puppet repo added to yum (# rpm -ivh
  3. Remove any nginx installations (# yum remove nginx)
Once your server virtual machine is updated, let's get ready for the installation.


  1. # yum install -y ruby rubygems ruby-devel.x86_64 puppet puppet-server gcc make pcre-devel zlib-devel openssl-devel pam-devel curl-devel gcc-c++
  2. # gem install rack passenger
  3. # /usr/lib/ruby/gems/1.8/gems/passenger-4.0.5/bin/passenger-install-nginx-module 
    • Follow the onscreen instructions. This script will install the passenger module, as well as getting the latest nginx and compiling it under /opt/nginx.
    • This will take a while :)
  4. A couple of links are required
    • # ln -s /opt/nginx/conf /etc/nginx
    • # ln -s /opt/nginx/logs /var/log/nginx
  5. Let's install the startup script and sysconfig file,
    • # curl -L -o /etc/init.d/nginx; chmod +x /etc/init.d/nginx
    • # curl -L -o /etc/sysconfig/nginx
  6. nginx configuration setup
    • # mkdir -p /etc/nginx/conf.d
    • We will be placing the puppet master virtual host under the conf.d directory. This keeps things neat and tidy!
    • # curl -L -o /etc/nginx/nginx.conf
    • # curl -L -o /etc/nginx/conf.d/puppet.conf
    • Edit the puppet.conf file and replace the server name with your FQDN!
  7. Now we run puppet in master mode to create all the certificates required.
    • # puppet master --no-daemonize --verbose
    • Once all certs are created press CTRL+C to quit.
    • Verify that the certs were created, # ls -l /var/lib/puppet/ssl/*/*
  8. Next we setup the Rack.
    • # mkdir -p /etc/puppet/rack/public
    • # curl -L -o /etc/puppet/rack/
    • # chown -R puppet:puppet /etc/puppet/rack
  9. Add nginx to startup,
    • # chkconfig --add nginx; chkconfig nginx on; chkconfig puppetmaster off
If everything went to plan you should now be running puppet master through nginx and passenger on port 8140 (SSL).
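A quick way to sanity-check the result is to look for a listener on port 8140. A minimal sketch; the parsing assumes standard Linux `netstat -tln` output, where the 4th column is the local address:

```shell
#!/bin/sh
# Sketch: exit 0 if something is listening on the given TCP port.
# Reads `netstat -tln` style output on stdin.
listening_on() {
  awk -v port=":$1$" '$4 ~ port {found=1} END {exit !found}'
}

# Real usage:
#   netstat -tln | listening_on 8140 && echo "puppet master is up"
```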

Please ensure that all the settings in the files are correct.

Feel free to comment :)




I want to know who acknowledged a nagios alert I've just received. 

Here's how,

Copy this script to your nagios plugins directory,

Then make sure you specify this file in your nagios configuration, under the commands file or section of your configuration (depending on how your nagios is set up),

define command{
        command_name    notify-by-email

Then under your contacts set the service_notification_commands to notify-by-email as example below,

define contact{
use generic-contact
name email-only
service_notification_commands   notify-by-email
~ truncated ~

Enjoy! Remember this is just a quick overview of how to. If you have any question regarding your nagios installation, then please ask.


CentOS:: Resizing LVM (Physical / Virtual)

A bit of background: we use XenServer and VM templates for easy deployment of virtual machines. We have a standard disk configuration of the following,

  • Disk /dev/xvda: 12.9 GB in two partitions
    • /dev/xvda1 ext4 /boot (500MB)
    • /dev/xvda2 the rest used as a physical volume (PV) for the LVM volume group called /dev/mapper/vg_centos63base-lv_root.

Action Plan:

I need to have an xxGB partition because some developer needs more disk space to accomplish his/her task. There are two ways of doing this: create an extra xxGB disk and attach it to the VM under a new directory/partition, or extend the current LVM with the attached disk (what happens when you lose the disk you extended your LVM with?). Both ideas seem clumsy and particularly inefficient, especially if you have a couple of VMs.

The approach I'm following takes a fair bit more time, but I regard it as an elegant solution. I would also recommend taking a snapshot before the resize and scheduling downtime, as you need to reboot the VM a couple of times.

So here’s the action list,

  1. Run # fdisk -l and verify the size of your /dev/xvda
  2. Shutdown VM.
  3. Increase the virtual harddrive, please follow each vendors specific approach.
  4. Power on the VM.
  5. Run # fdisk -l and verify that the size of the /dev/xvda device increased to your specified size; in my case I increased the disk from 12GB to 16GB.
  6. Run # fdisk /dev/xvda
    1. Under fdisk press 'p', which will display your current layout.
    2. We need to delete the 2nd partition and recreate it with the correct settings.
    3. Press 'd' and enter '2'. You have now deleted the partition; we need to recreate it.
    4. Press 'n', then 'p', then '2', and accept the defaults. You have created a new 2nd partition. (The new partition must start at the same sector as the old one, or the LVM data on it will be lost; with the first partition unchanged, fdisk's default start is usually correct, but verify it against the 'p' output from step 1.)
    5. Press 'p' and you'll see the partition, but something is incorrect: our partition was a Linux LVM.
    6. Press 't', enter '2', then enter '8e'. This is the hex code for the Linux LVM system type.
    7. Press 'p' and you'll see the partition system type is now correct.
    8. Press 'w' to write the changes.
  7. Now reboot the VM. The kernel needs to re-read the partition table of /dev/xvda
  8. Run # pvdisplay will show the current Physical Volume Size.
  9. Run # pvresize /dev/xvda2, then run # pvdisplay to display the extra storage available.
  10. # vgdisplay  - will show you the exact amount of “Free  PE / Size”
  11. Now we can just increase our root logical volume #
    • lvextend -L Size /dev/mapper/VG 
    • OR
    • lvextend -l +100%FREE /dev/mapper/VG
    • In my case I used,
      •  # lvextend -l +100%FREE /dev/mapper/vg_centos63base-lv_root
  12. We Need to resize the ext3/4 filesystem,
    • # resize2fs /dev/mapper/vg_centos63base-lv_root
  13. # df -h - Now you will see the extra diskspace!
Finally do a reboot to confirm. 
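Steps 8 to 12 can be wrapped in a small helper. A sketch using the volume names from this post's example: the function parses `vgdisplay` output, while the privileged commands are left as comments since they must run as root on the actual VM.

```shell
#!/bin/sh
# Sketch: parse the free physical extents out of `vgdisplay` output so you
# can confirm pvresize freed up space before extending the logical volume.
free_extents() {
  awk '/Free  PE/ {print $5}'
}

# Real usage (as root, after the VM sees the resized /dev/xvda):
#   pvresize /dev/xvda2
#   vgdisplay vg_centos63base | free_extents    # should now be > 0
#   lvextend -l +100%FREE /dev/mapper/vg_centos63base-lv_root
#   resize2fs /dev/mapper/vg_centos63base-lv_root
#   df -h /
```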

Have fun. (This is an updated entry from my old blog on wordpress.)