Rsync snapshots / rolling backups

Today I had the task of updating my backup system. For a long time I didn’t bother with rolling backups because, well, I just didn’t care. Now, however, I wanted to set something up. I remembered that long ago I had done this before, based on the excellent, albeit dated, write-up from Mike Rubel. After some hacking I rewrote his script to make it a bit more useful in my environment. The script now accepts a single argument, which points to a configuration file. To download the initial version of this tool, which I call ‘snapshots’, click on the link below.


Perl link extractor

Below is a simple script I use almost daily which I thought I would share with you all. The script fetches one or more URLs, extracts all hyperlinks from the fetched data, cleans them up a bit, and prints the result to standard output.

use strict;
use HTML::LinkExtractor;
use LWP::Simple;

unless(@ARGV){
        print "Usage:\n\t$0 URL URL URL ...\n";
        exit;
}

# Fetch and parse each link
for my $link (@ARGV){
        my $LX = new HTML::LinkExtractor;
        my $page = get($link);
        next unless defined $page;
        $LX->parse(\$page);

        # figure out the URL base.
        my $base;
        if($link =~ /^(https?:\/{2}[^\/]+)\/?/i){
                $base = $1;
        }

        # strip the document name so relative links resolve correctly
        if($link !~ /\/$/){
                my @link = split(/\//, $link);
                pop @link;
                $link = join('/', @link);
        }

        for(@{ $LX->links }){
                if(lc $_->{tag} eq 'a'){
                        my $url;
                        if($_->{href} =~ /^\//){
                                $url = $base . $_->{href};
                        } else {
                                $url = $link . '/' . $_->{href};
                        }
                        print qq{$url\n};
                }
        }
}
I call it linkext. Below is an example usage to fetch all of the links available on the Intel Lustre download page:

$ linkext <URL-of-the-download-page>

So then, to make this a bit more useful, let’s filter the output with egrep and pass the results to xargs, which executes wget to fetch our files:

linkext | egrep "\.rpm$" | xargs wget

This starts downloading the various files. Or you can of course have xargs execute echo instead, to display the full command line that wget would run:

$ linkext | egrep "\.rpm$" | xargs echo wget

I hope you all find this useful.

Parse smb.conf based on an NFS mapping directory

So I’m not sure how others run their home networks, but in my case I have a single large NFS NAS. This NAS hosts both Samba and NFS shares, as well as a web portal.

After many years of trying different configurations for my primary data stores, I’ve finally come up with a simple method which is shared across all clients. I export a directory called ‘mapping’, which maps back to the mounts and subdirectories containing my various categories: Music, Pictures, Software, and so on. These are mapped via symbolic links. An example of this would be:

Music -> /nfs/hal/data1/Music

Well, this works great, as all of the clients accessing the data (including the web portal) mount my various arrays (data0 – data4) this way. This also works out well if I have problems and want to shift data around: I can quickly remove the Music link and point it at, say, /nfs/hal/data4/Music.
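The repointing itself is a one-liner, though a plain `ln -s` will fail (or nest a link inside the target) when the link already exists, so `-f` and `-n` are needed. The `remap` helper and `MAPDIR` variable below are my own shorthand:

```shell
# Repoint a mapping symlink in one step. -f replaces an existing link; -n
# treats an existing link-to-directory as the link itself rather than
# descending into it. MAPDIR and remap() are illustrative names.
MAPDIR=${MAPDIR:-/exports/mapping}
remap() {
    ln -sfn "$2" "$MAPDIR/$1"
}
# e.g.  remap Music /nfs/hal/data4/Music
```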

Now onto the Samba part. Even though Samba seemingly has zero problems following symbolic links, you can’t point a share directly at one. I have no idea why, and I really don’t want to get into the mess that is Samba debugging. So the quick solution for me is to generate an smb.conf template file. It’s basically my full-blown smb.conf file, except all path references are replaced with placeholders, i.e.:

comment = Music
writable = no
browseable = yes
guest ok = yes
create mask = 0777
path = %Music%

This way I can automate the replacement of paths based on my mapping directory. Below is the simple script I’m using to do this:


use strict;

our $mapping  = '/exports/mapping';
our $template = '/etc/samba/smb.conf.template';

# Build a name => target table from the symlinks in the mapping directory
my %list;
if(opendir(my $dh, $mapping)){
        for(readdir $dh){
                if($_ eq '.' || $_ eq '..'){
                } else {
                        my $path = readlink($mapping . '/' . $_);
                        $list{ $_ } = $path if defined $path;
                }
        }
        closedir $dh;
} else {
        die "Unable to read directory: $mapping $!\n";
}

# Slurp the template and swap each %Name% placeholder for its real path
if(open(my $fh, $template)){
        my $data;
        { local $/; $data = <$fh>; }
        close $fh;

        for(keys %list){
                $data =~ s/\%$_\%/$list{$_}/g;
        }

        print $data;
} else {
        die "Unable to read template: $template $!\n";
}

ffmpeg / avconv image to video file…

So while playing around with one of the local ski resorts’ websites I decided: hey, wouldn’t it be neat to time-lapse the whole day by grabbing a frame a second from the public web cam? After collecting my images I decided to convert them into a single video stream with ffmpeg. I thought I would be clever and just use the placeholder option %d in the input file name and go from there. Well, this didn’t work, failing with a rather ambiguous error:

$ avconv -f image2 -i %d.jpeg -r 24 -s 640x480 foo.avi
avconv version 0.8.5-4:0.8.5-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
  built on Jan 24 2013 18:01:36 with gcc 4.6.3
[image2 @ 0xdf27c0] max_analyze_duration reached
Input #0, image2, from '%d.jpeg':
  Duration: 00:02:32.64, start: 0.000000, bitrate: N/A
    Stream #0.0: Video: mjpeg, yuvj422p, 640x480, 25 fps, 25 tbr, 25 tbn, 25 tbc
No such file or directory: %d.jpeg

Well, that’s frustrating. The directory of images was named based on epoch time (seconds since 1970), so each image’s file name was the second it was captured. After some meddling around I discovered that ffmpeg / avconv isn’t so smart: it requires that your image sequence start at 0 and increment upward one by one for every image you wish to encode. A quick fix was to rename the images accordingly, after which the video encoding completed successfully.
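That renaming step can be sketched as a small shell function. The function name is mine, and it assumes the epoch-stamped frames are alone in their directory (so the targets 0.jpeg, 1.jpeg, … never collide with the ten-digit epoch names) and contain no whitespace:

```shell
# Renumber epoch-stamped frames (e.g. 1358990401.jpeg) into the 0-based,
# gap-free sequence (0.jpeg, 1.jpeg, ...) that avconv's %d pattern expects.
# Sorting numerically preserves the capture order.
renumber_frames() {
    i=0
    for f in $(cd "$1" && ls *.jpeg | sort -n); do
        mv "$1/$f" "$1/$i.jpeg"
        i=$((i+1))
    done
}
```

After running this against the capture directory, the original `avconv -f image2 -i %d.jpeg` invocation proceeds normally.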