New Perl module: Filesys::Virtual::Chroot


I’ve been trying hard lately to take useful code I’ve written over the years for different projects (such as my predictive anti-spam system Ruckus scanmail) and republish the libraries with more generic names under CPAN.

Filesys::Virtual::Chroot provides advisory functions for creating a virtual chroot environment. This is useful when you wish to lock a process which takes input from the wild into a set of directories.
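The general idea, independent of the module’s actual interface (the code below is only a conceptual sketch of the advisory check, not the Filesys::Virtual::Chroot API), is to canonicalise every requested path and refuse anything that resolves outside the designated base directory:

#!/usr/bin/perl
# Conceptual sketch of an advisory "virtual chroot" check.  This is NOT the
# Filesys::Virtual::Chroot interface, just an illustration of the idea:
# resolve the requested path and refuse anything outside the base directory.
use strict;
use warnings;
use Cwd qw(abs_path);

# Base directory the process is confined to (example path, assumed canonical).
my $base = '/var/spool/scanmail';

sub within_base {
        my ($requested) = @_;
        # abs_path() resolves symlinks and '..' components; it returns undef
        # (or dies, depending on the Cwd backend) for paths that do not exist.
        my $real = eval { abs_path($requested) };
        return 0 unless defined $real;
        return $real eq $base || index($real, "$base/") == 0;
}

# Paths that resolve inside $base are allowed; anything else, including
# paths that do not exist, is refused.
print within_base('/var/spool/scanmail/incoming/msg1') ? "allowed\n" : "denied\n";
print within_base('/var/spool/scanmail/../../etc/passwd') ? "allowed\n" : "denied\n";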

This library can be downloaded here directly: Filesys-Virtual-Chroot-1.3.tar

Or pulled off of CPAN with:

sudo cpan Filesys::Virtual::Chroot


Another year, another LUG.

Well, it’s been another year and another Lustre User Group meeting. There were many interesting discussions, though in general I found the most useful session to be the one held after the LUG proper had wrapped up: the developer meeting, for me at least, was an excellent use of my time.

Got to hang out with some colleagues and old friends.

Quite a few ‘primary’ developers at the developer meeting.

Multiply-claimed block errors

So your file system has crashed… Upon running e2fsck against it you’re now seeing errors such as:

Pass 1B: Rescanning for multiply-claimed blocks
Multiply-claimed block(s) in inode 4195619:
167904376 167904377 167904378 167904379 167904380 167904381 167904382 167904383 167904384
167904385 167904386 167949296 167949297 167949298 167949299 167949300 167949301 167949302
167949303 167949304 167949305 167949306 

What does this all mean?

Well, unfortunately for you, this means that your file system has incurred major damage.

What’s happened is the inode structures around this part of the file system have been damaged to the extent that multiple inodes are claiming to own the same data blocks.

There is no way to really fix this without rolling back the file system or restoring from backup.

The inode which was damaged in the case above is 4195619. Other inodes discovered earlier in the scan have already laid claim to the blocks listed above, and now this inode is trying to lay claim to them as well. Multiple inodes cannot legitimately own the same data blocks (hard links are not an exception here; they are multiple directory entries pointing at the same inode, not multiple inodes sharing blocks).

Sadly, the inodes which were discovered before inode 4195619 could be the ones that are actually damaged, and this inode could be fine. The only way to know for sure would be to dump the inode and its associated blocks out to a file, then verify that file against a checksum you took previously.
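If you did keep such checksums, one way to pull an inode’s data out for comparison is debugfs’ dump request (the output path below is just an example):

# debugfs -R 'dump <4195619> /tmp/inode-4195619.out' /dev/sdXY
# sha1sum /tmp/inode-4195619.out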

Determining the size of an existing internal journal on EXT3 / EXT4 / LDISKFS file systems

Just a quick note about figuring out existing internal journal sizes for EXT3 / EXT4 and LDISKFS file systems:

The first thing to do is determine the journal inode. This is usually inode 8, however it may change depending on the file system, so it’s worth checking for sure:

# tune2fs -l /dev/sdXY | grep -i "journal inode"

This command returns the inode number at which the journal resides:

Journal inode: 8

Next you’ll need to probe that inode directly with the debugfs tool. This can be done with the following commands:

debugfs /dev/sdXY
debugfs 1.42.9 (4-Feb-2014)
debugfs: stat <8>

Which results in the following output:

Inode: 8   Type: regular    Mode:  0600   Flags: 0x80000
Generation: 0    Version: 0x00000000:00000000
User:     0   Group:     0   Size: 134217728
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 262144
Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x54278f31:00000000 -- Sat Sep 27 22:31:45 2014
 atime: 0x54278f31:00000000 -- Sat Sep 27 22:31:45 2014
 mtime: 0x54278f31:00000000 -- Sat Sep 27 22:31:45 2014
crtime: 0x54278f31:00000000 -- Sat Sep 27 22:31:45 2014
Size of extra inode fields: 28
EXTENTS:
(0-32766):2655233-2687999, (32767):2688000

What we’re interested in here is the Blockcount:

Links: 1   Blockcount: 262144

So in this case the Blockcount is 262144. Note that this figure is in 512-byte units, so the journal occupies 262144 * 512 = 134217728 bytes, or 128 MiB (equivalently 32768 of the file system’s 4096-byte blocks, which matches the extent range shown above).

Also, on some newer versions of debugfs there is no arithmetic to do at all: the Size: field of the journal inode reports the size in bytes directly, and it should agree with the figure derived from the block count:

User:     0   Group:     0   Size: 134217728
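A quick one-liner confirms that the two figures agree (Blockcount in 512-byte units versus Size in bytes):

# perl -e 'print 262144 * 512, " bytes = ", 262144 * 512 / 1024**2, " MiB\n"'
134217728 bytes = 128 MiB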

Rsync snapshots / rolling backups

Today I had the task of updating my backup system. For a long time I didn’t bother with rolling backups because, well, I just didn’t care; however, I now wanted to set something up. I remembered that long ago I had done this before, based on the excellent, albeit dated, write-up from Mike Rubel. After some hacking I rewrote his script to make it a bit more useful in my environment. The script now accepts a single argument, which points to a configuration file. To download the initial version of this tool, which I’ve called ‘snapshots’, click on the link below.

snapshots.tar.gz
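For anyone who just wants the gist without downloading it, here is a minimal sketch of the underlying technique (Rubel’s cp -al hard-link rotation followed by an rsync into the newest snapshot). This is not the released script; the paths and snapshot count are only example values:

#!/usr/bin/perl
# Minimal sketch of Rubel-style hard-link snapshot rotation; NOT the
# released 'snapshots' script.  Source, destination and snapshot count
# are example values.
use strict;
use warnings;
use File::Path qw(remove_tree);

my $source = '/home/';          # trailing slash: sync the contents of /home
my $dest   = '/backup/home';    # snapshot root, must already exist
my $keep   = 4;                 # keeps snapshot.0 (newest) .. snapshot.3 (oldest)

# Drop the oldest snapshot, then shift the remaining ones up by one.
my $oldest = "$dest/snapshot." . ($keep - 1);
remove_tree($oldest) if -d $oldest;
for my $i (reverse 1 .. $keep - 2){
        rename("$dest/snapshot.$i", "$dest/snapshot." . ($i + 1))
                if -d "$dest/snapshot.$i";
}

# Hard-link the newest snapshot into snapshot.1 so unchanged files keep
# sharing disk blocks, then rsync the live data into snapshot.0.
if(-d "$dest/snapshot.0"){
        system('cp', '-al', "$dest/snapshot.0", "$dest/snapshot.1") == 0
                or die "cp -al failed: $?\n";
}
system('rsync', '-a', '--delete', $source, "$dest/snapshot.0/") == 0
        or die "rsync failed: $?\n";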

ZPcheck version 1.4 released

Hi All,

Today I’m releasing ZPcheck version 1.4, which is the initial public release. It’s a simple tool to regularly check and report on ZFS pools. The reports generated by ZPcheck include smartd logging (if available) from your system logger, which is useful when you’re dealing with problematic pools and providers. Anyway, the tool is released under the GPL 2.0 and is free for all to use. It requires Perl 5.10 or later and some Perl modules, all of which can be installed easily through CPAN.

Click here to download zpcheck-1.4.tar.gz

Parse smb.conf based on an NFS mapping directory

So I’m not sure how others run their home networks but in my case I have a single large NFS NAS. This NAS hosts both Samba and NFS shares as well as a web portal.

After many years of trying different configurations for my primary data stores, I’ve finally come up with a simple method which is shared across all clients. The way this is done is to export a directory called ‘mapping’, which maps back to the mounts and subdirectories containing my various categories (Music, Pictures, Software, and so on). These are mapped via symbolic links. An example of this would be:

Music -> /nfs/hal/data1/Music

Well, this works great, as all of the clients accessing the data (including the web portal) mount my various arrays (data0 – data4) this way. It also works out well if I have problems and want to shift data around: I can quickly remove the Music link and point it at, say, /nfs/hal/data4/Music.
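That remap is just a matter of re-pointing the symlink in the mapping directory (using the /exports/mapping location referenced in the script further down):

# ln -sfn /nfs/hal/data4/Music /exports/mapping/Music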

Now onto the Samba part. Even though Samba seemingly has zero problems following symbolic links, you can’t point a share directly at one. I have no idea why, and I really don’t want to get into the mess that is Samba debugging. So the quick solution for me is to generate smb.conf from a template file. It’s basically my full-blown smb.conf file, except all path references are replaced with placeholders, e.g.:

[Music]
comment = Music
writable = no
browseable = yes
guest ok = yes
create mask = 0777
path = %Music%
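With the Music mapping shown earlier, the rendered stanza comes out as:

[Music]
comment = Music
writable = no
browseable = yes
guest ok = yes
create mask = 0777
path = /nfs/hal/data1/Music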

This way I can automate the replacement of paths based on my mapping directory. Below is the simple script I’m using to do this:

#!/usr/bin/perl

use strict;
use warnings;

# Directory of symbolic links (Music -> /nfs/hal/data1/Music, etc.)
our $mapping  = '/exports/mapping';
# smb.conf with %Name% placeholders in place of real paths.
our $template = '/etc/samba/smb.conf.template';

# Build a hash of link name => target path from the mapping directory.
my %list;
if(opendir(my $dh, $mapping)){
        for(readdir $dh){
                next if $_ eq '.' || $_ eq '..';
                my $path = readlink($mapping . '/' . $_);
                if($path){
                        $list{ $_ } = $path;
                }
        }

        closedir($dh);
} else {
        die "Unable to read directory: $mapping $!\n";
}

# Slurp the template and substitute each %Name% placeholder with its target.
if(open(my $fh, '<', $template)){
        my $data;
        { local $/; $data = <$fh>; }
        close($fh);

        for(keys %list){
                $data =~ s/\%\Q$_\E\%/$list{$_}/g;
        }

        print $data;
} else {
        die "Unable to read template: $template $!\n";
}

exit;
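Regenerating the live configuration is then just a matter of redirecting the script’s output and asking Samba to reload it (the script name below is simply whatever you saved it as):

# ./smb-template.pl > /etc/samba/smb.conf
# smbcontrol all reload-config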

Simple script to alert you on ZFS pool problems

So after looking around this morning for a simple script to alert me when the ZFS pools on my new file server have problems, I found a few; the problem was that none of them worked correctly. Strangely, these scripts were extremely simple, so I was a bit surprised any had problems. Because of this I took five minutes to knock together a portable Perl script that will email you in the event of problems:

#!/usr/bin/perl

use strict;
use warnings;
use Sys::Hostname;
use IPC::Open3;

# Who gets the alert and where the zpool binary lives.
our @users = qw(root);
our $zpool = "/sbin/zpool";

# 'zpool status -x' prints "all pools are healthy" when everything is fine;
# grab the fourth word of the first line and compare against it.
our $stat;
chomp($stat = `$zpool status -x | head -1 | awk '{print \$4}'`);

if($stat ne 'healthy'){
 # Build the report: timestamp, hostname, and the full 'zpool status -x' output.
 my $data = ("Date: " . scalar(localtime()) . "\n") .
 ("Hostname: " . hostname()) .
 "\n" .
 `$zpool status -x` .
 "\n";

 # Pipe the report into mail(1) rather than passing the body on a command line.
 my $pid = open3(\*CHLDIN, \*CHLDOUT, \*CHLDERR,
 "mail", "-s", "ZFS Pool errors detected: " . hostname(), @users);

 print CHLDIN $data;
 close(CHLDIN);      # send EOF so mail(1) actually delivers the message
 waitpid($pid, 0);   # and wait for it to finish before we exit
}
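To actually get the alerts, run the script periodically from cron; a crontab entry along these lines (the install path is just an example) checks every fifteen minutes:

*/15 * * * * /usr/local/bin/zfs-alert.pl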

CyanogenMod CM11 (KitKat) and USB Mass Storage mode


So after some messing around I discovered that with the CyanogenMod CM11 (KitKat) release USB Mass Storage mode wasn’t working. Here is what I’ve found.

If you’re like me, you enabled developer mode:

  1. Go to the settings menu, and scroll down to “About phone.” Tap it.
  2. Scroll down to the bottom again, where you see “Build number.” (Your build number may vary.)
  3. Tap it seven (7) times. After the third tap, you’ll see a playful dialog that says you’re four taps away from being a developer. Continue tapping 4 more times and developer mode will appear in your main settings menu.

And probably turned on Android debugging for other applications which might want it (Titanium backup in my case).

Even though the USB Mass Storage mode option is present in the USB connection type menu, nothing seems to happen when it is selected. After some digging I discovered that, apparently, UMS (USB Mass Storage mode) is only available when Android debugging is disabled (who knew?). This is apparently a *new* behaviour, as I’ve never seen this problem in earlier Android releases.

After disabling Android debugging, the dialog popped right up asking me if I wanted to “Turn on USB Storage”. This worked great and allowed me to access my files and mount the SD cards directly.

That all said, as soon as I turned off USB Mass Storage mode my phone crashed and reset itself, so be careful 🙂