Jun 02

CORS Example with PHP

Mostly, when you serve web pages you can do your dynamic scripting on the same server, so cross-origin requests never come up. However, if you do need to call a different server, CORS provides the mechanism. It is explained well at this link: http://enable-cors.org/

"JavaScript and the web programming has grown by leaps and bounds over the years, but the same-origin policy still remains. This prevents JavaScript from making requests across domain boundaries, and has spawned various hacks for making cross-domain requests.

CORS introduces a standard mechanism that can be used by all browsers for implementing cross-domain requests. The spec defines a set of headers that allow the browser and server to communicate about which requests are (and are not) allowed. CORS continues the spirit of the open web by bringing API access to all."

In my case I wanted to prove this out because I have an HTTP-enabled S3 bucket on Amazon AWS, meaning everything is static. So one example of a need for this would be a contact form or a quick lookup in a database. My example below is very simple: the S3 website has a submit button that uses JavaScript to send the request to a PHP-enabled nginx server, and the PHP script looks up the JSON value passed to it in an array and passes the result back.

S3 static web page:

<html>

<head>
<script type="text/javascript">

//window.onload = doAjax();

function doAjax(last_name) {
    var url         = "http://my.phpserver.domain/api/cors_contact.php";
    
    //var request     = JSON.stringify({searchterm:"doe"});
    //var search_st  = 'searchterm:' + 'last_name';
    //alert(search_st);
    var request     = JSON.stringify({searchterm:last_name});
    //alert (request);
    var xmlhttp     = new XMLHttpRequest();

    xmlhttp.open("POST", url);
    xmlhttp.setRequestHeader("Content-Type", "application/json; charset=UTF-8");
    xmlhttp.setRequestHeader("Access-Control-Allow-Origin", "*");
    xmlhttp.setRequestHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
    xmlhttp.setRequestHeader("Access-Control-Allow-Headers", "Content-Type");
    xmlhttp.setRequestHeader("Access-Control-Request-Headers", "X-Requested-With, accept, content-type");

    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
            var jsondata = JSON.parse(xmlhttp.responseText);
            //document.getElementById("id01").innerHTML = xmlhttp.responseText;
            document.getElementById("id02").innerHTML = jsondata.word;
        }
    };

    xmlhttp.send(request);
}

function searchKeyPress(e)
{
    // look for window.event in case event isn't passed in
    e = e || window.event;
    if (e.keyCode == 13)
    {
        document.getElementById('Submit').click();
        return false;
    }
    return true;
}

</script>
</head>
<body>
<div id="id01"></div>
<div id="id02"></div>
<form name="contactform" id="controlsToInvoke" action="">
<table width="450px">
<tr>
<td valign="top"">
  <label for="last_name">Last Name *</label>
 </td>
<td valign="top">
  <input type="text" id="last_name" name="last_name" maxlength="50" size="30" onkeypress="return searchKeyPress(event);" />
 </td>
</tr>
<tr>
<td colspan="2" style="text-align:center">
  <input type="button" id="Submit" value="Submit" onclick="doAjax(document.getElementById('last_name').value)" />
 </td>
</tr>
</table>
</form>
</body>
</html>

PHP script:

In this case you can see that all the CORS headers are handled inside the PHP script. You can of course handle this at the web server instead; details for the various web servers are here: http://enable-cors.org/server.html

<?php
$dictionary = array(
    'doe'      => 'j.doe@email.domain',
    'smith'    => 'asmith@email2.domain',
    'stranger' => 'noname@noname.com'
);

if ($_SERVER['REQUEST_METHOD'] == 'OPTIONS') {
    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_METHOD']) && $_SERVER['HTTP_ACCESS_CONTROL_REQUEST_METHOD'] == 'POST') {
        header('Access-Control-Allow-Origin: *');
        header('Access-Control-Allow-Headers: X-Requested-With, content-type, access-control-allow-origin, access-control-allow-methods, access-control-allow-headers');
    }
    exit;
}

$json = file_get_contents('php://input');
$obj = json_decode($json);

if (array_key_exists($obj->searchterm, $dictionary)) {
    $response = json_encode(array('result' => 1, 'word' => $dictionary[$obj->searchterm]));
}
else {
    $response = json_encode(array('result' => 0, 'word' => 'Not Found'));
}

header('Content-type: application/json');
header('Access-Control-Allow-Origin: *');
echo $response;
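
To sanity check the behaviour from outside a browser, a small Python sketch like the one below can replay both the preflight OPTIONS and the actual POST and print the relevant response headers. This is just a test harness, not part of the example itself; the URL is the one from the page above and the Origin value is a made-up placeholder.

#!/usr/bin/python3
# Minimal CORS check: replay the preflight and the POST the browser would send.
import json
import urllib.request

URL = "http://my.phpserver.domain/api/cors_contact.php"   # endpoint from the example above

# 1. Simulate the browser preflight: an OPTIONS request announcing the real method.
preflight = urllib.request.Request(URL, method="OPTIONS")
preflight.add_header("Origin", "http://example-bucket.s3-website-us-east-1.amazonaws.com")  # placeholder origin
preflight.add_header("Access-Control-Request-Method", "POST")
preflight.add_header("Access-Control-Request-Headers", "content-type")
with urllib.request.urlopen(preflight) as resp:
    print("Preflight status :", resp.status)
    print("Allow-Origin     :", resp.headers.get("Access-Control-Allow-Origin"))
    print("Allow-Headers    :", resp.headers.get("Access-Control-Allow-Headers"))

# 2. The actual POST the JavaScript sends.
body = json.dumps({"searchterm": "doe"}).encode("utf-8")
post = urllib.request.Request(URL, data=body, method="POST")
post.add_header("Content-Type", "application/json; charset=UTF-8")
with urllib.request.urlopen(post) as resp:
    print("POST status      :", resp.status)
    print("Allow-Origin     :", resp.headers.get("Access-Control-Allow-Origin"))
    print("Body             :", resp.read().decode("utf-8"))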

Thanks to this excellent reference; there are a lot of very confusing discussions on this topic, especially around preflight checks.
http://www.mjhall.org/php-cross-origin-resource-sharing/


May 09

Migrating Ubuntu On a ZFS Root File System

I have written a couple of articles about this, here http://blog.ls-al.com/ubuntu-on-a-zfs-root-file-system-for-ubuntu-15-04/ and here http://blog.ls-al.com/ubuntu-on-a-zfs-root-file-system-for-ubuntu-14-04/

This is a quick update. After using VirtualBox to export the guest and import it on a new machine, it did not boot all the way up. I suspect I was simply not seeing the prompt to manually check or skip a file system check, and that the fstab entry for sda1 had changed. Here is what I did. On boot, try "S" to skip if you are stuck; in my case I was stuck after a message about enabling encryption devices, or something to that effect.

Check fstab and note the disk device name.

root@ubuntu:~# cat /etc/fstab
/dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0-part1 /boot/grub auto defaults 0 1 

Check if this device exists.

root@ubuntu:~# ls -l /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0*
ls: cannot access /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0*: No such file or directory

Find the correct device name.

root@ubuntu:~# ls -l /dev/disk/by-id/ata-VBOX_HARDDISK*                    
lrwxrwxrwx 1 root root  9 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528 -> ../../sda
lrwxrwxrwx 1 root root 10 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May  9 15:38 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part2 -> ../../sda2

Keep a copy of the old fstab and update it with the correct name.

root@ubuntu:~# cp /etc/fstab /root

root@ubuntu:~# vi /etc/fstab
root@ubuntu:~# sync
root@ubuntu:~# diff /etc/fstab /root/fstab 
1c1
< /dev/disk/by-id/ata-VBOX_HARDDISK_VBb0249023-5afef528-part1 /boot/grub auto defaults 0 1 
---
> /dev/disk/by-id/ata-VBOX_HARDDISK_VB7e932a52-ef3c41b0-part1 /boot/grub auto defaults 0 1 

Try rebooting now.
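
For future migrations this lookup could be scripted. Below is a rough Python sketch, not something I ran as part of the fix above: it finds the by-id link that currently resolves to /dev/sda1 and prints what the /boot/grub line in fstab should look like, so you can eyeball it before editing.

#!/usr/bin/python3
# Rough sketch: print the proposed fstab change instead of editing in place.
import os

BY_ID = "/dev/disk/by-id"

# Find the ata-* by-id link that currently resolves to /dev/sda1.
new_id = None
for name in os.listdir(BY_ID):
    link = os.path.join(BY_ID, name)
    if name.startswith("ata-") and os.path.realpath(link) == "/dev/sda1":
        new_id = link

if new_id is None:
    raise SystemExit("no by-id link found for /dev/sda1")

with open("/etc/fstab") as f:
    for line in f:
        if "/boot/grub" in line:
            old_dev = line.split()[0]
            print("old: " + line.rstrip())
            print("new: " + line.replace(old_dev, new_id).rstrip())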


Apr 20

Ubuntu ZFS replication

Most of you will know that Ubuntu 16.04 ships with ZFS support in the kernel. Despite the licensing arguments, I see this as a positive move. I recently tested btrfs replication (http://blog.ls-al.com/btrfs-replication/), but being a long-time Solaris admin who knows how easy ZFS makes things, I welcome this development. Here is a quick test of ZFS replication between two Ubuntu 16.04 hosts.

Install zfs utils on both hosts.

# apt-get install zfsutils-linux

Quick and dirty: create zpools backed by an image file, just for the test.

root@u1604b1-m1:~# dd if=/dev/zero of=/tank1.img bs=1G count=1 &> /dev/null
root@u1604b1-m1:~# zpool create tank1 /tank1.img 
root@u1604b1-m1:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1  1008M    50K  1008M         -     0%     0%  1.00x  ONLINE  -

root@u1604b1-m2:~# dd if=/dev/zero of=/tank1.img bs=1G count=1 &> /dev/null
root@u1604b1-m2:~# zpool create tank1 /tank1.img
root@u1604b1-m2:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1  1008M    64K  1008M         -     0%     0%  1.00x  ONLINE  -
root@u1604b1-m2:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1    55K   976M    19K  /tank1

Copy a file into the source file system.

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/W.pdf /tank1/
root@u1604b1-m1:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 12M Apr 20 19:22 W.pdf

Take a snapshot.

root@u1604b1-m1:~# zfs snapshot tank1@snapshot1
root@u1604b1-m1:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1      0      -  11.2M  -

Verify empty target

root@u1604b1-m2:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1    55K   976M    19K  /tank1

root@u1604b1-m2:~# zfs list -t snapshot
no datasets available

Send initial

root@u1604b1-m1:~# zfs send tank1@snapshot1 | ssh root@192.168.2.29 zfs recv tank1
root@192.168.2.29's password: 
cannot receive new filesystem stream: destination 'tank1' exists
must specify -F to overwrite it
warning: cannot send 'tank1@snapshot1': Broken pipe

root@u1604b1-m1:~# zfs send tank1@snapshot1 | ssh root@192.168.2.29 zfs recv -F tank1
root@192.168.2.29's password: 

Check target.

root@u1604b1-m2:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1      0      -  11.2M  -
root@u1604b1-m2:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 12M Apr 20 19:22 W.pdf

Let's add one more file and take a new snapshot.

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/S.pdf /tank1
root@u1604b1-m1:~# zfs snapshot tank1@snapshot2

Incremental send

root@u1604b1-m1:~# zfs send -i tank1@snapshot1 tank1@snapshot2 | ssh root@192.168.2.29 zfs recv tank1
root@192.168.2.29's password: 

Check target

root@u1604b1-m2:~# ls -lh /tank1
total 12M
-rwxr-x--- 1 root root 375K Apr 20 19:27 S.pdf
-rwxr-x--- 1 root root  12M Apr 20 19:22 W.pdf

root@u1604b1-m2:~# zfs list -t snapshot
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank1@snapshot1     9K      -  11.2M  -
tank1@snapshot2      0      -  11.5M  -
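
To repeat the snapshot-and-send cycle on a schedule, something along the lines of the Python sketch below could be run from cron on the source host. It just shells out to the same zfs and ssh commands used above; the dataset name, target host and snapshot prefix are assumptions taken from this test, and you would want ssh keys set up so it can run unattended.

#!/usr/bin/python3
# Rough sketch of an incremental zfs send/recv cycle built on the commands above.
import datetime
import subprocess

DATASET = "tank1"                    # source dataset (from the test above)
TARGET  = "root@192.168.2.29"        # receiving host (from the test above)
PREFIX  = "repl-"                    # snapshot prefix used by this script

def snapshots():
    """Return this script's snapshots on the source dataset, oldest first."""
    out = subprocess.check_output(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation", "-d", "1", DATASET],
        text=True)
    return [s for s in out.splitlines() if s.startswith(DATASET + "@" + PREFIX)]

def replicate():
    existing = snapshots()
    prev = existing[-1] if existing else None
    new = "%s@%s%s" % (DATASET, PREFIX, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
    subprocess.check_call(["zfs", "snapshot", new])

    # Incremental send if we have a previous snapshot, otherwise a full send
    # (with -F on the receive, as needed above when the target dataset already exists).
    send_cmd = ["zfs", "send", "-i", prev, new] if prev else ["zfs", "send", new]
    recv_cmd = ["ssh", TARGET, "zfs", "recv"] + ([] if prev else ["-F"]) + [DATASET]

    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.check_call(recv_cmd, stdin=send.stdout)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate()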


Apr 15

pfsense 2.3 upgrade on Alix

I have been running pfsense on an Alix tiny computer for a long time. I have pretty much not touched it for years apart from the occasional firewall rule change and pfsense auto upgrades. Recently I wanted to upgrade to pfsense 2.3 and had nothing but trouble. I am still not sure if this is a problem with the Alix specs, a bad compact flash card, an unclean power down or the pfsense upgrade procedure.

I document here what I found, but note I ended up backing up the configuration, re-flashing the same 4G compact flash card and restoring the configuration. So far it is working for me, with the caveat that I am still looking into why the web interface bombed a few times. I think it is because you need to disable the automatic update check on the System Info page, but I am not sure. This issue did not affect the firewall functions, so it is not a big deal for me right now.

Symptom
The upgrade would run for a very long time after downloading and reporting that the upgrade had started. When I say long, I mean long. Finally I would get to a point like the one below.

Apr 13 19:16:04	php: config.inc: New alert found: Something went wrong when trying to update the fstab entry. Aborting upgrade.

You can check the full upgrade log under Diagnostics -> NanoBSD. Also worth noting: my first upgrade attempt left me in a non-booting state. I fished out a null modem cable, pressed some keys on the serial console, and after that I could get back to the previous version.

For reference here are some log snippets:

[2.2.4-RELEASE][admin@fw.local.domain]
Broadcast Message from root@fw.local.domain                              
        (no tty) at 8:39 CDT...                                                
NanoBSD Firmware upgrade in progress...                                        
                                                                            
Installing /root/latest.tgz.                  
NanoBSD upgrade starting

[..]

Installing /root/latest.tgz.
SLICE         2
OLDSLICE      1
TOFLASH       ada0s2
COMPLETE_PATH ada0s2a
GLABEL_SLICE  pfSense1
Wed Apr 13 19:08:58 CDT 2016

[..]

dd if=/dev/zero of=/dev/ada0s2 bs=1m count=1
1+0 records in
1+0 records out
1048576 bytes transferred in 0.257870 secs (4066298 bytes/sec)

/usr/bin/gzip -dc /root/latest.tgz | /bin/dd of=/dev/ada0s2 obs=64k
1890945+0 records in
14773+1 records out
968163840 bytes transferred in 311.756151 secs (3105516 bytes/sec)

After the upgrade, the fsck and fdisk/bsdlabel output follows. This looks to me like where things started to go wrong; the file system should not be unclean.

/sbin/fsck_ufs -y /dev/ada0s2a
** /dev/ada0s2a
** Last Mounted on /builder/pfSense-230/tmp/pfSense/_.mnt
** Phase 1 - Check Blocks and Sizes

CANNOT READ BLK: 1485136
CONTINUE? yes

THE FOLLOWING DISK SECTORS COULD NOT BE READ: 1485136,
PARTIALLY TRUNCATED INODE I=93161
SALVAGE? yes

[..]

15850 files, 866146 used, 993212 free (1932 frags, 123910 blocks, 0.1% fragmentation)

***** FILE SYSTEM MARKED DIRTY *****

***** FILE SYSTEM WAS MODIFIED *****

***** PLEASE RERUN FSCK *****

/sbin/tunefs -L pfSense1 /dev/ada0s2a
Checking for post_upgrade_command...

[..]

fdisk: invalid fdisk partition table found
bsdlabel: /dev/ada0s3: no valid label found
bsdlabel: /dev/ada0s3: no valid label found
mount: /dev/ufs/pfSense1: R/W mount of /builder/pfSense-230/tmp/pfSense/_.mnt denied. Filesystem is not clean - run fsck.: Operation not permitted
cp: /tmp/pfSense1/etc/fstab: No such file or directory
sed: /tmp/pfSense1/etc/fstab: No such file or directory
umount: /tmp/pfSense1: not a file system root directory


Just a few references below.

Alix board specs here.
http://www.pcengines.ch/alix2d3.htm

For pfsense 2.3 it sounds like the Alix BIOS needs to be 99f.
https://doc.pfsense.org/index.php/ALIX_BIOS_Update_Procedure

This user describes a similar issue where a re-image worked.
https://forum.pfsense.org/index.php?topic=71760.0

General.
https://doc.pfsense.org/index.php/Installing_pfSense
https://doc.pfsense.org/index.php/Writing_Disk_Images

Installing pfSense on a Compact Flash card:
https://www.get-virtual.net/2014/09/16/build-firewall-appliance/


Apr 12

Nagios on Linux for SPARC

I recently experimented a little with Linux for SPARC (more here: https://oss.oracle.com/projects/linux-sparc/) and found it to be surprisingly stable. One of the environments I support is a pure OVM for SPARC environment with no luxury of Linux, so I am running some open source tools like Nagios, HAproxy etc. on Solaris. Nagios has worked OK but is painful to compile, and there are also some bugs that cause high utilization.

I tried a Linux for SPARC instance, and since it is pretty much like RedHat/Oracle/CentOS, a fair number of packages already exist. Nagios is not one of them, so I compiled it. Suffice to say, installing dependencies from yum and compiling was a breeze compared to Solaris.

You can pretty much follow this doc to the letter:
https://assets.nagios.com/downloads/nagioscore/docs/Installing_Nagios_Core_From_Source.pdf

Things to note:
1. By default the firewall does not allow inbound HTTP.

2. If you have permission issues in the web front end, or something like an internal server error, you can disable SELinux temporarily as a quick test and then configure it properly for the Nagios scripts:

# setenforce 0
# chcon -R -t httpd_sys_content_t /usr/local/nagios

3. Rebuild the plugins with OpenSSL, since I wanted to do HTTPS checks.

# yum install openssl-devel
# pwd
/usr/src/nagios/nagios-plugins-2.1.1

# ./configure --with-openssl --with-nagios-user=nagios --with-nagios-group=nagios
[..]
                    --with-openssl: yes
# make
# make install

# /usr/local/nagios/libexec/check_http -H 10.2.10.33 -S -p 215 
HTTP OK: HTTP/1.1 200 OK - 2113 bytes in 0.017 second response time |time=0.016925s;;;0.000000 size=2113B;;;0

I made an HTTPS check command as follows.

command.cfg
# 'check_https' command definition
define command{
        command_name    check_https
        command_line    $USER1$/check_http -H $HOSTADDRESS$ -S -p $ARG1$
        }

And referenced it as follows.

storage.cfg
define service{
        use                             remote-service         ; Name of service template to use
        host_name                       zfssa1
        service_description             HTTPS
        check_command                   check_https!215
        notifications_enabled           0
        }
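
To sanity check the endpoint without waiting on the Nagios scheduler, a quick standalone probe along the lines below does roughly what check_https!215 does. The host and port are the ones from the example above, and certificate verification is switched off on the assumption that the appliance presents a self-signed certificate.

#!/usr/bin/python3
# Quick standalone HTTPS probe, roughly equivalent to check_https!215 above.
import ssl
import sys
import urllib.request

host, port = "10.2.10.33", 215       # values from the check_http example above

ctx = ssl.create_default_context()
ctx.check_hostname = False           # assumption: self-signed appliance certificate
ctx.verify_mode = ssl.CERT_NONE

try:
    resp = urllib.request.urlopen("https://%s:%s/" % (host, port), timeout=10, context=ctx)
    print("HTTPS OK: status %d, %d bytes" % (resp.status, len(resp.read())))
except Exception as exc:
    print("HTTPS CRITICAL: %s" % exc)
    sys.exit(2)                      # exit code 2 means CRITICAL in Nagios plugin terms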


Mar 12

Btrfs Replication

Since btrfs has send and receive capabilities, I took a look at it. The title says replication, but if you are interested in sophisticated, enterprise-level storage replication for disaster recovery, or better yet mature data set cloning for non-production instances, you will need to look further. For example, the Oracle ZFS appliance has a mature replication engine built on send and receive, but it handles all the replication magic for you. I am not aware of commercial solutions built on btrfs that have the mature functionality the ZFS appliance offers yet. Note we are not just talking replication but also snapshot cloning, sharing and protection of snapshots on the target end. So for now here is what I have tested with pure btrfs send and receive.

Some details on machine 1:

root@u1604b1-m1:~# more /etc/issue
Ubuntu Xenial Xerus (development branch) \n \l

root@u1604b1-m1:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
[..]
/dev/sda1       7.5G  3.9G  3.1G  56% /
/dev/sda1       7.5G  3.9G  3.1G  56% /home
[..]

root@u1604b1-m1:~# mount
[..]
/dev/sda1 on / type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
/dev/sda1 on /home type btrfs (rw,relatime,space_cache,subvolid=258,subvol=/@home)
[..]

root@u1604b1-m1:~# btrfs --version
btrfs-progs v4.4

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 47 top level 5 path @
ID 258 gen 47 top level 5 path @home

Test ssh to machine 2:

root@u1604b1-m1:~# ssh root@192.168.2.29 uptime
root@192.168.2.29's password: 
 10:33:23 up 5 min,  1 user,  load average: 0.22, 0.37, 0.19

Machine 2 subvolumes before we receive:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 40 top level 5 path @
ID 258 gen 40 top level 5 path @home

Create a subvolume, add a file and take a snapshot:

root@u1604b1-m1:~# btrfs subvolume create /tank1
Create subvolume '//tank1'

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 53 top level 5 path @
ID 258 gen 50 top level 5 path @home
ID 264 gen 53 top level 257 path tank1

root@u1604b1-m1:~# ls /tank1

root@u1604b1-m1:~# touch /tank1/rr_test1
root@u1604b1-m1:~# ls -l /tank1/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

root@u1604b1-m1:~# btrfs subvolume snapshot /tank1 /tank1_snapshot
Create a snapshot of '/tank1' in '//tank1_snapshot'

root@u1604b1-m1:~# ls -l /tank1_snapshot/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 63 top level 5 path @
ID 258 gen 58 top level 5 path @home
ID 264 gen 59 top level 257 path tank1
ID 265 gen 59 top level 257 path tank1_snapshot

Delete a snapshot:

root@u1604b1-m1:~# btrfs subvolume delete /tank1_snapshot
Delete subvolume (no-commit): '//tank1_snapshot'

root@u1604b1-m1:~# btrfs subvolume list /
ID 257 gen 64 top level 5 path @
ID 258 gen 58 top level 5 path @home
ID 264 gen 59 top level 257 path tank1

Take a read-only snapshot and send to machine 2:

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot

Create a readonly snapshot of '/tank1' in '//tank1_snapshot'

root@u1604b1-m1:~# btrfs send /tank1_snapshot | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot
root@192.168.2.29's password: 
At subvol tank1_snapshot

Machine 2 after receiving snapshot1:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 61 top level 5 path @
ID 258 gen 60 top level 5 path @home
ID 264 gen 62 top level 257 path tank1_snapshot

root@u1604b1-m2:~# ls -l /tank1_snapshot/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1

Create one more file:

root@u1604b1-m1:~# touch /tank1/rr_test2

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot2
Create a readonly snapshot of '/tank1' in '//tank1_snapshot2'

root@u1604b1-m1:~# btrfs send /tank1_snapshot2 | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot2
root@192.168.2.29's password: 
At subvol tank1_snapshot2

Machine 2 after receiving snapshot 2:

root@u1604b1-m2:~# btrfs subvolume list /
ID 257 gen 65 top level 5 path @
ID 258 gen 60 top level 5 path @home
ID 264 gen 62 top level 257 path tank1_snapshot
ID 265 gen 66 top level 257 path tank1_snapshot2

root@u1604b1-m2:~# ls -l /tank1_snapshot2/
total 0
-rw-r--r-- 1 root root 0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root 0 Mar 10 10:53 rr_test2

Incremental send (adding -v for now to see more detail):

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot3
Create a readonly snapshot of '/tank1' in '//tank1_snapshot3'

root@u1604b1-m1:~# btrfs send -vp /tank1_snapshot2 /tank1_snapshot3 | ssh root@192.168.2.29 "btrfs receive /" 
At subvol /tank1_snapshot3
BTRFS_IOC_SEND returned 0
joining genl thread
root@192.168.2.29's password: 
At snapshot tank1_snapshot3

Using larger files to see the effect of incremental sends better:

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/ISO/ubuntu-gnome-15.10-desktop-amd64.iso /tank1/
root@u1604b1-m1:~# du -sh /tank1
1.1G	/tank1

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot6
Create a readonly snapshot of '/tank1' in '//tank1_snapshot6'

root@u1604b1-m1:~# time btrfs send -vp /tank1_snapshot5 /tank1_snapshot6 | ssh root@192.168.2.29 "btrfs receive -v /"  
At subvol /tank1_snapshot6
root@192.168.2.29's password: 
receiving snapshot tank1_snapshot6 uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, ctransid=272 parent_uuid=ec3f1fb5-9bed-3e4c-9c5b-a6c586b10531, parent_ctransid=201
BTRFS_IOC_SEND returned 0
joining genl thread
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, stransid=272
At snapshot tank1_snapshot6

real	1m10.578s
user	0m0.696s
sys	0m16.064s

Machine 2 after snapshot6:

root@u1604b1-m2:~# ls -lh /tank1_snapshot6
total 1.1G
-rw-r--r-- 1 root root    0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root   22 Mar 10 12:19 rr_test2
-rwxr-x--- 1 root root 1.1G Mar 10 13:04 ubuntu-gnome-15.10-desktop-amd64.iso

Copy another large file and take the next snapshot:

root@u1604b1-m1:~# cp /media/sf_E_DRIVE/ISO/ubuntu-gnome-15.10-desktop-i386.iso /tank1/

root@u1604b1-m1:~# btrfs subvolume snapshot -r /tank1 /tank1_snapshot7
Create a readonly snapshot of '/tank1' in '//tank1_snapshot7'

root@u1604b1-m1:~# time btrfs send -vp /tank1_snapshot6 /tank1_snapshot7 | ssh root@192.168.2.29 "btrfs receive -v /" 
At subvol /tank1_snapshot7
root@192.168.2.29's password: 
receiving snapshot tank1_snapshot7 uuid=5c255311-0f60-4149-91f7-99d9d5acf64c, ctransid=276 parent_uuid=d38490b3-e6ee-3f41-b63d-460d11f8e757, parent_ctransid=272

BTRFS_IOC_SEND returned 0
joining genl thread
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=5c255311-0f60-4149-91f7-99d9d5acf64c, stransid=276
At snapshot tank1_snapshot7

real	1m17.393s
user	0m0.640s
sys	0m16.716s

Machine 2 after snapshot7:

root@u1604b1-m2:~# ls -lh /tank1_snapshot7
total 2.0G
-rw-r--r-- 1 root root    0 Mar 10 10:38 rr_test1
-rw-r--r-- 1 root root   22 Mar 10 12:19 rr_test2
-rwxr-x--- 1 root root 1.1G Mar 10 13:04 ubuntu-gnome-15.10-desktop-amd64.iso
-rwxr-x--- 1 root root 1.1G Mar 10 13:07 ubuntu-gnome-15.10-desktop-i386.iso

Experiment: on the target I sent a snapshot into a new btrfs subvolume, so it in effect becomes independent. This does not really help us with cloning, since with large data sets it takes too long and duplicates the space, which nullifies why we like copy-on-write in the first place.

root@u1604b1-m2:~# btrfs subvolume create /tank1_clone
Create subvolume '//tank1_clone'

root@u1604b1-m2:~# btrfs send /tank1_snapshot3 | btrfs receive  /tank1_clone
At subvol /tank1_snapshot3
At subvol tank1_snapshot3

This was just my initial look at what btrfs is capable of and how similar it is to ZFS and the ZFS appliance functionality.

So far at least it seems promising that send and receive are being addressed in btrfs, but I don't think you can easily roll your own solution for A) replication and B) writable snapshots (clones) with btrfs yet. There is too much work involved in building the replication discipline and framework around it; a small example of what I mean follows below.
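
As an illustration, here is a rough Python sketch of the kind of housekeeping job such a framework would need: it prunes old read-only snapshots, keeping only the newest few that follow the tank1_snapshotN naming used in this test. The keep count and the assumption that everything lives under / are mine, and you must of course always keep the snapshot that will serve as the parent of the next incremental send, on both the source and the target.

#!/usr/bin/python3
# Rough housekeeping sketch: keep only the newest N snapshots named tank1_snapshotN.
import re
import subprocess

PREFIX = "tank1_snapshot"    # snapshot naming from the test above
KEEP = 3                     # how many of the newest snapshots to keep

def list_snapshots():
    out = subprocess.check_output(["btrfs", "subvolume", "list", "/"], text=True)
    snaps = []
    for line in out.splitlines():
        # Lines look like: ID 265 gen 66 top level 257 path tank1_snapshot2
        path = line.split("path ", 1)[1]
        m = re.match(r"%s(\d*)$" % re.escape(PREFIX), path)
        if m:
            snaps.append((int(m.group(1) or 0), "/" + path))   # assumes the fs is mounted at /
    return [p for _, p in sorted(snaps)]

def prune():
    for path in list_snapshots()[:-KEEP]:
        subprocess.check_call(["btrfs", "subvolume", "delete", path])

if __name__ == "__main__":
    prune()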

A few links I came across that are useful when looking at btrfs and the topics of replication and database cloning:

1. http://rockstor.com/blog/snapshots/data-replication-with-rockstor/
2. http://blog.contractoracle.com/2013/02/oracle-database-on-btrfs-reduce-costs.html
3. http://www.cybertec.at/2015/01/forking-databases-the-art-of-copying-without-copying/
4. https://bdrouvot.wordpress.com/2014/04/25/reduce-resource-consumption-and-clone-in-seconds-your-oracle-virtual-environment-on-your-laptop-using-linux-containers-and-btrfs/
5. https://docs.opensvc.com/storage.btrfs.html
6. https://ilmarkerm.eu/blog/2014/08/cloning-pluggable-database-with-custom-snapshot/
7. http://blog.ronnyegner-consulting.de/2010/02/17/creating-database-clones-with-zfs-really-fast/
8. http://www.seedsofgenius.net/solaris/zfs-vs-btrfs-a-reference


Jan 16

Get Third Sunday of The Month

Sometimes you have to rely on a routine to reliably provide dates. In this example I want to know for certain what the third Sunday of every month is. There could be various reasons for this, but for me it is mostly to predict maintenance windows or to use during scripting for automation.

$ cat /tmp/thirdweek.py 
from datetime import date, timedelta

def allsundays(year):
   d = date(year, 1, 1)                    # January 1st
   d += timedelta(days = 6 - d.weekday())  # First Sunday
   while d.year == year:
      yield d
      d += timedelta(days = 7)

def SundaysInMonth(year,month):
  d = date(year,month,1)		  # First day of the month
  d += timedelta(days = 6 - d.weekday())  # First Sunday
  while d.year == year and d.month == month:
      yield d
      d += timedelta(days = 7)

def xSundayOfMonth(x,Sundays):
  i=0
  for Sunday in Sundays:
    i=i+1
    if i == x: 
      #print "%s - %s" % (i,Sunday)
      return Sunday

print "All Sundays in Year"
for d in allsundays(2016):
   print d

print "All Third Sundays in Year"
for month in range(1,13): 
  Sundays=SundaysInMonth(2016,month)
  print xSundayOfMonth(3,Sundays)
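
For what it is worth, the standard calendar module can get to the same answer with a little less code. A small variant of the above, printing the third Sunday of each month of 2016:

import calendar

def third_sunday(year, month):
    # itermonthdates pads with days from neighbouring months, so filter on month too
    sundays = [d for d in calendar.Calendar().itermonthdates(year, month)
               if d.weekday() == 6 and d.month == month]
    return sundays[2]        # index 2 = the third Sunday

for month in range(1, 13):
    print(third_sunday(2016, month))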


Jan 04

Check Logfiles Only a Few Minutes Back

This is an update post. Previously I had a post here: http://blog.ls-al.com/check-logfiles-for-recent-entries-only/

The code has been problematic around the start of a new year because the log entries lack a year. I updated the code a little to account for the year ticking over. I may still need to come up with a better way, but the version below seems to work OK.

#!/usr/bin/python
#

#: Script Name  : checkLogs.py
#: Version      : 0.0.1.1
#: Description  : Check messages for last x minutes.  Used in conjunction with checkLogs.sh and a cron schedule

from datetime import datetime, timedelta

#suppressPhrases = ['ssd','offline']
suppressPhrases = []

#now = datetime(2015,3,17,7,28,00)						## Get time right now. ie cron job execution
now = datetime.now()
day_of_year = datetime.now().timetuple().tm_yday   		## Used for special case when year ticks over. Older log entries should be one year older.

## How long back to check. Making it 11 mins because cron runs every 10 mins
checkBack = 11

lines = []

#print "log entries newer than " + now.strftime('%b %d %H:%M:%S') + " minus " + str(checkBack) + " minutes"

with open('/var/adm/messages', 'r') as f:
    for line in f:
      myDate = str(now.year) + " " + line[:15]          ## Solaris syslog format like this: Mar 11 12:47:23 so need to add year

      if day_of_year >= 1 and day_of_year <= 31:        ## Brain dead log has no year so special case during January
        if not "Jan" in myDate:         #2015 Dec 30
          myDate = str(now.year -1) + " " + line[:15]

      if myDate[3] == " ":								## What about "Mar  1" having double space vs "Mar 15". That will break strptime %d.
        myDate = myDate.replace(myDate[3],"0")			## zero pad string position 4 to make %d work?

      #print "myDate: %s and now: %s" % (myDate,now)
      lt = datetime.strptime(myDate,'%Y %b %d %H:%M:%S')
      diff = now - lt
      if diff.days <= 0:
        if lt > now - timedelta(minutes=checkBack):
          #print myDate + " --- diff: " + str(diff)
          match = False
          for s in suppressPhrases:
            i = line.find(s)
            if i > -1:
              match = True
          if not match:
            lines.append(line)

if lines:
    message = '\n'.join(lines)
    print message										    # do some grepping for my specific errors here.. send message per mail...
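
The reason for the January special case is that strptime has no year to work with: a December entry read in early January would otherwise be stamped with the new year and appear to be almost a year in the future, so it would never match the check-back window. A tiny illustration of the edge case, with made-up values:

from datetime import datetime

# Pretend "now" is just after midnight on New Year's Day.
now = datetime(2016, 1, 1, 0, 5, 0)
line = "Dec 31 23:58:12 host syslogd: example entry"     # made-up Solaris-style line, no year

naive = datetime.strptime(str(now.year) + " " + line[:15], "%Y %b %d %H:%M:%S")
print(naive)        # 2016-12-31 23:58:12 -- almost a year in the future, wrong

fixed = datetime.strptime(str(now.year - 1) + " " + line[:15], "%Y %b %d %H:%M:%S")
print(fixed)        # 2015-12-31 23:58:12 -- a few minutes old, which is what we want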


Dec 14

Solaris Multipath Incorrect Totals

From time to time we notice that some LUNs are not optimal. It could be because of off-lining a LUN or changes on the switches; I am not sure exactly why it happens. If multipathing is not showing the correct path counts, you may need to run cfgadm.

See how some of the LUNs here are showing only 4 paths; we expect 8.

# mpathadm list lu
        /dev/rdsk/c0t5000CCA04385ED60d0s2
                Total Path Count: 1
                Operational Path Count: 1
[..]
        /dev/rdsk/c0t600144F09D7311B500005605A40C0006d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000561ED8AB0007d0s2
                Total Path Count: 8
                Operational Path Count: 8
[..]
        /dev/rdsk/c0t600144F09D7311B50000538507080021d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000534309C40011d0s2
                Total Path Count: 4
                Operational Path Count: 4
        /dev/rdsk/c0t600144F09D7311B500005342FE86000Fd0s2
                Total Path Count: 4
                Operational Path Count: 4
        /dev/rdsk/c0t600144F09D7311B5000053D13E130029d0s2
                Total Path Count: 4
                Operational Path Count: 4
        /dev/rdsk/c0t600144F09D7311B50000566AE1CC0008d0s2
                Total Path Count: 4
                Operational Path Count: 4

I tried a few things and it looks like cfgadm is what worked. It could also be a couple of other things that triggered it, like destroying an unused LUN or changing a recently added LUN's target group to be more restrictive, but I doubt it. Most likely it was cfgadm.

# cfgadm -o show_SCSI_LUN -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             fc           connected    unconfigured unknown
c8                             fc-fabric    connected    configured   unknown
c8::20520002ac000f02,254       ESI          connected    configured   unknown
c8::21000024ff3db11d,0         disk         connected    configured   unknown
c8::21000024ff3db11d,1         disk         connected    configured   unknown
[..]
c8::21000024ff57d646,54        disk         connected    configured   unknown
c8::21000024ff57d646,56        disk         connected    configured   unknown
c8::21520002ac000f02,254       ESI          connected    configured   unknown
c8::22520002ac000f02,254       ESI          connected    configured   unknown
c8::23520002ac000f02,254       ESI          connected    configured   unknown
c9                             fc           connected    unconfigured unknown
c13                            fc-fabric    connected    configured   unknown
c13::20510002ac000f02,254      ESI          connected    configured   unknown
c13::21000024ff3db11c,0        disk         connected    configured   unknown
[..]
c13::21000024ff3db11c,44       disk         connected    configured   unknown
c13::21000024ff3db11c,46       disk         connected    configured   unknown
c13::21000024ff3db11c,48       disk         connected    configured   unknown
c13::21000024ff3db11c,50       disk         connected    unconfigured unknown
c13::21000024ff3db11c,52       disk         connected    unconfigured unknown
c13::21000024ff3db11c,54       disk         connected    unconfigured unknown
c13::21000024ff3db11c,56       disk         connected    configured   unknown
c13::21000024ff3db1b4,0        disk         connected    configured   unknown
c13::21000024ff3db1b4,1        disk         connected    configured   unknown
[..]
c13::21510002ac000f02,254      ESI          connected    configured   unknown
c13::22510002ac000f02,254      ESI          connected    configured   unknown
c13::23510002ac000f02,254      ESI          connected    configured   unknown

After I ran cfgadm:

# mpathadm list lu
        /dev/rdsk/c0t5000CCA04385ED60d0s2
                Total Path Count: 1
                Operational Path Count: 1
[..]
        /dev/rdsk/c0t600144F09D7311B500005605A40C0006d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000561ED8AB0007d0s2
                Total Path Count: 8
                Operational Path Count: 8
[..]
        /dev/rdsk/c0t600144F09D7311B5000053BE90620024d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000533C012A0009d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000533AAFF00007d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000538507080021d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000534309C40011d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B500005342FE86000Fd0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B5000053D13E130029d0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c0t600144F09D7311B50000566AE1CC0008d0s2
                Total Path Count: 8
                Operational Path Count: 8

I can also see the changes in the messages log:

# dmesg
Nov  9 03:59:21 solaris11 mac: [ID 469746 kern.info] NOTICE: ldoms-vsw0.vport15 registered
Nov  9 05:41:55 solaris11 scsi: [ID 583861 kern.info] ssd98 at scsi_vhci0: unit-address g600144f09d7311b50000534309c40011: f_tpgs
[..]
Dec 14 09:38:01 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000566ae1cc0008 (ssd101) multipath status: optimal: path 343 fp16/ssd@w21000024ff3db1b5,38 is standby
Dec 14 09:38:01 solaris11 last message repeated 1 time
Dec 14 09:38:01 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b5000053d13e130029 (ssd100) multipath status: optimal: path 344 fp16/ssd@w21000024ff3db1b5,36 is standby
Dec 14 09:38:01 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b500005342fe86000f (ssd99) multipath status: optimal: path 345 fp16/ssd@w21000024ff3db1b5,34 is standby
Dec 14 09:38:01 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000534309c40011 (ssd98) multipath status: optimal: path 346 fp16/ssd@w21000024ff3db1b5,32 is standby
Dec 14 09:38:07 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000566ae1cc0008 (ssd101) multipath status: optimal: path 347 fp16/ssd@w21000024ff3db11d,38 is standby
Dec 14 09:38:07 solaris11 last message repeated 1 time
Dec 14 09:38:07 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b5000053d13e130029 (ssd100) multipath status: optimal: path 348 fp16/ssd@w21000024ff3db11d,36 is standby
Dec 14 09:38:08 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b500005342fe86000f (ssd99) multipath status: optimal: path 349 fp16/ssd@w21000024ff3db11d,34 is standby
Dec 14 09:38:08 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000534309c40011 (ssd98) multipath status: optimal: path 350 fp16/ssd@w21000024ff3db11d,32 is standby
Dec 14 09:38:16 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000566ae1cc0008 (ssd101) multipath status: optimal: path 351 fp21/ssd@w21000024ff3db1b4,38 is standby
Dec 14 09:38:16 solaris11 last message repeated 1 time
Dec 14 09:38:17 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b5000053d13e130029 (ssd100) multipath status: optimal: path 352 fp21/ssd@w21000024ff3db1b4,36 is standby
Dec 14 09:38:17 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b500005342fe86000f (ssd99) multipath status: optimal: path 353 fp21/ssd@w21000024ff3db1b4,34 is standby
Dec 14 09:38:17 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000534309c40011 (ssd98) multipath status: optimal: path 354 fp21/ssd@w21000024ff3db1b4,32 is standby
Dec 14 09:38:22 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000566ae1cc0008 (ssd101) multipath status: optimal: path 355 fp21/ssd@w21000024ff3db11c,38 is standby
Dec 14 09:38:22 solaris11 last message repeated 1 time
Dec 14 09:38:22 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b5000053d13e130029 (ssd100) multipath status: optimal: path 356 fp21/ssd@w21000024ff3db11c,36 is standby
Dec 14 09:38:23 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b500005342fe86000f (ssd99) multipath status: optimal: path 357 fp21/ssd@w21000024ff3db11c,34 is standby
Dec 14 09:38:23 solaris11 genunix: [ID 483743 kern.info] /scsi_vhci/ssd@g600144f09d7311b50000534309c40011 (ssd98) multipath status: optimal: path 358 fp21/ssd@w21000024ff3db11c,32 is standby
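
To spot this condition without eyeballing the whole listing, a small Python sketch along these lines can parse mpathadm list lu and flag any LUN reporting fewer paths than expected. The expected count of 8 is specific to this environment.

#!/usr/bin/python
# Rough sketch: flag LUNs whose total path count differs from what we expect.
# EXPECTED is specific to this environment; single-path local disks will also
# be reported and can simply be ignored or filtered out.
import subprocess

EXPECTED = 8

out = subprocess.check_output(["mpathadm", "list", "lu"]).decode()

lun = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("/dev/rdsk/"):
        lun = line
    elif line.startswith("Total Path Count:") and lun is not None:
        paths = int(line.split(":")[1])
        if paths != EXPECTED:
            print("%s: %d paths (expected %d)" % (lun, paths, EXPECTED))
        lun = None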


Dec 10

Solaris Snoop on File Access

If you find yourself trying to figure out where your operating system is spending time on reads and writes, try this little DTrace gem. The script is here: http://dtracebook.com/index.php/File_System:rwsnoop

I ran it like below. "unknown" is socket access, and filtering out sshd and grep explains itself.

# ./rwsnoop.dtrace | egrep -v "sshd|grep|unknown"
  UID    PID CMD          D   BYTES FILE
    0    637 utmpd        R       4 /var/adm/wtmpx
  324   2884 java         W      77 /scratch/agtst1ML/MemoryMonitorLog.log
  324   2884 java         W      77 /scratch/agtst1ML/MemoryMonitorLog.log
  324   2884 java         W      77 /scratch/agtst1ML/MemoryMonitorLog.log
  324   2884 java         W      16 /devices/pseudo/poll@0:poll
  324   2884 java         W       8 /devices/pseudo/poll@0:poll
    1    593 nfsmapid     R      78 /etc/resolv.conf
    1    593 nfsmapid     R       0 /etc/resolv.conf
  324   2884 java         W      77 /scratch/agtst1ML/MemoryMonitorLog.log
    0      1 init         R    1006 /etc/inittab
    0      1 init         R       0 /etc/inittab
    0      1 init         W     412 /etc/svc/volatile/init-next.state
    0      1 init         W     412 /etc/svc/volatile/init-next.state
    0      1 init         R    1006 /etc/inittab
    0      1 init         R       0 /etc/inittab
    1    180 kcfd         R     976 /usr/lib/security/pkcs11_kernel.so.1
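
To turn that stream into a quick per-file summary, the rwsnoop output can be piped through a small Python sketch like the one below, which totals bytes read and written per file based on the column layout shown above. The script name is just an example.

#!/usr/bin/python
# Summarise rwsnoop output: total bytes read and written per file.
# Usage: ./rwsnoop.dtrace | ./rwsnoop_sum.py    (stop with Ctrl-C to print the totals)
import sys
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])        # file -> [bytes read, bytes written]

try:
    for line in sys.stdin:
        parts = line.split(None, 5)         # UID PID CMD D BYTES FILE
        if len(parts) < 6 or parts[0] == "UID":
            continue                        # skip the header and short lines
        direction, nbytes, path = parts[3], parts[4], parts[5].strip()
        if not nbytes.isdigit():
            continue
        totals[path][0 if direction == "R" else 1] += int(nbytes)
except KeyboardInterrupt:
    pass

for path, counts in sorted(totals.items(), key=lambda kv: -sum(kv[1])):
    print("%12d R %12d W  %s" % (counts[0], counts[1], path))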
