Mar 26

Solaris Change File Ownership as non root Account

If you have a process running as a non-root account, or just need to enable a normal user to take ownership of files they don't own, here is what you need to do.

My first attempt was changing a file that was owned by root. That is not what I needed, but as shown here, it requires a privilege called "ALL".

 
$ ppriv -De chown ebs_a /tmp/file1.txt
chown[999]: missing privilege "ALL" (euid = 304, syscall = 16) needed at tmp_setattr+0x60
chown: /tmp/file1.txt: Not owner

This attempt changes a file owned by nobody, which is what my process will require.

$ ppriv -De chown ebs_a /tmp/file1.txt
chown[1034]: missing privilege "file_chown" (euid = 304, syscall = 16) needed at tmp_setattr+0x60
chown: /tmp/file1.txt: Not owner

So, as shown above, we need file_chown. I am adding that privilege as shown below. You will note I already have some other privileges added for different requirements.

# grep ^ebs_a  /etc/user_attr
ebs_a::::type=normal;defaultpriv=basic,sys_mount,sys_nfs,net_privaddr,file_chown;auths=solaris.smf.manage.xvfb,solaris.smf.value.xvfb

OK, now we try again, and it works.

# su - ebs_a
[..]
$ ppriv -De chown ebs_a /tmp/file1.txt

$ ls -l /tmp/file1.txt
-rw-r--r--   1 ebs_a root           0 Mar 25 06:24 /tmp/file1.txt

And of course you no longer need ppriv at this point; a plain chown should just work.
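
If you ever need to verify the user_attr entry from a script, parsing the line is straightforward. A minimal Python sketch (an illustration I am adding here, not part of the original setup):

```python
def has_default_priv(user_attr_line, priv):
    """Check whether `priv` appears in the defaultpriv list of a user_attr(4) entry."""
    # Fields are colon-separated; the key=value attributes live in the 5th field.
    attrs = user_attr_line.strip().split(":")[4]
    for kv in attrs.split(";"):
        key, _, value = kv.partition("=")
        if key == "defaultpriv":
            return priv in value.split(",")
    return False

line = ("ebs_a::::type=normal;defaultpriv=basic,sys_mount,sys_nfs,"
        "net_privaddr,file_chown;auths=solaris.smf.manage.xvfb")
```

A quick check against the entry above confirms file_chown is in the default privilege set.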


Mar 26

Nagios Email Notifications with Comments

If you prefer more detail in the Nagios email notifications, here is how to add those comments. For example, when someone acknowledges a service, you may want to see the comment they added during the acknowledgement.

# pwd
/usr/local/nagios/etc/objects
# more commands.cfg
[..]
define command{
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n\nACK Comment: $SERVICEACKCOMMENT$\n\nComment: $NOTIFICATIONCOMMENT$\n" | /usr/bin/mailx -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$

Note: I added $SERVICEACKCOMMENT$ and $NOTIFICATIONCOMMENT$, but I believe that in newer versions of Nagios $SERVICEACKCOMMENT$ is deprecated and you only need $NOTIFICATIONCOMMENT$.

Link: http://nagios.sourceforge.net/docs/3_0/macrolist.html
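
For testing the message layout outside of Nagios, the macro expansion can be mimicked with a tiny Python helper. This is just a sketch; the macro names come from the macro list linked above, and the values here are made up:

```python
def expand_macros(template, macros):
    """Replace Nagios-style $NAME$ macros in a template string."""
    for name, value in macros.items():
        template = template.replace("$" + name + "$", value)
    return template

# Hypothetical values, only to preview the notification body.
body = expand_macros(
    "Service: $SERVICEDESC$\nState: $SERVICESTATE$\nACK Comment: $SERVICEACKCOMMENT$\n",
    {"SERVICEDESC": "HTTP", "SERVICESTATE": "CRITICAL",
     "SERVICEACKCOMMENT": "restarting apache"},
)
```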


Mar 19

Solaris Mount NFS Share as Non Root User

Since it took me a while to get this working, I made a note of how. Giving a normal user the Primary Administrator role did work, but even the System Administrator role did not allow me to mount and unmount NFS.

The roles I looked at:

# grep Adminis /etc/security/prof_attr
[..]
Primary Administrator:::Can perform all administrative tasks:auths=solaris.*,solaris.grant;help=RtPriAdmin.html
Service Operator:::Administer services:auths=solaris.smf.manage,solaris.smf.modify.framework
System Administrator:::Can perform most non-security administrative tasks:profiles=Audit Review,Printer Management,Cron Management,Device Management,File System Management,Mail Management,Maintenance and Repair,Media Backup,Media Restore,Name Service Management,Network Management,Object Access Management,Process Management,Software Installation,User Management,Project Management,All;help=RtSysAdmin.html

The error was like this:

$ pfexec /sbin/mount /apps
nfs mount: insufficient privileges

Below is what I needed to do. The xvfb entries have nothing to do with NFS; I needed them for an X display, so I am just leaving them in.

# cat /etc/user_attr
[..]
ebs_a::::type=normal;defaultpriv=basic,sys_mount,sys_nfs,net_privaddr;auths=solaris.smf.manage.xvfb,solaris.smf.value.xvfb

$ ppriv $$
28423:  -bash
flags = <none>
        E: basic,net_privaddr,sys_mount,sys_nfs
        I: basic,net_privaddr,sys_mount,sys_nfs
        P: basic,net_privaddr,sys_mount,sys_nfs
        L: all
$ pfexec /sbin/umount /apps
$ pfexec /sbin/mount /apps

$ pfexec svcadm disable svc:/application/xvfb:default
$ pfexec svcadm enable svc:/application/xvfb:default
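
If you want a script to verify that the shell really picked up the extra privileges, the ppriv output can be parsed. A minimal sketch, using the transcript above as sample input (just an illustration, not something the setup requires):

```python
def effective_privs(ppriv_output):
    """Extract the effective (E:) privilege set from `ppriv $$` output."""
    for line in ppriv_output.splitlines():
        line = line.strip()
        if line.startswith("E:"):
            return set(line[2:].strip().split(","))
    return set()

sample = """28423:  -bash
flags = <none>
        E: basic,net_privaddr,sys_mount,sys_nfs
        I: basic,net_privaddr,sys_mount,sys_nfs
"""
privs = effective_privs(sample)
```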


Mar 15

Check Logfiles For Recent Entries Only

Frequently I have a cron job that checks for specific entries in log files, but I want to avoid being notified about something already checked. For example, I want my 10-minute cron job to look only at entries from the most recent 10 minutes.

Here is what I did in Python.

from datetime import datetime, timedelta

## Get time right now. ie cron job execution
#now = datetime(2015,3,15,8,55,00)
now = datetime.now()

## How long back to check. Making it 11 mins because cron runs every 10 mins
checkBack = 11

lines = []

print "log entries newer than " + now.strftime('%b %d %H:%M:%S') + " minus " + str(checkBack) + " minutes"

with open('/var/log/syslog', 'r') as f:
    for line in f:
      ## Linux syslog format like this:
      ## Mar 15 08:50:01 EP45-DS3L postfix/sendmail[6492]: fatal
      ## Brain dead log has no year. So this hack will not work close to year ticking over
      myDate = str(now.year) + " " + line[:15]

      ## "Mar  1" has a double space while "Mar 15" does not, and that breaks strptime %d.
      ## Zero-pad the day field; in myDate ("2015 Mar  1 ...") the day's tens digit is position 9.
      if myDate[9] == " ":
        myDate = myDate[:9] + "0" + myDate[10:]

      lt = datetime.strptime(myDate,'%Y %b %d %H:%M:%S')
      diff = now - lt
      if diff.days <= 0:
        if lt > now - timedelta(minutes=checkBack):
          # print myDate + " --- diff: " + str(diff)
          lines.append(line)

if lines:
    message = ''.join(lines)
    ## do some grepping for my specific errors here...
    ## then send message per mail...
    print message

Just for reference, here is an older test where no year is used. It does a plain string compare, and I have not tested it thoroughly. Most likely it will fail when the month ticks over, since 'Apr' does not compare greater than 'Mar' as a string. Midnight is a similar problem: '23:59' compares greater than '00:00'.

from datetime import datetime, timedelta
now = datetime.now()
lookback = timedelta(minutes=5)

## Linux syslog format "Mar 15 07:30:10 ..."
## Probably need to zero pad string position 4 to make %d work?
oldest = (now - lookback).strftime('%b %d %H:%M:%S')

with open('/var/log/syslog', 'r') as f:
    for line in f:
        if line[:15] > oldest:
          print "entry: " + line[:15] + " --- " + line[16:50]
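
The month-rollover problem is easy to demonstrate: as strings, 'Apr' sorts before 'Mar', so a lexical compare treats a newer April entry as older than a March cutoff:

```python
# Lexically, "Apr ..." < "Mar ..." because 'A' < 'M', so an entry from
# April 1st looks *older* than a cutoff from March 31st.
newer_entry = "Apr 01 00:05:00"
cutoff = "Mar 31 23:55:00"
looks_newer = newer_entry > cutoff  # False: the string compare gets it wrong
```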


Mar 11

Solaris Boot Environment Size

If you have wondered why your root file system (rpool) on Solaris is out of space, and you have double-checked that none of the usual culprits are eating the space, you may need to check whether you captured something large in your snapshots. For the most part I do not take snapshots of the root file system, but if you apply Solaris updates (SRUs) you are definitely using snapshots. This is what I had to do to reclaim space.

Warning: you are on your own if you ruin your boot OS. I have the luxury of the root OS being on a SAN LUN with separate snapshot technology, so I can recover quickly in case something goes wrong.

Space Before

# df -h
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/solaris-3    24G   9.9G       3.9M   100%    /

Let's create a new boot environment.

# beadm create solaris-4
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 NR     /          20.76G static 2014-11-21 17:42
solaris-4 -      -          74.0K  static 2015-03-11 12:57
# beadm activate solaris-4
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 N      /          67.0K  static 2014-11-21 17:42
solaris-4 R      -          20.76G static 2015-03-11 12:57

As you can see, that did nothing for us; it carried the 20G over.

# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-4@install                   941M      -  2.03G  -
rpool/ROOT/solaris-4@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-4@2015-03-11-19:57:34       170K      -  2.31G  -
rpool/ROOT/solaris-4/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-4/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-4/var@2015-03-11-19:57:34    77K      -   146M  -
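
A quick way to see how much the snapshots hold is to total the USED column. A small Python sketch written for this post, with sizes from the listing above as sample input:

```python
def to_bytes(size):
    """Convert a zfs human-readable size such as '7.82G' or '941M' to bytes."""
    units = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
    if size[-1] in units:
        return int(float(size[:-1]) * units[size[-1]])
    return int(size)

# NAME / USED columns from `zfs list -t snapshot` above
sample = """rpool/ROOT/solaris-4@install                   941M
rpool/ROOT/solaris-4@2014-06-17-16:56:58      7.82G
rpool/ROOT/solaris-4@2015-03-11-19:57:34       170K"""

total = sum(to_bytes(line.split()[1]) for line in sample.splitlines())
```

For these three snapshots the total lands between 8 and 9 GiB, most of it in the 2014 SRU snapshot.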

This step was redundant, but since I was not sure whether I had left a solaris-4 BE plus snapshot on this system from earlier experiments, I wanted one I knew for sure was brand new.

# beadm create solaris-5
# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-3@2015-03-11-20:02:33          0      -  2.31G  -
rpool/ROOT/solaris-3/var@2015-03-11-20:02:33      0      -   146M  -
rpool/ROOT/solaris-4@install                   941M      -  2.03G  -
rpool/ROOT/solaris-4@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-4@2015-03-11-19:57:34       170K      -  2.31G  -
rpool/ROOT/solaris-4/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-4/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-4/var@2015-03-11-19:57:34    77K      -   146M  -
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 N      /          790.0K static 2014-11-21 17:42
solaris-4 R      -          20.76G static 2015-03-11 12:57
solaris-5 -      -          75.0K  static 2015-03-11 13:02
# beadm activate solaris-5
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 N      /          67.0K  static 2014-11-21 17:42
solaris-4 -      -          260.0K static 2015-03-11 12:57
solaris-5 R      -          20.76G static 2015-03-11 13:02
# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-5@install                   941M      -  2.03G  -
rpool/ROOT/solaris-5@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-5@2015-03-11-19:57:34       133K      -  2.31G  -
rpool/ROOT/solaris-5@2015-03-11-20:02:33        64K      -  2.31G  -
rpool/ROOT/solaris-5/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-5/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-5/var@2015-03-11-19:57:34   211K      -   146M  -
rpool/ROOT/solaris-5/var@2015-03-11-20:02:33    48K      -   146M  -

reboot

# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 -      -          8.46M  static 2014-11-21 17:42
solaris-4 -      -          260.0K static 2015-03-11 12:57
solaris-5 NR     /          20.79G static 2015-03-11 13:02

# beadm destroy solaris-4
Are you sure you want to destroy solaris-4?  This action cannot be undone(y/[n]): y
# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-5@install                   941M      -  2.03G  -
rpool/ROOT/solaris-5@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-5@2015-03-11-20:02:33      29.8M      -  2.31G  -
rpool/ROOT/solaris-5/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-5/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-5/var@2015-03-11-20:02:33  2.15M      -   146M  -

# beadm list -a solaris-5
BE/Dataset/Snapshot                             Active Mountpoint Space   Policy Created
-------------------                             ------ ---------- -----   ------ -------
solaris-5
   rpool/ROOT/solaris-5                         NR     /          11.51G  static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var                     -      /var       349.62M static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var@2014-06-17-16:56:58 -      -          110.55M static 2014-06-17 09:56
   rpool/ROOT/solaris-5/var@2015-03-11-20:02:33 -      -          2.15M   static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var@install             -      -          90.82M  static 2013-07-09 10:30
   rpool/ROOT/solaris-5@2014-06-17-16:56:58     -      -          7.82G   static 2014-06-17 09:56
   rpool/ROOT/solaris-5@2015-03-11-20:02:33     -      -          29.77M  static 2015-03-11 13:02
   rpool/ROOT/solaris-5@install                 -      -          941.40M static 2013-07-09 10:30

# beadm list -s solaris-5
BE/Snapshot                      Space   Policy Created
-----------                      -----   ------ -------
solaris-5
   solaris-5@2014-06-17-16:56:58 7.82G   static 2014-06-17 09:56
   solaris-5@2015-03-11-20:02:33 29.77M  static 2015-03-11 13:02
   solaris-5@install             941.40M static 2013-07-09 10:30

reboot

# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 -      -          8.46M  static 2014-11-21 17:42
solaris-5 NR     /          20.86G static 2015-03-11 13:02

# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-5@install                   941M      -  2.03G  -
rpool/ROOT/solaris-5@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-5@2015-03-11-20:02:33      48.0M      -  2.31G  -
rpool/ROOT/solaris-5/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-5/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-5/var@2015-03-11-20:02:33  5.00M      -   146M  -

# beadm list -s solaris-5
BE/Snapshot                      Space   Policy Created
-----------                      -----   ------ -------
solaris-5
   solaris-5@2014-06-17-16:56:58 7.82G   static 2014-06-17 09:56
   solaris-5@2015-03-11-20:02:33 47.98M  static 2015-03-11 13:02
   solaris-5@install             941.40M static 2013-07-09 10:30

OK, the previous part was completely redundant, but I am leaving it in since it might be valuable in general. Let's now create a clean BE from a fresh snapshot. This may also be redundant, but it is worthwhile to go through.

# beadm create solaris-5@now
# beadm list -s solaris-5
BE/Snapshot                      Space   Policy Created
-----------                      -----   ------ -------
solaris-5
   solaris-5@2014-06-17-16:56:58 7.82G   static 2014-06-17 09:56
   solaris-5@2015-03-11-20:02:33 47.98M  static 2015-03-11 13:02
   solaris-5@install             941.40M static 2013-07-09 10:30
   solaris-5@now                 0       static 2015-03-11 13:22

# beadm create -e solaris-5@now solaris-clean
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-1     -      -          8.48M  static 2014-01-16 16:52
solaris-3     -      -          8.46M  static 2014-11-21 17:42
solaris-5     NR     /          20.86G static 2015-03-11 13:02
solaris-clean -      -          71.0K  static 2015-03-11 13:23

# beadm activate solaris-clean
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-1     -      -          8.48M  static 2014-01-16 16:52
solaris-3     -      -          8.46M  static 2014-11-21 17:42
solaris-5     N      /          123.0K static 2015-03-11 13:02
solaris-clean R      -          20.86G static 2015-03-11 13:23

# zfs list -t snapshot
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-clean@install                   941M      -  2.03G  -
rpool/ROOT/solaris-clean@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-clean@2015-03-11-20:02:33      48.0M      -  2.31G  -
rpool/ROOT/solaris-clean@now                       166K      -  2.31G  -
rpool/ROOT/solaris-clean/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-clean/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-clean/var@2015-03-11-20:02:33  5.00M      -   146M  -
rpool/ROOT/solaris-clean/var@now                    75K      -   145M  -

And this is the heart of what needs to happen. As I said, you were warned: you had better know what you are doing and have a fallback, because this is your boot volume. Let's get rid of that old snapshot from 2014.

# zfs destroy rpool/ROOT/solaris-clean@2014-06-17-16:56:58
cannot destroy 'rpool/ROOT/solaris-clean@2014-06-17-16:56:58': snapshot has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/solaris-1/var
rpool/ROOT/solaris-1

# zfs destroy -R rpool/ROOT/solaris-clean@2014-06-17-16:56:58
# zfs list -t snapshot
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-clean@install                  1.01G      -  2.03G  -
rpool/ROOT/solaris-clean@2015-03-11-20:02:33      48.0M      -  2.31G  -
rpool/ROOT/solaris-clean@now                       166K      -  2.31G  -
rpool/ROOT/solaris-clean/var@install              91.2M      -  96.7M  -
rpool/ROOT/solaris-clean/var@2015-03-11-20:02:33  5.00M      -   146M  -
rpool/ROOT/solaris-clean/var@now                    75K      -   145M  -
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-3     -      -          8.46M  static 2014-11-21 17:42
solaris-5     N      /          123.0K static 2015-03-11 13:02
solaris-clean R      -          4.99G  static 2015-03-11 13:23

Since we are cleaning up, let's get rid of this old BE as well.

# beadm destroy solaris-3
Are you sure you want to destroy solaris-3?  This action cannot be undone(y/[n]): y
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-5     N      /          123.0K static 2015-03-11 13:02
solaris-clean R      -          4.89G  static 2015-03-11 13:23

reboot

# df -h
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/solaris-clean
                        24G   2.3G       8.5G    22%    /
[..]

Finally, things look much better.

# beadm list
BE            Active Mountpoint Space Policy Created
--            ------ ---------- ----- ------ -------
solaris-5     -      -          8.25M static 2015-03-11 13:02
solaris-clean NR     /          4.95G static 2015-03-11 13:23
# beadm list -s
BE/Snapshot              Space  Policy Created
-----------              -----  ------ -------
solaris-5
solaris-clean
   solaris-clean@install 1.01G  static 2013-07-09 10:30
   solaris-clean@now     29.76M static 2015-03-11 13:22
# beadm list -a
BE/Dataset/Snapshot                     Active Mountpoint Space   Policy Created
-------------------                     ------ ---------- -----   ------ -------
solaris-5
   rpool/ROOT/solaris-5                 -      -          5.84M   static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var             -      -          2.40M   static 2015-03-11 13:02
solaris-clean
   rpool/ROOT/solaris-clean             NR     /          3.59G   static 2015-03-11 13:23
   rpool/ROOT/solaris-clean/var         -      /var       238.33M static 2015-03-11 13:23
   rpool/ROOT/solaris-clean/var@install -      -          91.21M  static 2013-07-09 10:30
   rpool/ROOT/solaris-clean/var@now     -      -          2.21M   static 2015-03-11 13:22
   rpool/ROOT/solaris-clean@install     -      -          1.01G   static 2013-07-09 10:30
   rpool/ROOT/solaris-clean@now         -      -          29.76M  static 2015-03-11 13:22

# zfs list -t snapshot
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-clean@install      1.01G      -  2.03G  -
rpool/ROOT/solaris-clean@now          29.8M      -  2.31G  -
rpool/ROOT/solaris-clean/var@install  91.2M      -  96.7M  -
rpool/ROOT/solaris-clean/var@now      2.21M      -   145M  -


Feb 18

Expanding a Solaris RPOOL

For reference, here are a couple of older articles on this topic:

Growing a Solaris LDOM rpool

ZFS Grow rpool disk

Specifically, this article covers what I did recently on a SPARC LDOM to expand the rpool. The rpool OS disk is a shared SAN LUN in this case.

After growing the LUN to 50G on the back end I did the following. You may have to try more than once; it did not work for me at first, and I do not know the exact sequence, but some combination of reboot, zpool status, label, and verify made it work. And yes, I did say zpool status: I have had issues with upgrades in the past where beadm did not activate a new environment and zpool status resolved it.

Also, you will notice my boot disk already had an EFI label. I do not recall exactly where, but somewhere along the line in Solaris 11.1 EFI labels became possible. If you have an SMI label you may have to try a different approach. As always, tinkering with partitions and disk labels is dangerous, so you have been warned.

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-ZFS Storage 7330-1.0-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <Unknown-Unknown-0001-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@1
Specify disk (enter its number): 0
selecting c1d0
[disk formatted]
/dev/dsk/c1d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

[..]

format> verify

Volume name = <        >
ascii name  = <SUN-ZFS Storage 7330-1.0-50.00GB>
bytes/sector    =  512
sectors = 104857599
accessible sectors = 104857566
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       49.99GB          104841182
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  7 unassigned    wm                 0           0               0
  8   reserved    wm         104841183        8.00MB          104857566

[..]

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]:
Ready to label disk, continue? y

format> q

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# zpool set autoexpand=on rpool

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  49.8G  27.6G  22.1G  55%  1.00x  ONLINE  -
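
As a sanity check on the format output further up: 104857600 sectors of 512 bytes is exactly 50 GiB, so the "sectors = 104857599" line (one sector short of that) matches the 50.00GB the label reports. The arithmetic in Python:

```python
SECTOR = 512                  # bytes per sector, from the verify output
sectors = 104857599           # the 'sectors =' line in format's verify

# One sector shy of exactly 50 GiB (50 * 2**30 bytes)
size_gib = sectors * SECTOR / float(2**30)
```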


Feb 06

Papyros Shell

Since I want to capture what I did to take a look at the initial stages of the Papyros shell, I am jotting down what worked for me. It sounds like the developers will soon have working downloadable images to try, so it is worth waiting for those. Getting Arch Linux going in VirtualBox is out of scope here.

At this point, though, I just wanted to see what it looks like. It sounds like the best option should be the powerpack route: https://github.com/papyros/powerpack
Unfortunately that has a bug right now: https://github.com/papyros/powerpack/issues/6

I tried some of the other options as well, but ultimately the route below is the only one that worked for me.

Get yaourt going: https://www.digitalocean.com/community/tutorials/how-to-use-yaourt-to-easily-download-arch-linux-community-packages

yaourt -S qt5-base
yaourt -S qt5-wayland-dev-git
yaourt -S qt5-declarative-git
yaourt -S qt-settings-bzr

mkdir Papyros
cd Papyros

git clone https://github.com/papyros/qml-extras
cd qml-extras
qmake
make
sudo make install
cd ../

git clone https://github.com/papyros/qml-material
cd qml-material
qmake
make
sudo make install
cd ../

git clone https://github.com/papyros/qml-desktop
cd qml-desktop
qmake
make
sudo make install
cd ../

git clone https://github.com/papyros/papyros-shell
cd papyros-shell

qmake

** Had to fix multiple file-not-found references in ~/Papyros/papyros-shell/papyros-shell.qrc.
** Just find the correct locations under the Papyros tree and update the references.
** Plus comment out InteractiveNotification.qml with <!-- ... -->

make
./papyros-shell -platform xcb
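
Since qml-extras, qml-material, and qml-desktop follow the same clone/qmake/make/install sequence, those steps could be scripted. A hypothetical dry-run sketch in Python (it only builds the command list rather than executing anything; the repo names are the ones above):

```python
repos = ["qml-extras", "qml-material", "qml-desktop"]

def build_commands(repo):
    """Commands for one repo, same sequence as done by hand above."""
    return [
        "git clone https://github.com/papyros/" + repo,
        "cd " + repo,
        "qmake",
        "make",
        "sudo make install",
        "cd ..",
    ]

# Flatten into one dry-run plan; feed to a shell (or subprocess) if desired.
plan = [cmd for repo in repos for cmd in build_commands(repo)]
```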


Feb 04

Libvirt and QCow2 Snapshots

This article covers one specific use case I had. There is a lot more written about snapshots, qemu, and libvirt; for my use case it is a qcow2-formatted image and a KVM virtual machine. I wanted a way to capture machine state before an OS upgrade, so that I can roll back or revert in case something goes wrong. For my case I do not care about live snapshots, since I can shut the machine down before taking a consistent snapshot of the image.

Specifically I am using what is called an internal qcow2 snapshot.

https://libvirt.org/formatsnapshot.html

# virsh snapshot-create-as debian1 pre-update
Domain snapshot pre-update created
# virsh snapshot-list debian1
 Name                 Creation Time             State
------------------------------------------------------------
 pre-update           2015-02-04 09:22:18 -0600 shutoff

# virsh snapshot-current debian1
<domainsnapshot>
  <name>pre-update</name>
  <state>shutoff</state>
  <creationTime>1423063338</creationTime>
  <memory snapshot='no'/>
  <disks>
    <disk name='vda' snapshot='internal'/>
    <disk name='hda' snapshot='no'/>
  </disks>
[..]

Start the VM, make some changes... and shut it down.

# virsh snapshot-revert debian1 pre-update

Now start the VM, and the changes should be gone.

** Note: when I created the snapshots using virsh they did not show up in virt-manager; it probably needs a refresh, which looks like a bug. When I create a new snapshot in virt-manager I can see both (the virsh-created and the virt-manager-created ones), and when I completely quit virt-manager and re-open it I see them all.

Deleting an old snapshot:

# virsh snapshot-delete debian1 pre-update
Domain snapshot pre-update deleted
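
If you want to check from a script which disks a snapshot actually covers, the snapshot XML can be parsed. A sketch using a trimmed sample document modeled on the snapshot-current output above:

```python
import xml.etree.ElementTree as ET

# Trimmed sample modeled on `virsh snapshot-current` output
sample = """<domainsnapshot>
  <name>pre-update</name>
  <state>shutoff</state>
  <memory snapshot='no'/>
  <disks>
    <disk name='vda' snapshot='internal'/>
    <disk name='hda' snapshot='no'/>
  </disks>
</domainsnapshot>"""

root = ET.fromstring(sample)
# Only disks with snapshot='internal' are actually captured
snapshotted = [d.get("name") for d in root.findall("./disks/disk")
               if d.get("snapshot") == "internal"]
```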


Jan 23

Python Output Align by Column

I have used several ways in the past to make very simple column headers and the subsequent output lines line up. Of course everything could be static, but I like the column headers to adjust their width based on some of the data.

For example, say I have column headers in an array like this:
header = ['File Name','File Size']

I want to use this array to print the header line, but before printing it I want to widen the 'File Name' column to fit the longest file name in the list yet to be processed. The following example is one way of doing it.

I played with other options as well (see the 01.24.15 update below).

Honestly, I think the main reason for doing it dynamically is that I like to print the headers and data with minimal lines of code.

#!/usr/bin/env python
import os,glob,sys
FORMAT = [('File Name',10),('Size',12)]

## In a real use case this function will probably do a lot more
## for example MB to GB conversions etc.
## But important here is where I make the adjustment to the main array
## to stretch the 'File Name' field.
def makeEntry(fileN,fileS):
  global FORMAT
  entry = []
  entry = [fileN, fileS]

  if len(fileN) > FORMAT[0][1]:
    n = dict(FORMAT)			# tuples are immutable so convert to something writable
    n['File Name'] = len(fileN) + 2	# expanding field width
    FORMAT = n.items()			# overwrite original; note dict() does not guarantee column order

  return entry

os.chdir('/DATA/')
video_files = []
for files in ('*.m4v', '*.mp4', '*.ts','*.dvr-ms','*.avi','*.mkv'):
  video_files.extend(glob.glob(files))

## Loop through files and build an array
fArr = []
for fileN in video_files:
  fArr.append(makeEntry(fileN, os.path.getsize(fileN)))

## Print header
for item in FORMAT:
  print "{header:{width}}".format (header=item[0], width=item[1]),
print

## Print entries
for entry in fArr:
  #print ("%-" + str(FORMAT[0][1]) + "s %-" + str(FORMAT[1][1]) + "s") % (entry[0], entry[1])		# old style
  print "{f0:{w0}} {f1:{w1}}".format ( f0=entry[0], w0=FORMAT[0][1], f1=entry[1], w1=FORMAT[1][1] )	# new style

Update 01.24.15
The example above is fine, except I did not like the way I updated the tuple. The example below uses a dict for the header.

#!/usr/bin/env python
import os,glob,sys

FORMAT = {'File Name':10,'Size':12} 

def makeEntry(fileN,fileS):
  global FORMAT
  entry = []
  entry = [fileN, fileS]
  if len(fileN) > FORMAT['File Name']:
    FORMAT['File Name'] = len(fileN) + 2	# expanding field width
  return entry

os.chdir('/DATA/')
video_files = []
for files in ('*.m4v', '*.mp4', '*.ts','*.dvr-ms','*.avi','*.mkv'):
  video_files.extend(glob.glob(files))

fArr = []
for fileN in video_files:
  fArr.append(makeEntry(fileN, os.path.getsize(fileN)))

## Print header.
print "{header:{width}}".format (header='File Name', width=FORMAT['File Name']),
print "{header:{width}}".format (header='Size', width=FORMAT['Size'])
print

for entry in fArr:
  print "{f0:{w0}} {f1:{w1}}".format ( f0=entry[0], w0=FORMAT['File Name'], f1=entry[1], w1=FORMAT['Size'] )
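
For completeness, the width can also be computed up front with max() once all the names are known, instead of growing it while building the entries. An alternative sketch (same idea, different mechanics):

```python
def column_width(header, values, pad=2):
    """Width of a column: the longest of the header and all values, plus padding."""
    return max([len(header)] + [len(str(v)) for v in values]) + pad

# Hypothetical file names just for illustration
names = ["a.mp4", "some-long-file-name.mkv", "b.avi"]
width = column_width("File Name", names)
```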


Jan 15

Dictionaries or Associative Arrays in Python

I have a couple of previous articles around a similar topic, but since I have not added any Python code yet, here is a short how-to for future reference. Other languages may call them associative arrays; Python calls them dicts, or dictionaries.

Related links:

Creating a javascript array with one to many type relationship

Multi-Array in Bash

Multidimensional array in python

Manually constructing the dictionary:

mailLists = {}
mailLists['dba'] = ['joe', 'jim', 'jeff', 'john']
mailLists['sa'] = ['mike', 'matt' ]

#print mailLists

#for key in mailLists:
#  print key

print mailLists['dba']
print mailLists['sa']

for k,v in mailLists.iteritems():
  print k + ": " + str(v)

For me the real value comes in when you build the lists, for example by looping through an input file with lines like this:
jon : dba

Now jon is added under the key 'dba'. If the key does not exist, we simply create it with an empty list; if it exists, the new item is appended to it.

mailLists = {}

with open('b.txt') as input_file:
  for i, line in enumerate(input_file):
    #print line,
    v,k = line.strip().split(':')
    k = k.strip()
    if k not in mailLists:
      mailLists[k] = []
    mailLists[k].append(v)

for k,v in mailLists.iteritems():
  print k + ": " + str(v)
# python b.py
dba: ['jon ', 'jeff ']
sa: ['matt ']
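
The membership test before the append can also be folded into one line with dict.setdefault (or collections.defaultdict). A short sketch of the same grouping, with the input lines inlined here for illustration:

```python
mail_lists = {}
for line in ["jon : dba", "matt : sa", "jeff : dba"]:
    # Split on the colon and strip whitespace from both sides
    v, k = [part.strip() for part in line.split(":")]
    # setdefault creates the empty list on first sight of a key
    mail_lists.setdefault(k, []).append(v)
```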
