Category: Solaris

Aug 21

Solaris ipadm show-prop and returning only current value

A lot has been written about the Solaris 11 networking changes and how to manage them with the new commands. This is just a quick note on something specific I was trying. In previous versions of Solaris I could run a quick ndd command to get or set a value. For example:

# ndd -get /dev/tcp tcp_smallest_anon_port
9000

And yes, the above ndd command still works in Solaris 11.

In Solaris 11 most documentation would indicate doing the following:

# ipadm show-prop -p smallest_anon_port tcp
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   smallest_anon_port    rw   9000         9000         32768        1024-65500

The above is fine if you are just checking something, but what if you need to return only the current value? You can of course do some string manipulation, but if you dig a little deeper ipadm can return exactly what you want. In my case I wanted a reliable check so my Puppet manifest would not set the value again if it was already correct (using exec/onlyif in Puppet).
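For example, the string manipulation approach might look something like this (a quick sketch; the cleaner built-in way follows):

# ipadm show-prop -p smallest_anon_port tcp | awk 'NR==2 {print $4}'
9000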

Returning only the value with ipadm works as follows:

# ipadm show-prop -o CURRENT -c -p smallest_anon_port tcp
9000

I plan to write more about Puppet and Solaris in general later, but here is the ipadm check in the Puppet manifest for reference.

exec { "ipadm smallest_anon_port tcp":
  command     => "ipadm set-prop -p smallest_anon_port=9000 tcp",
  path        => $execPath,
  onlyif      => "ipadm show-prop -o CURRENT -c -p smallest_anon_port tcp | grep -v 9000"
}
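An equivalent way to express the same guard (a sketch, not what I actually deployed) is Puppet's unless parameter, which avoids the inverted grep:

exec { "ipadm smallest_anon_port tcp":
  command     => "ipadm set-prop -p smallest_anon_port=9000 tcp",
  path        => $execPath,
  unless      => "ipadm show-prop -o CURRENT -c -p smallest_anon_port tcp | grep '^9000$'",
}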


Apr 27

Solaris Information On WWN

Recently I had some messages on a T5-2 hypervisor and I needed to find out exactly what it was complaining about.

Messages looked like this:

fctl: [ID 517869 kern.warning] WARNING: fp(5)::GPN_ID for D_ID=160003 failed
fctl: [ID 517869 kern.warning] WARNING: fp(5)::N_x Port with D_ID=160003, PWWN=23220002ac0012b4 disappeared from fabric
fctl: [ID 517869 kern.warning] WARNING: fp(6)::GPN_ID for D_ID=c0001 failed
fctl: [ID 517869 kern.warning] WARNING: fp(6)::N_x Port with D_ID=c0001, PWWN=23210002ac0012b4 disappeared from fabric
fctl: [ID 517869 kern.warning] WARNING: fp(6)::N_x Port with D_ID=c0001, PWWN=23210002ac0012b4 reappeared in fabric
fctl: [ID 517869 kern.warning] WARNING: fp(5)::N_x Port with D_ID=160003, PWWN=23220002ac0012b4 reappeared in fabric

First let's check the adapters and ports.

# fcinfo hba-port
HBA Port WWN: 21000024ff4e2a9c
        Port Mode: Initiator
        Port ID: 210600
        OS Device Name: /dev/cfg/c4
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 5.08.00
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402T00-1315130088
        Driver Name: qlc
        Driver Version: 20131114-4.03
        Type: N-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff4e2a9c
        Max NPIV Ports: 255
        NPIV port list:
HBA Port WWN: 21000024ff4e2a9d
        Port Mode: Initiator
        Port ID: 0
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 5.08.00
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402T00-1315130088
        Driver Name: qlc
        Driver Version: 20131114-4.03
        Type: unknown
        State: offline
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: not established
        Node WWN: 20000024ff4e2a9d
        Max NPIV Ports: 255
        NPIV port list:
HBA Port WWN: 21000024ff4e29d6
        Port Mode: Initiator
        Port ID: 660700
        OS Device Name: /dev/cfg/c6
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 5.08.00
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402T00-1315129988
        Driver Name: qlc
        Driver Version: 20131114-4.03
        Type: N-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff4e29d6
        Max NPIV Ports: 255
        NPIV port list:
HBA Port WWN: 21000024ff4e29d7
        Port Mode: Initiator
        Port ID: 0
        OS Device Name: /dev/cfg/c7
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 5.08.00
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402T00-1315129988
        Driver Name: qlc
        Driver Version: 20131114-4.03
        Type: unknown
        State: offline
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: not established
        Node WWN: 20000024ff4e29d7
        Max NPIV Ports: 255
        NPIV port list:

Let's check which ports are connected.

# luxadm -e port
/devices/pci@340/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl        CONNECTED
/devices/pci@340/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0:devctl        CONNECTED
/devices/pci@340/pci@1/pci@0/pci@4/SUNW,qlc@0,1/fp@0,0:devctl      NOT CONNECTED
/devices/pci@340/pci@1/pci@0/pci@5/SUNW,qlc@0,1/fp@0,0:devctl      NOT CONNECTED

Now let's find the devices on the two connected ports above. You should be able to see the WWNs from the original warnings listed here.

# luxadm -e dump_map /devices/pci@340/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    160000  0         20220002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)
1    160001  0         21220002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)
2    160002  0         22220002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)
3    160003  0         23220002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)	<-- Messages show this one
4    210900  0         21000024ff57d60d 20000024ff57d60d 0x0  (Disk device)
5    210a00  0         21000024ff57d64d 20000024ff57d64d 0x0  (Disk device)
6    210b00  0         21000024ff57d649 20000024ff57d649 0x0  (Disk device)
7    210c00  0         21000024ff57d60b 20000024ff57d60b 0x0  (Disk device)
8    210600  0         21000024ff4e2a9c 20000024ff4e2a9c 0x1f (Unknown Type,Host Bus Adapter)

# luxadm -e dump_map /devices/pci@340/pci@1/pci@0/pci@5/SUNW,qlc@0/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    c0001   0         23210002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)	<-- Messages show this one
1    c0002   0         22210002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)
2    c0003   0         21210002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)
3    c0004   0         20210002ac0012b4 2ff70002ac0012b4 0x0  (Disk device)
4    660000  0         21000024ff57d64c 20000024ff57d64c 0x0  (Disk device)
5    660100  0         21000024ff57d60a 20000024ff57d60a 0x0  (Disk device)
6    660200  0         21000024ff57d648 20000024ff57d648 0x0  (Disk device)
7    660a00  0         21000024ff57d60c 20000024ff57d60c 0x0  (Disk device)
8    660700  0         21000024ff4e29d6 20000024ff4e29d6 0x1f (Unknown Type,Host Bus Adapter)

You can check whether the device is listed here. In my case it was not, so it appears we are not using this device, but maybe the switch still has us zoned for it.

# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:20000024ff57d64d  Device Type:Disk device
    Logical Path:/dev/rdsk/c0t600144F086479F15000053DA5A03000Ad0s2
[..]

# luxadm probe | grep 2ac0012b4  <-- Not found so remote?

We probably did not need all of the steps above to get to the next command, but I am listing everything since what you need depends on the situation and what you are tracing.

Here I can see the relevant WWN belongs to a 3PAR SAN, and since I knew I was no longer using 3PAR, I could check whether this is just a zoning issue.
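Since there are several local HBA ports, a small loop (a sketch, not part of the original session) can quickly show which of them still see the 3PAR remote ports before digging into the detailed per-port output below:

# for hba in $(fcinfo hba-port | awk '/^HBA Port WWN/ {print $4}'); do
    echo "== local port $hba"
    fcinfo remote-port -p $hba | grep -i 2ac0012b4
  done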

# fcinfo remote-port -ls -p 21000024ff4e29d6
Remote Port WWN: 20210002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 0:2:1 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 12
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 48
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses0
Remote Port WWN: 21210002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 1:2:1 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 6
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 32
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses1
Remote Port WWN: 22210002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 2:2:1 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 16
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses2
Remote Port WWN: 23210002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 3:2:1 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 16
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses3
[..]

# fcinfo remote-port -ls -p 21000024ff4e2a9c
Remote Port WWN: 20220002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 0:2:2 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 6
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 32
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses4
Remote Port WWN: 21220002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 1:2:2 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 16
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses5
Remote Port WWN: 22220002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 2:2:2 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 16
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses6
Remote Port WWN: 23220002ac0012b4
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port Symbolic Name: 1404788 - 3:2:2 - LPe12004
        Node WWN: 2ff70002ac0012b4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 16
                Invalid CRC Count: 0
        LUN: 254
          Vendor: 3PARdata
          Product: SES
          OS Device Name: /dev/es/ses7


Mar 26

Solaris Change File Ownership as non root Account

If you have a process running as a non-root user, or just need to enable a normal user to take ownership of files they don't own, this is what you need to do.

My first attempt was changing a file owned by root. That is not what I needed, but as shown here it requires a privilege called "ALL".

$ ppriv -De chown ebs_a /tmp/file1.txt
chown[999]: missing privilege "ALL" (euid = 304, syscall = 16) needed at tmp_setattr+0x60
chown: /tmp/file1.txt: Not owner

This attempt changes a file owned by nobody, which is what my process will actually need to do.

$ ppriv -De chown ebs_a /tmp/file1.txt
chown[1034]: missing privilege "file_chown" (euid = 304, syscall = 16) needed at tmp_setattr+0x60
chown: /tmp/file1.txt: Not owner

So as shown above we need file_chown. I added that privilege as shown below. You will note I already have some other privileges added for different requirements.

# grep ^ebs_a  /etc/user_attr
ebs_a::::type=normal;defaultpriv=basic,sys_mount,sys_nfs,net_privaddr,file_chown;auths=solaris.smf.manage.xvfb,solaris.smf.value.xvfb
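For reference, the same entry can be managed with usermod instead of editing /etc/user_attr by hand (a sketch; include whatever privileges and authorizations the account already needs):

# usermod -K defaultpriv=basic,sys_mount,sys_nfs,net_privaddr,file_chown ebs_a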

OK, now we try again and it works.

# su - ebs_a
[..]
$ ppriv -De chown ebs_a /tmp/file1.txt

$ ls -l /tmp/file1.txt
-rw-r--r--   1 ebs_a root           0 Mar 25 06:24 /tmp/file1.txt

And of course you don't need to use ppriv now; a simple chown should work.


Mar 19

Solaris Mount NFS Share as Non Root User

Since it took me a while to get this working, I made a note of how. Giving a normal user the Primary Administrator role did work, but even the System Administrator role did not allow me to mount and unmount NFS.

The two profiles I tested:

# grep Adminis /etc/security/prof_attr
[..]
Primary Administrator:::Can perform all administrative tasks:auths=solaris.*,solaris.grant;help=RtPriAdmin.html
Service Operator:::Administer services:auths=solaris.smf.manage,solaris.smf.modify.framework
System Administrator:::Can perform most non-security administrative tasks:profiles=Audit Review,Printer Management,Cron Management,Device Management,File System Management,Mail Management,Maintenance and Repair,Media Backup,Media Restore,Name Service Management,Network Management,Object Access Management,Process Management,Software Installation,User Management,Project Management,All;help=RtSysAdmin.html

The error was like this:

$ pfexec /sbin/mount /apps
nfs mount: insufficient privileges

Below is what I needed to do. The xvfb entries have nothing to do with NFS, but I needed them for X display so I am leaving them in.

# cat /etc/user_attr
[..]
ebs_a::::type=normal;defaultpriv=basic,sys_mount,sys_nfs,net_privaddr;auths=solaris.smf.manage.xvfb,solaris.smf.value.xvfb

$ ppriv $$
28423:  -bash
flags = <none>
        E: basic,net_privaddr,sys_mount,sys_nfs
        I: basic,net_privaddr,sys_mount,sys_nfs
        P: basic,net_privaddr,sys_mount,sys_nfs
        L: all
$ pfexec /sbin/umount /apps
$ pfexec /sbin/mount /apps
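For context, mounting by mount point like this relies on an existing /etc/vfstab entry; a hypothetical example (server name and export path made up):

nfsserver:/export/apps  -  /apps  nfs  -  no  rw,bg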

$ pfexec svcadm disable svc:/application/xvfb:default
$ pfexec svcadm enable svc:/application/xvfb:default


Mar 11

Solaris Boot Environment Size

If you have wondered why your root file system (rpool) on Solaris is out of space, and you have double-checked that none of the usual culprits are eating your space, you may need to check whether you captured something large in your snapshots. For the most part I do not take snapshots of the root file system, but if you do Solaris updates (SRUs) you are definitely using snapshots. This is what I had to do to reclaim space.
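A quick way to see whether snapshots are what is holding the space (a general check, not something from the original session) is the space view of zfs list:

# zfs list -o space -r rpool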

Warning: you are on your own if you ruin your boot OS. I have the luxury of the root OS being on a SAN LUN that has separate snapshot technology, so I could recover quickly in case something went wrong.

Space Before

# df -h
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/solaris-3    24G   9.9G       3.9M   100%    /

Let's create a new boot environment.

# beadm create solaris-4
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 NR     /          20.76G static 2014-11-21 17:42
solaris-4 -      -          74.0K  static 2015-03-11 12:57
# beadm activate solaris-4
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 N      /          67.0K  static 2014-11-21 17:42
solaris-4 R      -          20.76G static 2015-03-11 12:57

As you can see, that did nothing for us; it carried the 20G over.

# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-4@install                   941M      -  2.03G  -
rpool/ROOT/solaris-4@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-4@2015-03-11-19:57:34       170K      -  2.31G  -
rpool/ROOT/solaris-4/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-4/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-4/var@2015-03-11-19:57:34    77K      -   146M  -

This step was redundant, but since I was not sure whether I had left a solaris-4 BE plus snapshots on this system from earlier experiments, I wanted one I knew for sure was brand new.

# beadm create solaris-5
# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-3@2015-03-11-20:02:33          0      -  2.31G  -
rpool/ROOT/solaris-3/var@2015-03-11-20:02:33      0      -   146M  -
rpool/ROOT/solaris-4@install                   941M      -  2.03G  -
rpool/ROOT/solaris-4@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-4@2015-03-11-19:57:34       170K      -  2.31G  -
rpool/ROOT/solaris-4/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-4/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-4/var@2015-03-11-19:57:34    77K      -   146M  -
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 N      /          790.0K static 2014-11-21 17:42
solaris-4 R      -          20.76G static 2015-03-11 12:57
solaris-5 -      -          75.0K  static 2015-03-11 13:02
# beadm activate solaris-5
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 N      /          67.0K  static 2014-11-21 17:42
solaris-4 -      -          260.0K static 2015-03-11 12:57
solaris-5 R      -          20.76G static 2015-03-11 13:02
# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-5@install                   941M      -  2.03G  -
rpool/ROOT/solaris-5@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-5@2015-03-11-19:57:34       133K      -  2.31G  -
rpool/ROOT/solaris-5@2015-03-11-20:02:33        64K      -  2.31G  -
rpool/ROOT/solaris-5/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-5/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-5/var@2015-03-11-19:57:34   211K      -   146M  -
rpool/ROOT/solaris-5/var@2015-03-11-20:02:33    48K      -   146M  -

reboot

# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 -      -          8.46M  static 2014-11-21 17:42
solaris-4 -      -          260.0K static 2015-03-11 12:57
solaris-5 NR     /          20.79G static 2015-03-11 13:02

# beadm destroy solaris-4
Are you sure you want to destroy solaris-4?  This action cannot be undone(y/[n]): y
# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-5@install                   941M      -  2.03G  -
rpool/ROOT/solaris-5@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-5@2015-03-11-20:02:33      29.8M      -  2.31G  -
rpool/ROOT/solaris-5/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-5/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-5/var@2015-03-11-20:02:33  2.15M      -   146M  -

# beadm list -a solaris-5
BE/Dataset/Snapshot                             Active Mountpoint Space   Policy Created
-------------------                             ------ ---------- -----   ------ -------
solaris-5
   rpool/ROOT/solaris-5                         NR     /          11.51G  static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var                     -      /var       349.62M static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var@2014-06-17-16:56:58 -      -          110.55M static 2014-06-17 09:56
   rpool/ROOT/solaris-5/var@2015-03-11-20:02:33 -      -          2.15M   static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var@install             -      -          90.82M  static 2013-07-09 10:30
   rpool/ROOT/solaris-5@2014-06-17-16:56:58     -      -          7.82G   static 2014-06-17 09:56
   rpool/ROOT/solaris-5@2015-03-11-20:02:33     -      -          29.77M  static 2015-03-11 13:02
   rpool/ROOT/solaris-5@install                 -      -          941.40M static 2013-07-09 10:30

# beadm list -s solaris-5
BE/Snapshot                      Space   Policy Created
-----------                      -----   ------ -------
solaris-5
   solaris-5@2014-06-17-16:56:58 7.82G   static 2014-06-17 09:56
   solaris-5@2015-03-11-20:02:33 29.77M  static 2015-03-11 13:02
   solaris-5@install             941.40M static 2013-07-09 10:30

reboot

# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-1 -      -          8.48M  static 2014-01-16 16:52
solaris-3 -      -          8.46M  static 2014-11-21 17:42
solaris-5 NR     /          20.86G static 2015-03-11 13:02

# zfs list -t snapshot
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-5@install                   941M      -  2.03G  -
rpool/ROOT/solaris-5@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-5@2015-03-11-20:02:33      48.0M      -  2.31G  -
rpool/ROOT/solaris-5/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-5/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-5/var@2015-03-11-20:02:33  5.00M      -   146M  -

# beadm list -s solaris-5
BE/Snapshot                      Space   Policy Created
-----------                      -----   ------ -------
solaris-5
   solaris-5@2014-06-17-16:56:58 7.82G   static 2014-06-17 09:56
   solaris-5@2015-03-11-20:02:33 47.98M  static 2015-03-11 13:02
   solaris-5@install             941.40M static 2013-07-09 10:30

OK, the previous part was completely redundant, but I am leaving it in since it might be valuable in general. Let's now create a clean BE from a fresh snapshot. This may also be redundant, but it is worthwhile to go through.

# beadm create solaris-5@now
# beadm list -s solaris-5
BE/Snapshot                      Space   Policy Created
-----------                      -----   ------ -------
solaris-5
   solaris-5@2014-06-17-16:56:58 7.82G   static 2014-06-17 09:56
   solaris-5@2015-03-11-20:02:33 47.98M  static 2015-03-11 13:02
   solaris-5@install             941.40M static 2013-07-09 10:30
   solaris-5@now                 0       static 2015-03-11 13:22

# beadm create -e solaris-5@now solaris-clean
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-1     -      -          8.48M  static 2014-01-16 16:52
solaris-3     -      -          8.46M  static 2014-11-21 17:42
solaris-5     NR     /          20.86G static 2015-03-11 13:02
solaris-clean -      -          71.0K  static 2015-03-11 13:23

# beadm activate solaris-clean
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-1     -      -          8.48M  static 2014-01-16 16:52
solaris-3     -      -          8.46M  static 2014-11-21 17:42
solaris-5     N      /          123.0K static 2015-03-11 13:02
solaris-clean R      -          20.86G static 2015-03-11 13:23

# zfs list -t snapshot
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-clean@install                   941M      -  2.03G  -
rpool/ROOT/solaris-clean@2014-06-17-16:56:58      7.82G      -  9.31G  -
rpool/ROOT/solaris-clean@2015-03-11-20:02:33      48.0M      -  2.31G  -
rpool/ROOT/solaris-clean@now                       166K      -  2.31G  -
rpool/ROOT/solaris-clean/var@install              90.8M      -  96.7M  -
rpool/ROOT/solaris-clean/var@2014-06-17-16:56:58   111M      -   139M  -
rpool/ROOT/solaris-clean/var@2015-03-11-20:02:33  5.00M      -   146M  -
rpool/ROOT/solaris-clean/var@now                    75K      -   145M  -

And this is the heart of what needs to happen. As I said, you were warned: you had better know what you are doing and have a fallback, because this is your boot volume. Let's get rid of that old snapshot from 2014.

# zfs destroy rpool/ROOT/solaris-clean@2014-06-17-16:56:58
cannot destroy 'rpool/ROOT/solaris-clean@2014-06-17-16:56:58': snapshot has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/solaris-1/var
rpool/ROOT/solaris-1

# zfs destroy -R rpool/ROOT/solaris-clean@2014-06-17-16:56:58
# zfs list -t snapshot
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-clean@install                  1.01G      -  2.03G  -
rpool/ROOT/solaris-clean@2015-03-11-20:02:33      48.0M      -  2.31G  -
rpool/ROOT/solaris-clean@now                       166K      -  2.31G  -
rpool/ROOT/solaris-clean/var@install              91.2M      -  96.7M  -
rpool/ROOT/solaris-clean/var@2015-03-11-20:02:33  5.00M      -   146M  -
rpool/ROOT/solaris-clean/var@now                    75K      -   145M  -
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-3     -      -          8.46M  static 2014-11-21 17:42
solaris-5     N      /          123.0K static 2015-03-11 13:02
solaris-clean R      -          4.99G  static 2015-03-11 13:23

Since we are cleaning up, let's get rid of this old BE as well.

# beadm destroy solaris-3
Are you sure you want to destroy solaris-3?  This action cannot be undone(y/[n]): y
# beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
solaris-5     N      /          123.0K static 2015-03-11 13:02
solaris-clean R      -          4.89G  static 2015-03-11 13:23

reboot

# df -h
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/solaris-clean
                        24G   2.3G       8.5G    22%    /
[..]

Finally, things look much better.

# beadm list
BE            Active Mountpoint Space Policy Created
--            ------ ---------- ----- ------ -------
solaris-5     -      -          8.25M static 2015-03-11 13:02
solaris-clean NR     /          4.95G static 2015-03-11 13:23
# beadm list -s
BE/Snapshot              Space  Policy Created
-----------              -----  ------ -------
solaris-5
solaris-clean
   solaris-clean@install 1.01G  static 2013-07-09 10:30
   solaris-clean@now     29.76M static 2015-03-11 13:22
# beadm list -a
BE/Dataset/Snapshot                     Active Mountpoint Space   Policy Created
-------------------                     ------ ---------- -----   ------ -------
solaris-5
   rpool/ROOT/solaris-5                 -      -          5.84M   static 2015-03-11 13:02
   rpool/ROOT/solaris-5/var             -      -          2.40M   static 2015-03-11 13:02
solaris-clean
   rpool/ROOT/solaris-clean             NR     /          3.59G   static 2015-03-11 13:23
   rpool/ROOT/solaris-clean/var         -      /var       238.33M static 2015-03-11 13:23
   rpool/ROOT/solaris-clean/var@install -      -          91.21M  static 2013-07-09 10:30
   rpool/ROOT/solaris-clean/var@now     -      -          2.21M   static 2015-03-11 13:22
   rpool/ROOT/solaris-clean@install     -      -          1.01G   static 2013-07-09 10:30
   rpool/ROOT/solaris-clean@now         -      -          29.76M  static 2015-03-11 13:22

# zfs list -t snapshot
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris-clean@install      1.01G      -  2.03G  -
rpool/ROOT/solaris-clean@now          29.8M      -  2.31G  -
rpool/ROOT/solaris-clean/var@install  91.2M      -  96.7M  -
rpool/ROOT/solaris-clean/var@now      2.21M      -   145M  -


Feb 18

Expanding a Solaris RPOOL

For reference, I have a couple of older articles on this topic:

Growing a Solaris LDOM rpool

ZFS Grow rpool disk

Specifically, this article covers what I did recently on a SPARC LDOM to expand the rpool. In this case the rpool OS disk is a shared SAN LUN.

After growing the LUN to 50G on the back end I did the following. You may have to try more than once; it did not work for me at first, and I don't know the exact sequence, but some combination of reboot, zpool status, label, and verify eventually worked. And yes, I did say zpool status. I have had issues with upgrades in the past where beadm did not activate a new environment and a zpool status resolved it.

Also, you will notice my boot disk already had an EFI label. I don't recall exactly where, but somewhere along the line in Solaris 11.1 EFI labels became possible for the boot disk. If you have an SMI label you may have to try a different approach. And as always, tinkering with partitions and disk labels is dangerous, so you have been warned.

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-ZFS Storage 7330-1.0-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <Unknown-Unknown-0001-50.00GB>
          /virtual-devices@100/channel-devices@200/disk@1
Specify disk (enter its number): 0
selecting c1d0
[disk formatted]
/dev/dsk/c1d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

[..]

format> verify

Volume name = <        >
ascii name  = <SUN-ZFS Storage 7330-1.0-50.00GB>
bytes/sector    =  512
sectors = 104857599
accessible sectors = 104857566
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       49.99GB          104841182
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  7 unassigned    wm                 0           0               0
  8   reserved    wm         104841183        8.00MB          104857566

[..]

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]:
Ready to label disk, continue? y

format> q

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  29.8G  27.6G  2.13G  92%  1.00x  ONLINE  -

# zpool set autoexpand=on rpool

# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
app    49.8G  12.9G  36.9G  25%  1.00x  ONLINE  -
rpool  49.8G  27.6G  22.1G  55%  1.00x  ONLINE  -
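For reference, if the pool had not grown after setting autoexpand, an explicit expand of the device (c1d0 here, per the format output above) could be tried. This is a fallback sketch, not something I needed in this case:

# zpool online -e rpool c1d0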


Jan 14

Solaris 11.2 SRU updates

Previously I wrote about updating Solaris 11.1 without going full-blown IPS. Oracle used to have incremental ISOs to download, but I do not see those anymore for Solaris 11.2. Here is the previous article for reference:

Solaris 11.1 Update from ISO

In a data center I would maintain an IPS repo and incrementally update it when SRUs are released; guests are then updated against that repo. If you don't want to maintain full-blown IPS and just want to update a specific guest to a specific SRU version, that is what this article focuses on.

Some useful links you may want to look through first:
Create Local Repo:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/local-repository-2245081.html
http://www.oracle.com/technetwork/articles/servers-storage-admin/howto-set-up-repos-mirror-ips-2266101.html
http://appsdbaworkshop.blogspot.com/2014/09/configure-ips-repository-on-oracle.html

SRU Index:
Oracle Solaris 11.1 Support Repository Updates (SRU) Index (Doc ID 1501435.1)
Oracle Solaris 11.2 Support Repository Updates (SRU) Index (Doc ID 1672221.1)
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=390313668034834&id=1672221.1&displayIndex=8&_afrWindowMode=0&_adf.ctrl-state=psan9sn51_234

We will focus on these steps:

Step 1: Stage SRU zip files and turn into a repo with install-repo.ksh.

Step 2: Update guest against repo.

Step 3: Optionally create an ISO.

Step 4: Update from custom ISO created in step 3.

I use VirtualBox for my testing, so you can choose how to present folders or ISOs to the guest. Since I have the guest additions installed on the Solaris 11.2 guest, I chose to use a shared folder instead of following the link above that shows screenshots for creating an ISO in Windows and presenting it to VirtualBox that way.

Step 1: Stage SRU zip files and turn into a repo with install-repo.ksh.

Download and stage SRU zip files:

# ls -l 11.2_SRU5.5_src/
total 5370490
-rwxrwx---   1 root     vboxsf   1539220019 Jan  7 09:43 p20131812_1100_Solaris86-64_1of2.zip
-rwxrwx---   1 root     vboxsf   1210469972 Jan  7 09:37 p20131812_1100_Solaris86-64_2of2.zip

Download the install-repo.ksh script and md5sums.txt:
** Note: I initially ignored md5sums.txt, which only causes a warning, but you are better off downloading it as well.

# ls -l install-repo.ksh
-rwxrwx---   1 root     vboxsf      5594 Jan  9 08:20 install-repo.ksh

Prepare repo:

# pwd
/mnt/sf_DATA/isos/Solaris
# mkdir IPS_SRU_11_2_5_5_0
# pwd
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5

# ../install-repo.ksh -d ../IPS_SRU_11_2_5_5_0
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5_src/sol-*-repo-md5sums.txt: No such file or directory
Uncompressing p20131812_1100_Solaris86-64_1of2.zip...done.
Uncompressing p20131812_1100_Solaris86-64_2of2.zip...done.
Repository can be found in ../IPS_SRU_11_2_5_5_0.

** When I used a VirtualBox guest additions mount as the target for the repo, the install worked but I had issues later updating the guest.
** Also keep in mind that with this method of updating you still need the Oracle repo as well, to resolve any dependencies introduced in the SRU.
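If the Oracle repo is not already configured on the guest, it can be added back alongside the local one with the usual set-publisher command (the same command used in the 11.1 article further down this page):

# pkg set-publisher -P -g http://pkg.oracle.com/solaris/release/ solaris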

# pwd
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5

# pkg set-publisher -G '*' -g file:////mnt/sf_DATA/isos/Solaris/IPS_SRU_11_2_5_5_0/ solaris
pkg set-publisher: The origin URIs for 'solaris' do not appear to point to a valid pkg repository.
Please verify the repository's location and the client's network configuration.
Additional details:

Unable to contact valid package repository
Encountered the following error(s):
Unable to contact any configured publishers.
This is likely a network configuration problem.
file protocol error: code: 71 reason: The following catalog files have incorrect permissions:
	/mnt/sf_DATA/isos/Solaris/IPS_SRU_11_2_5_5_0/publisher/solaris/catalog/catalog.attrs: expected mode: 644, found mode: 770

** I redid the repo prep against the local / file system. I had enough room there, so it worked fine.

# pwd
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5
# ../install-repo.ksh -d /IPS_SRU_11_2_5_5_0
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5/sol-*-repo-md5sums.txt: No such file or directory
Uncompressing p20131812_1100_Solaris86-64_1of2.zip...done.
Uncompressing p20131812_1100_Solaris86-64_2of2.zip...done.
Repository can be found in /IPS_SRU_11_2_5_5_0.

Step 2: Update guest against repo.

# pkg set-publisher -g file:////IPS_SRU_11_2_5_5_0/ solaris

# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F file:///IPS_SRU_11_2_5_5_0/
solaris                     origin   online F http://pkg.oracle.com/solaris/release/

# pkg update -nv

# pkg update
            Packages to remove:   1
            Packages to update: 153
       Create boot environment: Yes
Create backup boot environment:  No
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            154/154     9014/9014  345.9/345.9    0B/s

PHASE                                          ITEMS
Removing old actions                       1742/1742
Installing new actions                     2635/2635
Updating modified actions                  8361/8361
Updating package state database                 Done
Updating package cache                       154/154
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1 

A clone of solaris-1 exists and has been updated and activated.
On the next boot the Boot Environment solaris-2 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

Updating package cache                           1/1 

---------------------------------------------------------------------------
NOTE: Please review release notes posted at:

http://www.oracle.com/pls/topic/lookup?ctx=solaris11&id=SERNS
---------------------------------------------------------------------------

# beadm list
BE                 Active Mountpoint Space  Policy Created
--                 ------ ---------- -----  ------ -------
solaris-1          N      /          29.69M static 2014-07-31 15:01
solaris-1-backup-1 -      -          183.0K static 2014-08-28 09:39
solaris-1-backup-2 -      -          191.0K static 2014-12-02 12:05
solaris-2          R      -          13.49G static 2015-01-09 10:21
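Reboot when ready and confirm the new SRU level afterwards (the same check used in step 4 below):

# reboot
# pkg list entire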

Step 3: Optionally create an ISO.

# pwd
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5
# mkdir /IPS_SRU_11_2_5_5_0
# ./install-repo.ksh -d /IPS_SRU_11_2_5_5_0 -I -v
Uncompressing p20131812_1100_Solaris86-64_1of2.zip...done.
Uncompressing p20131812_1100_Solaris86-64_2of2.zip...done.
Repository can be found in /IPS_SRU_11_2_5_5_0.
Initiating repository verification.
Building ISO image...
...done.
ISO image and instructions for using the ISO image are at:
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5/sol-11_2-repo.iso
/mnt/sf_DATA/isos/Solaris/11.2_SRU5.5/README-repo-iso.txt

Step 4: Update from custom ISO created in step 3.
** Attach ISO to guest

root@solaris11:~# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.2.0.0.42.0    i--

root@solaris11:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://pkg.oracle.com/solaris/release/

root@solaris11:~# pkg set-publisher -g file:///media/SOL-11_2_REPO/repo solaris

root@solaris11:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F file:///media/SOL-11_2_REPO/repo/
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
root@solaris11:~# pkg update

root@solaris11:~# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.2.5.0.5.0     i--


Dec 06

Live Migrate Oracle VM for SPARC Logical Domains

A short version of a live migration between two T5-2s, using shared fibre channel LUNs.

Source Host:

Check running status:

# ldm ls | grep test10
test10           active     -n----  5002    8     8G       0.0%  0.0%  1m

Dry run first:

# ldm migrate-domain -n test10 10.2.10.12
Target Password:
Invalid shutdown-group: 0
Failed to recreate domain on target machine
Domain Migration of LDom test10 would fail if attempted

We have a couple of issues.

# ldm set-domain shutdown-group=15 test10

# ldm migrate-domain -n test10 10.2.10.12
Target Password:
Failed to find required volume test10-disk0@primary-vds0 on target machine
Domain Migration of LDom test10 would fail if attempted

# ldm ls-bindings primary | grep test10
test10@primary-vcc          5002   on
vnet1@test10                00:14:4f:fa:17:4f 1913                      1500
test10-disk0                                   /dev/dsk/c0t600144F09D7311B5000054789ED30002d0s2
disk0@test10                test10-disk0

Target Host:
Fix the virtual device reference.

# ldm add-vdsdev /dev/dsk/c0t600144F09D7311B5000054789ED30002d0s2 test10-disk0@primary-vds0

Back on the source host, the dry run now succeeds (the actual migration is the same command without -n).

# ldm migrate-domain -n test10 10.2.10.12

Check the status on the target. I suggest also running a ping against the guest during the whole process.

# ldm ls | grep test10
test10           bound      -----t  5018    8     8G
# ldm ls | grep test10
test10           active     -n----  5018    8     8G       0.1%  0.1%  7m
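The ping I mentioned is nothing fancy; from another host (assuming test10 resolves there), something like the following, watching for drops around the cutover:

# ping -s test10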


Sep 19

Solaris Ipfilter Pools

I wasn't aware before that ipfilter (ipf) has a concept of pools, in other words lists of IP addresses and networks.

I previously wrote this basic article on enabling ipf in Solaris; what follows here is a little on pools.

** Note this was a Solaris 10 LDOM, so the NIC was vnet0. Check your own NIC; it is most likely net0 in Solaris 11.

Set up the pools you need as follows.

# pwd
/etc/ipf
# cat ippool.conf
### Pool 13 some essential static addresses
table role = ipf type = tree number = 13
{ 10.1.11.34/32, 10.2.10.6/32 };
### Pool 14 some temporary IP's
table role = ipf type = tree number = 14
{ 192.168.8.0/24, 10.200.97.82/32 };

Use the pools in your ipf.conf.

# cat ipf.conf
[...]
pass in quick on lo0 all
pass out quick on lo0 all

### Block all inbound and outbound traffic by default
block in log on vnet0 all head 100
block out log on vnet0 all head 150

### Allow inbound SSH connections
pass in quick on vnet0 proto tcp from any to 10.1.11.87 port = 22 keep state group 100

### Use /etc/ipf/ippool.conf for pools
pass in on vnet0 from pool/13 group 100
pass in on vnet0 from pool/14 group 100

### Allow my box to utilize all UDP, TCP and ICMP services
pass out quick all

Of course, flush and reload from the file.

# ipf -Fa -f /etc/ipf/ipf.conf

Check the running set.

# ipfstat -io
pass out quick on lo0 all
block out log on vnet0 all head 150
pass out quick all
# Group 150
pass in quick on lo0 all
block in log on vnet0 all head 100
# Group 100
pass in quick on vnet0 proto tcp from any to 10.1.11.87/32 port = ssh keep state group 100
pass in on vnet0 from pool/13 to any group 100
pass in on vnet0 from pool/14 to any group 100

Note that after updating the pools you may need to reload them as well.

# ippool -F; ippool -f /etc/ipf/ippool.conf

For me that did not always work, so I also did the following:

# svcadm disable ipfilter
# svcadm refresh ipfilter
# svcadm enable ipfilter

Listing the pools will save you a lot of time when root-causing rules that are actually correct.

# ippool -l
table role = ipf type = tree number = 14
        { 192.168.8.0/24; 10.200.97.82/32; };
table role = ipf type = tree number = 13
        { 10.1.11.34/32; 10.2.10.6/32 };

As always with firewalls: test, test, test.
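Since the block rules above use the log keyword, one quick way to test (a general tip, not specific to pools) is to watch blocked packets live while you try connections from addresses inside and outside the pools:

# ipmon -a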


Jun 18

Solaris 11.1 Update from ISO

Sometimes you don't have a local Solaris IPS repo and just want to update to a newer SRU (Support Repository Update). You can check versions at Oracle Support; at last check this Doc ID contained a good list: Oracle Solaris 11.1 Support Repository Updates (SRU) Index (Doc ID 1501435.1)

Few things to note:

- In this example I updated from SRU 18.5 to SRU 19.6. Most of my updates have actually been all the way from the GA release to the latest SRU. For me, I had to have both the Oracle online repo and the local incremental SRU repo set for the update to catch all possible dependencies.

- If you are updating to the latest SRU and coming from many versions back, you might also see something similar to the below when the update tries to activate the new BE (boot environment):
Error while accessing "/dev/rdsk/c2d1s0": No such file or directory
pkg: unable to activate solaris-1

I have not 100% figured out why this happens, or whether it is specific to LDOMs, but so far when it occurred, one of the following (or a combination of them) allowed me to manually activate the BE: rebooting the updated guest, simply running zpool status, or destroying the newly created BE and redoing the update. I have a suspicion it is as simple as running zpool status, after which the activate works.

Update 7.18.14: On the last upgrade I did, I encountered the unable-to-activate error, and a simple zpool status allowed me to do the beadm activate.
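In other words, when the activation fails, the manual recovery that worked for me was simply (BE name taken from the error above):

# zpool status
# beadm activate solaris-1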

Let's start by checking the existing version. This indicates we are at SRU 18.5 in this case.

# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.1.18.0.5.0    i--

Since I have the luxury of staging on NFS, we might as well mount the repo ISO directly. Another option, since this is an LDOM, is to add a virtual cdrom to the guest. Note below that I am setting both the incremental repo and the Oracle support repo. Use pkg unset-publisher to clear entries you don't want.

# mount -F hsfs /software/solaris/sol-11_1_19_6_0-incr-repo.iso /mnt
# pkg set-publisher -g file:///mnt/repo solaris
# pkg set-publisher -P -g http://pkg.oracle.com/solaris/release/ solaris
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F file:///mnt/repo/
solaris                     origin   online F http://pkg.oracle.com/solaris/release/

Now let's do the update. Since the README for SRU 19.6 explains licensing around Java, we need to include the --accept flag. Be warned the README might contain more information you need to follow for a successful update. In my case, to be extra safe, even though Solaris can maintain multiple BEs (boot environments), I also made a snapshot of the OS on the storage back end.

# pkg update --accept
           Packages to install:   1
            Packages to update:  72
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              73/73     2018/2018    99.8/99.8    0B/s

PHASE                                          ITEMS
Removing old actions                         238/238
Installing new actions                       277/277
Updating modified actions                  3265/3265
Updating package state database                 Done
Updating package cache                         72/72
Updating image state                            Done
Creating fast lookup database                   Done

A clone of solaris-new-2 exists and has been updated and activated.
On the next boot the Boot Environment solaris-new-3 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.


---------------------------------------------------------------------------
NOTE: Please review release notes posted at:

https://support.oracle.com/epmos/faces/DocContentDisplay?id=1501435.1
---------------------------------------------------------------------------

Let's take a look at the boot environments before the reboot.

# beadm list
BE              Active Mountpoint Space  Policy Created
--              ------ ---------- -----  ------ -------
solaris-new-1   -      -          13.82M static 2014-01-30 08:27
solaris-new-2   N      /          3.13M  static 2014-05-19 06:37
solaris-new-3   R      -          12.37G static 2014-06-18 04:55
solaris-orig    -      -          11.73M static 2013-07-09 10:26
solaris-sru14.5 -      -          19.87M static 2014-01-29 06:07
# reboot

After a reboot the Solaris version and BE list look like this.

# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.1.19.0.6.0    i--
# beadm list
BE              Active Mountpoint Space  Policy Created
--              ------ ---------- -----  ------ -------
solaris-new-1   -      -          13.82M static 2014-01-30 08:27
solaris-new-2   -      -          14.45M static 2014-05-19 06:37
solaris-new-3   NR     /          12.49G static 2014-06-18 04:55
solaris-orig    -      -          11.73M static 2013-07-09 10:26
solaris-sru14.5 -      -          19.87M static 2014-01-29 06:07
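Once you are happy with the new BE, you can drop the ISO origin again and unmount it (a cleanup sketch; -G removes just that one origin rather than the whole publisher):

# pkg set-publisher -G file:///mnt/repo solaris
# umount /mnt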
