Jul 10

Solaris Display xterm Remote

I had to do the following to display an X terminal remotely.

Solaris 10:
This initial Solaris install was done with the X packages, so I did not need to install anything specific for xauth or xterm.


$ ssh -X root@t41-ldom2
Password:
Last login: Wed Jul 10 10:07:30 2013
/usr/openwin/bin/xauth: creating new authority file /root/.Xauthority
Oracle Corporation SunOS 5.10 Generic Patch January 2005
root@ldom2:~# /usr/openwin/bin/xterm
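
If the xterm does not come up, a quick sanity check is to confirm that ssh actually set up the forwarded display.  The DISPLAY value below is only a typical example of what X11 forwarding produces:

root@ldom2:~# echo $DISPLAY
localhost:10.0
root@ldom2:~# /usr/openwin/bin/xauth list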

Solaris 11:

First install a couple of packages.  If you previously installed more of the X packages, you might not need these two.

root@ldom1:~# pkg install pkg:/terminal/xterm@271-0.175.1.0.0.24.1317

root@ldom1:~# pkg install  pkg:/x11/session/xauth@1.0.7-0.175.1.0.0.24.1317

Login with -X


$ ssh -X root@t41-ldom1
Password:
Last login: Wed Jul 10 10:11:49 2013
Oracle Corporation SunOS 5.11 11.1 September 2012
You have new mail.
root@ldom1:~# xterm

Jul 10

Using Bash for root user on Solaris 10

By default Solaris 10 uses "/" as root's home directory and plain sh as the shell.  If you want to use /root as the home directory and bash as the shell, for more consistency with Solaris 11, you can do the following.
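
The change itself can be made with usermod; a minimal sketch (run it from the console, and if usermod complains that root is in use, edit root's entry in /etc/passwd directly to match the line shown below):

# usermod -d /root -s /usr/bin/bash root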

...
Oracle Corporation SunOS 5.10 Generic Patch January 2005

root@ldom2:~# grep root /etc/passwd
root:x:0:0:Super-User:/root:/usr/bin/bash

root@ldom2:~# mkdir /root
root@ldom2:~# pwd
/root

root@ldom2:~# cat .profile
#
# Simple profile places /usr/bin at front, followed by /usr/sbin.
#
# Use less(1) or more(1) as the default pager for the man(1) command.
#
export PATH=/usr/bin:/usr/sbin

if [ -f /usr/bin/less ]; then
 export PAGER="/usr/bin/less -ins"
elif [ -f /usr/bin/more ]; then
 export PAGER="/usr/bin/more -s"
fi

#
# Define default prompt to <username>@<hostname>:<path><"($|#) ">
# and print '#' for user "root" and '$' for normal users.
#
# Currently this is only done for bash/pfbash(1).
#

case ${SHELL} in
*bash)
 typeset +x PS1="\u@\h:\w\\$ "
 ;;
esac

root@ldom2:~# cat .bashrc
#
# Define default prompt to <username>@<hostname>:<path><"($|#) ">
# and print '#' for user "root" and '$' for normal users.
#

typeset +x PS1="\u@\h:\w\\$ "

Log out and back in, and your shell should be bash with the prompt fixed as well.

Jul 01

Solaris 11 enable root user

Solaris uses Role-Based Access Control (RBAC), which is a better way to allow system access to defined users and escalate permissions only through roles.  Kind of like sudo.

If you have an environment where you just don't care and want users to access the system in the traditional root manner, you can do the following:

# rolemod -K type=normal root
# grep PermitRoot /etc/ssh/sshd_config
PermitRootLogin yes
# svcadm refresh svc:/network/ssh:default
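
To verify the change took, root's entry in /etc/user_attr should now show type=normal.  The output below is what I would expect; extra attributes may vary:

# grep root /etc/user_attr
root::::type=normal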

Jun 23

Virtualbox Guest Additions Linux

If you experience issues installing the Virtualbox Guest Additions, it could be that the build environment, DKMS, or the kernel headers are not installed.

Messages typically look something like the following:

/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again.  Stop

Install the following packages and retry the Guest Additions install:

# apt-get install build-essential dkms linux-headers-generic
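
If the installer still cannot find the sources after that, you can point it at the headers explicitly with KERN_DIR, as the error message suggests.  The installer path below assumes a typically mounted Guest Additions CD and may differ on your system:

# export KERN_DIR=/usr/src/linux-headers-$(uname -r)
# sh /media/VBOXADDITIONS_*/VBoxLinuxAdditions.run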

Jun 16

Ssh tunnelling via intermediate host

I recently needed to copy files using scp, while not able to copy directly to the target host.  I had to use an intermediate firewall host.  There are a few ways to get this done, and most require netcat (nc) on the intermediate host for copying.
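
For example, with nc available on the intermediate host, scp can reach the target directly via a ProxyCommand; a sketch using the same hosts as below:

$ scp -o ProxyCommand="ssh rrosso@backoffice.domain.com nc %h %p" testfile admin@10.24.0.200:/tmp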

Keep in mind that for just an ssh shell connection, using -t works:

$ ssh -t rrosso@backoffice.domain.com ssh admin@10.24.0.200

If you need scp, below is a way to get this done when netcat is not a possibility.

In a new terminal do this (the command won't return a prompt; leave the terminal open):

$ ssh rrosso@backoffice.domain.com -L 2000:10.24.0.200:22 -N

In a new terminal, ssh as follows:

$ ssh -p 2000 admin@localhost

scp as follows:

$ scp -P 2000 testfile admin@localhost:/tmp

sftp is also possible:

$ sftp -P 2000 admin@localhost

Update 1:  The above will work fine, but you can also consider the following to make things more transparent.

$ vi .ssh/config
Host *
 ServerAliveCountMax 4
 #Note default is 3
 ServerAliveInterval 15
 #Note default is 0
#snip
Host work-tunnel
 HostName backoffice.domain.com
 Port 22

 # SSH Server
 LocalForward localhost:2000 10.24.0.200:22
 User rrosso

# Aliases as follows
Host myhost.domain.com
 HostName localhost
 Port 2000
 User admin

Then start the tunnel connection first (use ssh -v while still troubleshooting):

$ ssh work-tunnel

Leave the above terminal open to keep the tunnel going.  Now you can run commands in new terminals with the same syntax as if no tunnel were required.

$ scp testfile myhost.domain.com:/tmp
$ ssh myhost.domain.com

That should do it for ssh shells.

Example for other ports:

Note that you can forward a lot of other ports in a similar fashion.  Here is an example you could play with.

Host workTunnel
    HostName ssh.domain.com
    Port 5001
    # SMTP Server
    LocalForward localhost:2525 smtp.domain.com:25
    # Corporate Wiki.  Using IP address to show that you can.
    LocalForward localhost:8080 192.168.0.110:8080
    # IMAP Mail Server
    LocalForward localhost:1430  imap.pretendco.com:143
    # Subversion Server
    LocalForward localhost:2222  svn.pretendco.com:22
    # NFS Server
    LocalForward localhost:2049  nfs.pretendco.com:2049
    # SMB/CIFS Server
    LocalForward localhost:3020  smb.pretendco.com:3020
    # SSH Server
    LocalForward localhost:2220  dev.pretendco.com:22
    # VNC Server
    LocalForward localhost:5900  dev.pretendco.com:5900

### Hostname aliases ###
### These allow you to mimic hostnames as they appear at work.
### Note that you don't need to use a FQDN; you can use a short name.
Host smtp.domain.com
    HostName localhost
    Port 2525
Host wiki.domain.com
    HostName localhost
    Port 8080
Host imap.domain.com
    HostName localhost
    Port 1430
Host svn.domain.com
    HostName localhost
    Port 2222
Host nfs.domain.com
    HostName localhost
    Port 2049
Host smb.domain.com
    HostName localhost
    Port 3020
Host dev.domain.com
    HostName localhost
    Port 2220
Host vnc.domain.com
    HostName localhost
    Port 5900

Jun 11

Ubuntu root on ZFS upgrading kernels

Update 1:

On a subsequent kernel upgrade, to 3.8.0.25, I realized that the zfs modules get compiled just fine when the kernel is upgraded.  On Ubuntu 13.04 anyhow.  So all you need to do is fix the grub.cfg file because of the bug mentioned below, where "/ROOT/ubuntu-1/@" is inserted twice.  For a quick fix you could use sed, but be careful to verify your temporary file before copying it in place:

# sed 's/\/ROOT\/ubuntu-1\/@//g' /boot/grub/grub.cfg > /tmp/grub.cfg
# cp /tmp/grub.cfg /boot/grub/grub.cfg
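
Afterwards, verify the duplicate path is gone from the boot lines:

# grep boot=zfs /boot/grub/grub.cfg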

I left the rest of the initial post below in case I ever need to boot from a live CD and redo the kernel modules from scratch.  I suppose that would also work for upgrading the zfs modules from git.

Original post below:

This is a follow-on to my running Ubuntu on a ZFS root file system article (http://blog.ls-al.com/booting-ubuntu-on-a-zfs-root-file-system/).  I decided to document how to do a kernel upgrade in this configuration.  It serves two scenarios: A) a deliberate kernel upgrade, or B) an accidental kernel upgrade that leaves the system unbootable.  I wish I could say I wrote this article in response to scenario A, but no, it was in response to scenario B.

In this case the kernel was upgraded from 3.8.0.19 to 3.8.0.23.

Boot a live CD to start.

Install zfs module:

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap ubuntu-zfs

# modprobe zfs
# dmesg | grep ZFS:
ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5

Import pool, mount and chroot:

# zpool import -d /dev/disk/by-id -R /mnt rpool
# mount /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1 /mnt/boot/grub

# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt /bin/bash --login

Distribution Upgrade:

# locale-gen en_US.UTF-8
# apt-get update
# apt-get dist-upgrade

** At this point remove any old kernels manually if you want.
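
For example, with 3.8.0.19 replaced by 3.8.0.23, something like the following would list the installed kernels and purge the old one.  The package names are a sketch; check the dpkg -l output before purging:

# dpkg -l 'linux-image*' | grep ^ii
# apt-get purge linux-image-3.8.0-19-generic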

Fix grub:

# grub-probe /
 zfs

# ls /boot/grub/i386-pc/zfs*
 /boot/grub/i386-pc/zfs.mod /boot/grub/i386-pc/zfsinfo.mod

# update-initramfs -c -k all

# update-grub

# grep boot=zfs /boot/grub/grub.cfg

** There is currently a bug in the Ubuntu 13.04 grub scripts.  Look at the boot lines; there is a duplicated string /ROOT/ubuntu-1/@/ROOT/ubuntu-1@/ in them.

According to https://github.com/zfsonlinux/zfs/issues/1441 it can be fixed with the grub-install below.  That did not work for me, though, so I fixed it manually.

# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252)
 Installation finished. No error reported.

Unmount and reboot:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool


May 25

Sun ZFS Storage Appliance Simulator on OVM or KVM

For those familiar with the excellent ZFS file system and the Sun (now Oracle) storage products built on ZFS, the ZFS storage appliance interface is very easy to use, and definitely worth considering when purchasing a SAN.

Oracle has a simulator virtual machine to try out the interface.  Unfortunately it only runs on Virtualbox, which is fine for those running Virtualbox on a desktop.  If you would like to run it on something more accessible to multiple users (KVM or OVM), the Solaris-based image has some issues running.

I recently got the Virtualbox image to run on OVM and subsequently also got it to work on KVM.  This is a quick guide on how to get the Virtualbox image to run as a qcow2 image on a KVM hypervisor.

Update: Changed to qed format.  If you don't have qed, qcow2 worked for me also.

As I understand it, there was also a VMware image, but it disappeared from the Oracle website.  I am not sure why Oracle does not publish at least OVM images or make an effort to run the simulator on OVM.  Maybe there is a good reason; it's possible that Oracle wants to discourage its use outside of Virtualbox.  Really not sure.

Stage the Image:
Download the simulator (link should be on this page somewhere): http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-appliance-software/overview/index.html

From the vbox-2011.1.0.0.1.1.8 folder, copy the Sun ZFS Storage 7000-disk1.vmdk file to the KVM host and convert it.

** Note: on my first attempt I used qcow and not qcow2 format and had issues starting the image, so make sure to convert to qcow2 (or qed).

# qemu-img convert "Sun ZFS Storage 7000-disk1.vmdk" -O qed SunZFSStorage7000-d1.qed

# qemu-img info SunZFSStorage7000-d1.qed
image: SunZFSStorage7000-d1.qed
file format: qed
virtual size: 20G (21474836480 bytes)
disk size: 1.9G
cluster_size: 65536

Create Guest:
Create the KVM guest.  Use an IDE disk for SunZFSStorage7000-d1.qed and specify the qed format.

# virsh dumpxml ZfsApp
  <name>ZfsApp</name>
  ...
  (remainder of the guest XML elided)
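
The part of the XML that matters is the disk definition.  It should look something like this sketch (not my exact XML, and the source path is wherever you staged the image):

<disk type='file' device='disk'>
  <driver name='qemu' type='qed'/>
  <source file='/var/lib/libvirt/images/SunZFSStorage7000-d1.qed'/>
  <target dev='hda' bus='ide'/>
</disk>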

Boot the new virtual machine from the sol-11_1-text-x86.iso.  Choose language etc.  Select the shell option when the menu appears.

Update ZFS Image:

Now import and mount the ZFS file system.  Find the correct device name and update bootenv.rc:

In my case the disk device name for the boot disk is c7d0.  I used format to see the disk device name and then found the correct slice for the root partition.  You can use the "par" and "pr" commands in format to see partitions.  In my case we are after /dev/dsk/c7d0s0, and we need to find the corresponding entry in the /devices tree.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c7d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
Specify disk (enter its number): ^C

# ls -l /dev/dsk/c7d0s0
lrwxrwxrwx 1 root root 50 May 26 10:35 /dev/dsk/c7d0s0 -> ../../devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

From above I found the exact device name: /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

Let's update bootenv.rc now.

# zpool import -f system

# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
system 19.9G 2.03G 17.8G 10% 1.00x ONLINE -

# zfs list | grep root
system/ak-nas-2011.04.24.1.0_1-1.8/root 1.26G 14.4G 1.25G legacy

# mkdir /a
# mount -F zfs system/ak-nas-2011.04.24.1.0_1-1.8/root /a

# zfs set readonly=off system/ak-nas-2011.04.24.1.0_1-1.8/root

# cp /etc/path_to_inst /a/etc
# vi /a/boot/solaris/bootenv.rc
...
setprop boot /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

# tail -1 /a/boot/solaris/bootenv.rc
setprop boot /devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a

# bootadm update-archive -R /a
updating /a/platform/i86pc/boot_archive
updating /a/platform/i86pc/amd64/boot_archive

# cd /
root@solaris:~/# umount /a
root@solaris:~/# zpool export system

# init 0

On the next boot, let's make sure Solaris detects the hardware correctly.  When you see grub booting, edit the kernel boot line and add "-arvs".  Then continue booting.

You probably only need to add "-r", but I used "-arvs" to see more output and also get into single-user mode in case I needed to do more.

Once at the single-user mode prompt, just reboot.
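
For reference, the kernel line in grub typically looks something like this after the edit (a sketch; the exact line on the appliance image may differ):

kernel$ /platform/i86pc/kernel/$ISADIR/unix -arvs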

For me at this point the image was booting into the zfs appliance setup and I could configure it.  Also on KVM adding SATA disks worked and the zfs appliance interface could use them for pools.

Update 04.23.14:
I recently tried to update a simulator running on KVM, but the update would not complete.  I am not sure if too many versions had elapsed or if there is something it does not like about running under KVM.  Anyhow, I tried moving an up-to-date (ak-2013.06.05.1.1) image from Virtualbox to KVM, and that did work.

May 14

ZFS on Linux resize rpool

In a previous article I set up Ubuntu 13.04 to run off a ZFS root pool.  During my setup I used only 4G out of an 8G disk for rpool and mentioned we can just resize later.  It turns out ZFS on Linux has a bug, and to get autoexpand to work you need to do an extra step.

Note: Since I could not find a tool (including parted) to resize a ZFS physical partition, and my partition layout on the main disk was simple enough, I just ended up booting a live CD, deleting sda2, and recreating it bigger.

Partitions before I started:

# fdisk -l
...
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63       96389       48163+  be  Solaris boot
/dev/sda2           96390     7903979     3903795   bf  Solaris

Partitions after I deleted and recreated sda2:

# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes
...
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63       96389       48163+  be  Solaris boot
/dev/sda2           96390    16777215     8340413   bf  Solaris
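
For reference, the fdisk session amounts to deleting partition 2 and recreating it with the same start sector (96390) and the same partition type; a sketch of the interactive steps:

# fdisk /dev/sda
Command: d          (delete partition 2)
Command: n          (new primary partition 2; keep start sector 96390, accept the larger default end)
Command: t          (set partition 2 back to type bf, Solaris)
Command: w          (write the table and exit)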

First boot after partition change:

# df -h
Filesystem           Size  Used Avail Use% Mounted on
rootfs               3.7G  3.1G  580M  85% /

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  3.69G  3.06G   638M    83%  1.00x  ONLINE  -

# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool                3.06G   580M    31K  /rpool
rpool/ROOT           3.06G   580M    31K  /rpool/ROOT
rpool/ROOT/ubuntu-1  3.06G   580M  3.06G  /

Try a normal autoexpand:

# zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default

# zpool set autoexpand=on rpool

# zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local

Then I tried a reboot to see if zfs would pick up the autoexpand change.  That did not work, and rebooting is most likely not necessary at all.

I found a bug on the ZFS on Linux list:
http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=commitdiff;h=3b2e400c94eb488cff53cf701554c26d5ebe52e4

Then I tried onlining the rpool device, and it worked.

# zpool online -e rpool /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part2

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  7.94G  3.07G  4.87G    38%  1.00x  ONLINE  -

# df -h | grep rpool
rpool/ROOT/ubuntu-1  7.9G  3.1G  4.8G  40% /
rpool                4.8G     0  4.8G   0% /rpool
rpool/ROOT           4.8G     0  4.8G   0% /rpool/ROOT

# df -h | grep rootfs
rootfs               7.9G  3.1G  4.8G  40% /

May 14

Booting Ubuntu on a ZFS Root File System

I recently tried booting Ubuntu 13.04 with a zfs root file system after the zfs 0.6.1 kernel module was released.  I set this up as a guest on Virtualbox.  Please read the instructions for Ubuntu here:

https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem

The above link is very accurate for Ubuntu, but I had just a couple of small differences for Ubuntu 13.04 64-bit.

System Requirements

  • 64-bit Ubuntu Live CD. (Not the alternate installer, and not the 32-bit installer!)
  • AMD64 or EM64T compatible computer. (ie: x86-64)
  • 8GB disk storage available.
  • 2GB memory minimum.
  • Virtualbox 4.2.12

Step 1: Prepare The Install Environment

1.1 Start the Ubuntu LiveCD and open a terminal at the desktop.

1.2 Switch to a text terminal using Ctrl-Alt-F2.  Shut down X and Unity by inputting these commands at the terminal prompt:

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap ubuntu-zfs

1.3 Check that the ZFS filesystem is installed and available:

# modprobe zfs
# dmesg | grep ZFS:
ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5

Step 2: Disk Partitioning

2.1 Use fdisk to create two partitions (100MB boot and 4G root) on the primary storage device.  You can expand the root volume later.  Device was /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252 in my case.  Your disk names will vary.

The partition table should look like this:

root@ubuntu:~# fdisk -l /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252
Disk /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252: 8589 MB, 8589934592 bytes
Device Boot Start End Blocks Id System
/dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1 * 63 96389 48163+ be Solaris boot
/dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part2 96390 7903979 3903795 bf Solaris
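
If you are starting from a blank disk, the interactive fdisk steps to produce that layout are roughly as follows (a sketch; sector numbers and sizes will vary with your disk and fdisk version):

# fdisk /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252
Command: n          (new primary partition 1, about 100MB, for /boot/grub)
Command: t          (set partition 1 to type be, Solaris boot)
Command: a          (toggle the bootable flag on partition 1)
Command: n          (new primary partition 2, about 4GB, for rpool)
Command: t          (set partition 2 to type bf, Solaris)
Command: w          (write the table and exit)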

Step 3: Disk Formatting

3.1 Format the small boot partition created in Step 2.1 as a filesystem that has stage1 GRUB support like this:

# mke2fs -m 0 -L /boot/grub -j /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1

3.2 Create the root pool on the larger partition:

# zpool create -o ashift=9 rpool /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part2

3.3 Create a "ROOT" filesystem in the root pool:

# zfs create rpool/ROOT

3.4 Create a descendant filesystem for the Ubuntu system:

# zfs create rpool/ROOT/ubuntu-1

3.5 Dismount all ZFS filesystems.

# zfs umount -a

3.6 Set the mountpoint property on the root filesystem:

# zfs set mountpoint=/ rpool/ROOT/ubuntu-1

3.7 Set the bootfs property on the root pool.

# zpool set bootfs=rpool/ROOT/ubuntu-1 rpool

3.8 Export the pool:

# zpool export rpool

Don't skip this step. The system is put into an inconsistent state if this command fails or if you reboot at this point.

Step 4: System Installation

4.1 Import the pool:

# zpool import -d /dev/disk/by-id -R /mnt rpool

4.2 Mount the small boot filesystem for GRUB that was created in step 3.1:

# mkdir -p /mnt/boot/grub
# mount /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1 /mnt/boot/grub

4.3 Install the minimal system:

# debootstrap raring /mnt

Step 5: System Configuration

5.1 Copy these files from the LiveCD environment to the new system:

# cp /etc/hostname /mnt/etc/
# cp /etc/hosts /mnt/etc/

5.2 The /mnt/etc/fstab file should be empty except for a comment. Add this line to the /mnt/etc/fstab file:

/dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1  /boot/grub  auto  defaults  0  1

5.3 Edit the /mnt/etc/network/interfaces file so that it contains something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

Customize this file if the new system is not a DHCP client on the LAN.  After you have rebooted, if you want network-manager to manage your network connection, remove the eth0 lines again.

5.4 Make virtual filesystems in the LiveCD environment visible to the new system and chroot into it:

# mount --bind /dev  /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys  /mnt/sys
# chroot /mnt /bin/bash --login

5.5 Install PPA support in the chroot environment like this:

# locale-gen en_US.UTF-8
# apt-get update
# apt-get install ubuntu-minimal software-properties-common

5.6 Install ZFS in the chroot environment for the new system:

# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic
# apt-get install ubuntu-zfs
# apt-get install grub2-common grub-pc
# apt-get install zfs-initramfs
# apt-get dist-upgrade

Choose /dev/sda if prompted to install the MBR loader.

5.7 Set a root password on the new system:

# passwd root

Hint: If you want the ubuntu-desktop package, then install it after the first reboot. If you install it now, then it will start several processes that must be manually stopped before dismounting.

Step 6: GRUB Installation

Remember: All of Step 6 depends on Step 5.4 and must happen inside the chroot environment.

6.1 Verify that the ZFS root filesystem is recognized by GRUB and that the ZFS modules for GRUB are installed:

# grub-probe /
zfs

# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfs.mod  /boot/grub/i386-pc/zfsinfo.mod

6.2 Refresh the initrd files:

# update-initramfs -c -k all

6.3 Update the boot configuration file:

# update-grub

Verify that boot=zfs appears in the boot configuration file:

# grep boot=zfs /boot/grub/grub.cfg
linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.8.0-19-generic root=/dev/sda2 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/vmlinuz-3.8.0-19-generic root=ZFS=rpool/ROOT/ubuntu-1/@/ROOT/ubuntu-1 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/vmlinuz-3.8.0-19-generic root=ZFS=rpool/ROOT/ubuntu-1/@/ROOT/ubuntu-1 ro single nomodeset boot=zfs $bootfs

Update 1:

If you have issues booting on Ubuntu 13.04: I temporarily fixed two lines manually for the first boot, and then once logged in I fixed /boot/grub/grub.cfg as follows for subsequent boots:

#linux /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/vmlinuz-3.8.0-19-generic root=ZFS=rpool/ROOT/ubuntu-1/@/ROOT/ubuntu-1 ro boot=zfs $bootfs quiet splash $vt_handoff
linux /ROOT/ubuntu-1/@/boot/vmlinuz-3.8.0-19-generic root=/dev/sda2 ro boot=zfs $bootfs quiet splash $vt_handoff

#initrd /ROOT/ubuntu-1/@/ROOT/ubuntu-1@//boot/initrd.img-3.8.0-19-generic
initrd /ROOT/ubuntu-1/@/boot/initrd.img-3.8.0-19-generic

Update 2:

Grub issue on Ubuntu 13.04 is being worked: https://github.com/zfsonlinux/zfs/issues/1441

6.4 Now install the boot loader to the MBR like this:

# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252)
Installation finished. No error reported.

Do not reboot the computer until you get exactly that result message. Note that you are installing the loader to the whole disk, not a partition.

Note: The readlink is required because recent GRUB releases do not dereference symlinks.

Step 7: Cleanup and First Reboot

7.1 Exit from the chroot environment back to the LiveCD environment:

# exit

7.2 Run these commands in the LiveCD environment to dismount all filesystems:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool

The zpool export command must succeed without being forced, or the new system will fail to start.  If you have problems exporting rpool, make sure you really did unmount all the file systems above.

May 02

Migrate OVM Manager to a Different Server

If for whatever reason you ever need to migrate an OVM Manager to a different host, this should be helpful.  I can't imagine this needs to happen often, but I recently did need to cut it over from a Virtualbox instance to an OVM hypervisor.  This is for the OVM 3.2.1 local MySQL (not Oracle XE) DB version.

Find the source UUID:

# more /u01/app/oracle/ovm-manager-3/.config
DBTYPE=MySQL
DBHOST=localhost
SID=ovs
LSNR=49500
OVSSCHEMA=ovs
APEX=8080
WLSADMIN=weblogic
OVSADMIN=admin
COREPORT=54321
UUID=0004fb0000010000434a8179cda0edbd
BUILDID=3.2.1.516

Copy a recent backup and stage it on the new host.  You can make a manual MySQL backup; I just used the latest auto backup:

# tar cpf a.tar AutoFullBackup-20130501_201957
# scp a.tar root@172.16.97.155:/root

Some pre-requisites the install would be pointing out:

# grep oracle /etc/group
dba:x:501:oracle

# tail -3 /etc/security/limits.conf
* soft nofile 16384
* hard nofile 65536
# End of file

[root@hovmmanager a]# ulimit -aH
...
open files                      (-n) 65536
...

Install the Manager on the new host using the old UUID:

# pwd
/mnt/a

# ./runInstaller.sh --uuid 0004fb0000010000434a8179cda0edbd
Oracle VM Manager Release 3.2.1 Installer

Oracle VM Manager Installer log file:
/tmp/ovm-manager-3-install-2013-05-02-115510.log

Please select an installation type:
   1: Simple (includes database if necessary)
   2: Custom (using existing Oracle database)
   3: Uninstall
   4: Help

   Select Number (1-4): Please enter a valid value
   Select Number (1-4): 1

Starting production with local database installation ...

Verifying installation prerequisites ...
Unable to ping hostname 'hovmmanager.keste.com'.
*** WARNING: Recommended memory for the Oracle VM Manager server installation using Local MySql DB is 7680 MB RAM
Group 'dba' does not exist, create user 'oracle' with group 'dba' before installing
hardnofiles should be set to 8192 but was 4096

*** OK, let's try again.  I left the above in to show the prerequisites...

# ./runInstaller.sh --uuid 0004fb0000010000434a8179cda0edbd

Oracle VM Manager Release 3.2.1 Installer

Oracle VM Manager Installer log file:
/tmp/ovm-manager-3-install-2013-05-02-120741.log

Please select an installation type:
   1: Simple (includes database if necessary)
   2: Custom (using existing Oracle database)
   3: Uninstall
   4: Help

   Select Number (1-4): 1

Starting production with local database installation ...

Verifying installation prerequisites ...

One password is used for all users created and used during the installation.
Enter a password for all logins used during the installation:
Enter a password for all logins used during the installation (confirm):

Verifying configuration ...

Start installing the configured components:
   1: Continue
   2: Abort

   Select Number (1-2): 1

Step 1 of 9 : Database Software...
Installing Database Software...
Retrieving MySQL Database 5.5 ...
Unzipping MySQL RPM File ...
Installing MySQL 5.5 RPM package ...
Configuring MySQL Database 5.5 ...
Installing MySQL backup RPM package ...

Step 2 of 9 : Java ...
Installing Java ...

Step 3 of 9 : Database schema ...
Creating database 'ovs' ...
Creating user 'ovs' for database 'ovs'...

Step 4 of 9 : WebLogic ...
Retrieving Oracle WebLogic Server 11g ...
Installing Oracle WebLogic Server 11g ...

Step 5 of 9 : ADF ...
Retrieving Oracle Application Development Framework (ADF) ...
Unzipping Oracle ADF ...
Installing Oracle ADF ...
Installing Oracle ADF Patch...

Step 6 of 9 : Oracle VM  ...
Retrieving Oracle VM Manager Application ...
Extracting Oracle VM Manager Application ...
Installing Oracle VM Manager Core ...

Step 7 of 9 : Domain creation ...
Creating Oracle WebLogic Server domain ...
Starting Oracle WebLogic Server 11g ...
Configuring data source 'OVMDS' ...
Creating Oracle VM Manager user 'admin' ...

Step 8 of 9 : Deploy ...
Deploying Oracle VM Manager Core container ...
Deploying Oracle VM Manager UI Console ...
Deploying Oracle VM Manager Help ...
Granting ovm-admin role to user 'admin' ...
Set Log Rotation ...
Disabling HTTP and enabling HTTPS...
Configuring Https Identity and Trust...
Configuring Weblogic parameters...

Step 9 of 9 : Oracle VM Manager Shell ...
Retrieving Oracle VM Manager Shell & API ...
Extracting Oracle VM Manager Shell & API ...
Installing Oracle VM Manager Shell & API ...

Retrieving Oracle VM Manager Upgrade tool ...
Extracting Oracle VM Manager Upgrade tool ...
Installing Oracle VM Manager Upgrade tool ...

Retrieving Oracle VM Manager CLI tool ...
Extracting Oracle VM Manager CLI tool...
Installing Oracle VM Manager CLI tool ...
Copying Oracle VM Manager shell to '/usr/bin/ovm_shell.sh' ...
Installing ovm_admin.sh in '/u01/app/oracle/ovm-manager-3/bin' ...
Installing ovm_upgrade.sh in '/u01/app/oracle/ovm-manager-3/bin' ...
Enabling Oracle VM Manager service ...
Shutting down Oracle VM Manager instance ...
Restarting Oracle VM Manager instance ...
Waiting for the application to initialize ...
Oracle VM Manager is running ...
Oracle VM Manager installed.

Please wait while WebLogic configures the applications... This can take up to 5 minutes.

Installation Summary
--------------------
Database configuration:
  Database type               : MySQL
  Database host name          : localhost
  Database name               : ovs
  Database listener port      : 49500
  Database user               : ovs

Weblogic Server configuration:
  Administration username     : weblogic

Oracle VM Manager configuration:
  Username                    : admin
  Core management port        : 54321
  UUID                        : 0004fb0000010000434a8179cda0edbd

Passwords:
There are no default passwords for any users. The passwords to use for Oracle VM Manager, Database, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all passwords are the same.

Oracle VM Manager UI:
  https://vmmanager.domain.com:7002/ovm/console
Log in with the user 'admin', and the password you set during the installation.

Please note that you need to install tightvnc-java on this computer to access a virtual machine's console.

For more information about Oracle Virtualization, please visit:
  http://www.oracle.com/virtualization/

Oracle VM Manager installation complete.

Please remove configuration file /tmp/ovm_configBltiZ5.

Restore DB:
# pwd
/u01/app/oracle/mysql/dbbackup

# mv /root/AutoFullBackup-20130501_201957/ ./

# /etc/init.d/ovmm stop
Stopping Oracle VM Manager                                 [  OK  ]
# /etc/init.d/ovmm_mysql stop
Shutting down OVMM MySQL..

# su - oracle
$

$ bash /u01/app/oracle/ovm-manager-3/ovm_shell/tools/RestoreDatabase.sh AutoFullBackup-20130501_201957
INFO: Expanding the backup image...
INFO: Applying logs to the backup snapshot...
INFO: Restoring the backup...
INFO: Success - Done!
INFO: Log of operations performed is available at: /u01/app/oracle/mysql/dbbackup/AutoFullBackup-20130501_201957/Restore.log

IMPORTANT:
      As 'root', please start the OVM Manager database and application using:
               service ovmm_mysql start; service ovmm start

$ logout

# /etc/init.d/ovmm_mysql start
Starting OVMM MySQL..                                      [  OK  ]
# /etc/init.d/ovmm start
