Category: Linux

Apr 12

Nagios on Linux for SPARC

I recently experimented a little with Linux for SPARC (more here: https://oss.oracle.com/projects/linux-sparc/) and found it to be surprisingly stable. One of the environments I support is a pure OVM for SPARC environment without the luxury of Linux, so I am running some open source tools like Nagios and HAProxy on Solaris. Nagios has worked OK but is painful to compile, and there are also some bugs that cause high utilization.

I tried a Linux for SPARC instance, and since it is pretty much like RedHat/Oracle/CentOS, a fair number of packages already exist. Nagios does not, so I compiled it. Suffice it to say, installing dependencies from YUM and compiling was a breeze compared to Solaris.

You can pretty much follow this doc to the letter:
https://assets.nagios.com/downloads/nagioscore/docs/Installing_Nagios_Core_From_Source.pdf

Things to note:
1. By default the firewall does not allow inbound HTTP.
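On a RedHat-style distribution this is usually firewalld; a quick sketch for opening HTTP, assuming firewalld is the active firewall and the default zone is in use:

```shell
# permanently allow inbound HTTP, then reload to apply
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
```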

2. If you have permission issues in the web frontend, or errors such as "Internal Server Error", you can disable SELinux as a quick test and then configure it properly for the Nagios scripts.

# setenforce 0
# chcon -R -t httpd_sys_content_t /usr/local/nagios

3. Recompile the plugins with OpenSSL, since I wanted to do HTTPS checks.

# yum install openssl-devel
# pwd
/usr/src/nagios/nagios-plugins-2.1.1

# ./configure --with-openssl --with-nagios-user=nagios --with-nagios-group=nagios
[..]
                    --with-openssl: yes
# make
# make install

# /usr/local/nagios/libexec/check_http -H 10.2.10.33 -S -p 215 
HTTP OK: HTTP/1.1 200 OK - 2113 bytes in 0.017 second response time |time=0.016925s;;;0.000000 size=2113B;;;0

I made an HTTPS command as follows.

command.cfg
# 'check_https' command definition
define command{
        command_name    check_https
        command_line    $USER1$/check_http -H $HOSTADDRESS$ -S -p $ARG1$
        }

And referenced it as follows.

storage.cfg
define service{
        use                             remote-service         ; Name of service template to use
        host_name                       zfssa1
        service_description             HTTPS
        check_command                   check_https!215
        notifications_enabled           0
        }
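After adding the command and service definitions, it's worth sanity-checking the configuration before restarting Nagios. Using the paths from the source install above:

```shell
# verify the whole Nagios configuration; exits non-zero on errors
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```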


Dec 01

Linux tabbed SSH connection manager

I like to work in a tabbed SSH connection manager, especially when I have hundreds or thousands of machines to connect to. A connection manager like PuTTY keeps track of machine names and login info. Using a tabbed interface like MTPuTTY can make your life a whole lot easier with treeview/groups and side-by-side terminals. And if you can cluster the terminal commands, that is an added bonus.

So far I have not really liked anything in the Linux world as far as SSH connection managers go. Ubuntu does come with PuTTY, which seems to work the same as in the Windows world. The best I could find is an application written in Python called Gnome Connection Manager (gcm). In the Ubuntu 15.10 repos the package is called gnome-connection-manager. Be warned of a few things:

1. The code seems to be at least a few years old, and the website does not seem to have documentation or any kind of discussion. The code is all Python, so you can look at fixing issues and emailing the owner.
2. Be sure to check the paste-right-click setting in ~/.gcm/gcm.conf. This can be a nasty setting if you do not expect it. I copy and paste a lot between a Windows desktop and a Linux guest and almost accidentally pasted garbage into a critical device.
3. Also check auto-copy-selection if you like PuTTY-style behavior, where anything selected in your SSH terminal goes into the copy buffer.
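For reference, a sketch of what those two settings might look like in ~/.gcm/gcm.conf. The key names come from the points above; the section header is an assumption, so check your own file for the exact layout:

```ini
[options]
paste-right-click=False
auto-copy-selection=True
```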

I think if the developer put a little more love into gnome-connection-manager it would definitely be a keeper and a first-rate GNOME app. I have looked at some other options like hotssh, but Remmina is also worth checking out. Unfortunately for me Remmina was very buggy.


Oct 30

Network Manager VPN Connections

I have documented previously that the Linux Network Manager can be used to connect to several different VPN gateways. There are several Network Manager plugins available for the different VPN solutions. The pptp plugin is used frequently, but for newer Cisco gateways you should use the network-manager-openconnect-gnome plugin, and for older Cisco gateways the network-manager-vpnc plugin.

The vpnc plugin also happens to work for Palo Alto GlobalProtect concentrators. For it to work with GlobalProtect gateways you need to:

- Enable X-Auth on your VPN gateway. You will also need the group name and password from the VPN administrator.

- Create a "Cisco compatible" VPN when creating your network manager connection.
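If you prefer the command line over the GUI, the same connection can be sketched with nmcli. The gateway, group and username below are placeholders, and the exact vpn.data key names may vary by plugin version, so treat this as a starting point:

```shell
# create a "Cisco compatible" (vpnc) VPN connection via nmcli (values are placeholders)
nmcli connection add type vpn con-name globalprotect ifname -- vpn-type vpnc \
  vpn.data "IPSec gateway=vpn.example.com, IPSec ID=mygroup, Xauth username=myuser"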


Oct 09

Fedora 20 Alpha Virtualbox Guest Additions

Just a quick note on getting the Guest Additions to work on Fedora 20 Alpha. In my setup I was getting the error below when installing VBOXADDITIONS_4.2.18_88780.

...
Building the VirtualBox Guest Additions kernel modules
The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.
The missing package can be probably installed with
yum install kernel-devel-3.11.3-301.fc20.i686+PAE

Building the main Guest Additions module [FAILED]
...

As you can see, the suggested package above was wrong. After installing the correct package, the install completed OK.

# yum install kernel-PAE-devel.i686


Jul 29

Disown and background a Unix process

Ever run a very large job and regret not starting it in the excellent screen utility? If you don't have something like reptyr or retty, you can do the following.

Suspend the running job with Control-Z, resume it in the background with bg, and then disown it from the terminal. At least it will keep running. And if you want to kick off another job when the disowned process finishes, you can run a little script in a new terminal that checks for the disowned job to finish. Run the new script in screen first, of course.

Background and disown process:

# rsync -av /zfsapp/u06/* /backup/u06/
sending incremental file list
temp01.dbf
^Z
[1]+  Stopped                 rsync -av /zfsapp/u06/* /backup/u06/

# bg
[1]+ rsync -av /zfsapp/u06/* /backup/u06/ &

# disown %1

# ps -ef | grep u06
    root 23903 23902   1 07:05:07 pts/5       0:01 rsync -av /zfsapp/u06/temp01.dbf
    root 23901  2656   1 07:05:07 pts/5       0:01 rsync -av /zfsapp/u06/temp01.dbf
    root 23902 23901   0 07:05:07 pts/5       0:00 rsync -av /zfsapp/u06/temp01.dbf

Check for a process id to finish before starting a new job:

# more cp_u06.sh
#!/bin/bash
while ps -p 23903 > /dev/null;
do
 printf "."
 sleep 60;
done
echo
echo "last rsync finished starting new"
rsync -av /zfsapp/u07/* /backup/u07/
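A variant of the wait loop above uses kill -0, which probes whether a process exists without sending it a signal. A minimal self-contained sketch, with a short sleep standing in for the long-running rsync:

```shell
#!/bin/bash
# start a stand-in for the long-running job so the example is self-contained
sleep 2 &
pid=$!

# poll until the process is gone; kill -0 checks existence without signalling
while kill -0 "$pid" 2>/dev/null; do
    sleep 1
done
echo "last rsync finished starting new"
```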


Jun 23

Virtualbox Guest Additions Linux

If you experience issues installing the VirtualBox Guest Additions, it could be that the build environment, dkms, or the kernel headers are not installed.

Messages typically look something like the following:

/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again.  Stop

Install the following packages and retry the Guest Additions install:

# apt-get install build-essential dkms linux-headers-generic


Jun 16

Ssh tunnelling via intermediate host

I recently needed to copy files using scp while not being able to copy directly to the target host; I had to go through an intermediate firewall host. There are a few ways to get this done, and most require netcat (nc) on the intermediate host.

Keep in mind that using -t for just an ssh shell connection will work:

$ ssh -t rrosso@backoffice.domain.com ssh admin@10.24.0.200

If you need scp, below is a way to get this done when netcat is not a possibility.

In a new terminal do this (the command won't return a prompt; leave the terminal open):

$ ssh rrosso@backoffice.domain.com -L 2000:10.24.0.200:22 -N

In a new terminal, ssh as follows:

$ ssh -p 2000 admin@localhost

Scp as follows:

$ scp -P 2000 testfile admin@localhost:/tmp

Sftp is also possible:

$ sftp -P 2000 admin@localhost
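On newer OpenSSH (7.3 and later) the -J/ProxyJump option makes this kind of hop much simpler, with no separate tunnel terminal needed. Using the same hosts as above:

```shell
# jump through the intermediate host in one step (requires OpenSSH 7.3+)
ssh -J rrosso@backoffice.domain.com admin@10.24.0.200
scp -o ProxyJump=rrosso@backoffice.domain.com testfile admin@10.24.0.200:/tmp
```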

Update 1: The above will work fine, but you can also consider the following to make things more transparent.

$ vi .ssh/config
Host *
 ServerAliveCountMax 4
 #Note default is 3
 ServerAliveInterval 15
 #Note default is 0
#snip
Host work-tunnel
 HostName backoffice.domain.com
 Port 22

 # SSH Server
 LocalForward localhost:2000 10.24.0.200:22
 User rrosso

# Aliases as follows
Host myhost.domain.com
 HostName localhost
 Port 2000
 User admin

Then run the tunnel connection first (use ssh -v while still troubleshooting):

$ ssh work-tunnel

Leave the above terminal open to keep the tunnel going. Now you can run commands in new terminals with the same syntax as if no tunnel were required.

$ scp testfile myhost.domain.com:/tmp
$ ssh myhost.domain.com

That should do it for ssh shells.

Example for other ports:

Note that you can forward a lot of other ports in similar fashion. Here is an example you could play with.

Host workTunnel
    HostName ssh.domain.com
    Port 5001
    # SMTP Server
    LocalForward localhost:2525 smtp.domain.com:25
    # Corporate Wiki.  Using IP address to show that you can.
    LocalForward localhost:8080 192.168.0.110:8080
    # IMAP Mail Server
    LocalForward localhost:1430  imap.pretendco.com:143
    # Subversion Server
    LocalForward localhost:2222  svn.pretendco.com:22
    # NFS Server
    LocalForward localhost:2049  nfs.pretendco.com:2049
    # SMB/CIFS Server
    LocalForward localhost:3020  smb.pretendco.com:3020
    # SSH Server
    LocalForward localhost:2220  dev.pretendco.com:22
    # VNC Server
    LocalForward localhost:5900  dev.pretendco.com:5900

### Hostname aliases ###
### These allow you to mimic hostnames as they appear at work.
### Note that you don't need to use a FQDN; you can use a short name.
Host smtp.domain.com
    HostName localhost
    Port 2525
Host wiki.domain.com
    HostName localhost
    Port 8080
Host imap.domain.com
    HostName localhost
    Port 1430
Host svn.domain.com
    HostName localhost
    Port 2222
Host nfs.domain.com
    HostName localhost
    Port 2049
Host smb.domain.com
    HostName localhost
    Port 3020
Host dev.domain.com
    HostName localhost
    Port 2220
Host vnc.domain.com
    HostName localhost
    Port 5900


Jun 11

Ubuntu root on ZFS upgrading kernels

Update 1:

On a subsequent kernel upgrade, to 3.8.0.25, I realized that the zfs modules get compiled just fine when the kernel is upgraded, on Ubuntu 13.04 anyhow. So all you need to do is fix the grub.cfg file, because of the bug mentioned below where "/ROOT/ubuntu-1/@" is inserted twice. As a quick fix you can use sed, but be careful to verify your temporary file before copying it into place:

# sed 's/\/ROOT\/ubuntu-1\/@//g' /boot/grub/grub.cfg > /tmp/grub.cfg
# cp /tmp/grub.cfg /boot/grub/grub.cfg
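To see what the sed expression does, here it is applied to a sample boot path containing the duplicated string:

```shell
# the duplicated "/ROOT/ubuntu-1/@" prefix is stripped, leaving the correct path
echo '/ROOT/ubuntu-1/@/ROOT/ubuntu-1@/boot/vmlinuz' | sed 's/\/ROOT\/ubuntu-1\/@//g'
# prints /ROOT/ubuntu-1@/boot/vmlinuz
```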

I left the rest of the initial post below in case I ever need to really boot with a live CD and redo the kernel modules from scratch. Or maybe that would also work for upgrading the zfs modules from git, I suppose.

Original post below:

This is a follow-on to my article on running Ubuntu on a ZFS root file system (http://blog.ls-al.com/booting-ubuntu-on-a-zfs-root-file-system/). I decided to document how to do a kernel upgrade in this configuration. It covers two scenarios: a) a deliberate kernel upgrade, or b) an accidental kernel upgrade after which the system no longer boots. I wish I could say I wrote this article in response to scenario a, but no, it was in response to scenario b.

In this case the kernel was upgraded from 3.8.0.19 to 3.8.0.23.

Boot a live CD to start.

Install zfs module:

$ sudo -i
# /etc/init.d/lightdm stop
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install debootstrap ubuntu-zfs

# modprobe zfs
# dmesg | grep ZFS:
ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5

Import pool, mount and chroot:

# zpool import -d /dev/disk/by-id -R /mnt rpool
# mount /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252-part1 /mnt/boot/grub

# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt /bin/bash --login

Distribution Upgrade:

# locale-gen en_US.UTF-8
# apt-get update
# apt-get dist-upgrade

** At this point remove any old kernels manually if you want.

Fix grub:

# grub-probe /
 zfs

# ls /boot/grub/i386-pc/zfs*
 /boot/grub/i386-pc/zfs.mod /boot/grub/i386-pc/zfsinfo.mod

# update-initramfs -c -k all

# update-grub

# grep boot=zfs /boot/grub/grub.cfg

** There is currently a bug in the Ubuntu 13.04 grub scripts. Look at the boot lines: there is a duplicated string /ROOT/ubuntu-1/@/ROOT/ubuntu-1@/ in there.

According to https://github.com/zfsonlinux/zfs/issues/1441 it can be fixed with grub-install as shown below. That did not work for me though, so I fixed it manually.

# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_VBb59e0ffb-68fb0252)
 Installation finished. No error reported.

Unmount and reboot:

# umount /mnt/boot/grub
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# zfs umount -a
# zpool export rpool


Apr 12

Display X After User Switch

Sometimes you find yourself having to redirect the X display to a different host, but "ssh -X hostname" will not work since you had to switch users. For instance, you logged in to a host as root and afterwards ran "su - oracle".

Example:

$ ssh root@host1.domain.com -X
# echo $DISPLAY
localhost:10.0
# xauth list
host1.domain.com/unix:11  MIT-MAGIC-COOKIE-1  95e4b887f2f6d132897aedbbbe297309
host1.domain.com/unix:10  MIT-MAGIC-COOKIE-1  961e9e854127e3c70ff8804a5eb57f7e
# su - oracle
$ xauth add host1.domain.com/unix:10  MIT-MAGIC-COOKIE-1  961e9e854127e3c70ff8804a5eb57f7e
xauth:  creating new authority file /home/oracle/.Xauthority

Then trying xclock or xterm worked for me. If you still have a problem, also try:

$ export DISPLAY=localhost:10.0


Feb 07

Curl command line downloads

If you need a command line download on Linux there are several options. Wget is a very good option for a simple download, but I prefer curl, as I have had more success with logins, cookies and uploads than with wget.

Plus, if you need to go further and integrate something with Python or PHP, the curl libraries are awesome.

Simple download:

$ curl -o myfile.iso "http://server.com/file.iso"
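Two flags I find useful with plain downloads: -L to follow redirects and -C - to resume an interrupted transfer. Using the same placeholder URL:

```shell
# follow redirects (-L) and resume a partial download where it left off (-C -)
curl -L -C - -o myfile.iso "http://server.com/file.iso"
```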

With login:

$ curl -o myfile.iso -u user:password "https://content.server.com/isos/file-x86_64-dvd.iso"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 100 3509M  100 3509M    0     0  6204k      0  0:09:39  0:09:39 --:--:-- 7106k

** Note: sometimes to get the correct download string you will need to log in to the site with your browser and copy the download link location. Or, in some cases, actually initiate the download with the browser and then copy the link from the browser's download window.

If you are after a more permanent web or scripted solution you can use curl to login and save the cookie.  Then subsequently download the file using the generated cookie.  This requires more experimentation with your particular site.

Login and save a cookie:

$ curl -c cookie.txt -u user:password https://secDownload.mybank.com/

List files:

$ curl -b cookie.txt -u user:password https://secDownload.mybank.com/

Upload a file:

$ curl -b cookie.txt --upload-file test.rrosso https://secDownload.mybank.com/

If you are behind a corporate firewall you might still be able to use curl.

Behind a socks5 proxy:

$ curl --socks5 proxy.domain.com -U rrosso:pwd -c cookie2.txt -u site-user:pwd https://secDownload.mybank.com/   (login and save a cookie)
$ curl --socks5 proxy.domain.com -U rrosso:pwd -b cookie2.txt -u site-user:pwd https://secDownload.mybank.com/   (list files using saved cookie)
$ curl --socks5 proxy.domain.com -U rrosso:pwd -b cookie2.txt --upload-file test.rrosso https://secDownload.mybank.com   (upload a file)

Behind a squid proxy:

$ curl -x proxy.domain.com:3128 -U rrosso:pwd -o ARP08110610926072.txt_171317.RECVD -u site-user:pwd https://secDownload.mybank.com/ARP08110610926072.txt_171317.RECVD

$ curl -x proxy.domain.com:3128 -U rrosso:pwd -c cookie2.txt -u site-user:pwd https://secDownload.mybank.com
Virtual user site-user logged in.

$ curl -x proxy.domain.com:3128 -U rrosso:pwd -b cookie2.txt -u site-user:pwd https://secDownload.mybank.com
total 38
...

$ curl -x proxy.domain.com:3128 -U rrosso:pwd -b cookie2.txt -u site-user:pwd --upload-file rrosso-test https://secDownload.mybank.com
