Dec 10

Nagios Downtime using a ServiceGroup

This is not a complete script; I was only interested in scheduling downtime for a servicegroup through the Nagios command file (NAGIOSCMD). If you have ever used Nagios in a large environment the reason is obvious: scheduling and cancelling downtimes is painful.

Since I wanted to be able to delete multiple downtime entries, I used a feature added somewhere in the 3.x releases called DEL_DOWNTIME_BY_START_TIME_COMMENT. Downtime is cancelled by providing the start time and the comment.
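Both adding and cancelling hinge on converting the --begin string to the same Unix epoch value, because DEL_DOWNTIME_BY_START_TIME_COMMENT matches on the exact start time. A minimal sketch of that conversion (the helper name is mine, not part of the script below):

```python
import time
from datetime import datetime

def to_epoch(begin):
    # "2016-12-10 8:36" -> Unix epoch seconds (local time), as Nagios expects
    return time.mktime(datetime.strptime(begin, '%Y-%m-%d %H:%M').timetuple())

# The cancel only matches if this value is identical to the start time
# used when the downtime was scheduled.
print(to_epoch('2016-12-10 8:36'))
```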

import sys, argparse, time
from datetime import datetime, timedelta

##  EXAMPLES
## # python nagios_downtime.py --action add --Servicegroup PRDVIPS --begin "2016-12-10 8:36" --duration 10 --author rrosso --comment "Test Scheduling Downtime in Nagios"
## [1481387741.0] SCHEDULE_SERVICEGROUP_SVC_DOWNTIME;PRDVIPS;1481387760.0;1481388360.0;1;0;600;rrosso;Test Scheduling Downtime in Nagios
## # python nagios_downtime.py --action cancel --begin "2016-12-10 8:36" --comment="Test Scheduling Downtime in Nagios"
## [1481387769.0] DEL_DOWNTIME_BY_START_TIME_COMMENT;1481387760.0;Test Scheduling Downtime in Nagios
##

VERSION = "0.4"
VERDATE = "2016-12-10"

NAGIOSCMD =  "/usr/local/nagios/var/rw/nagios.cmd"
now = datetime.now()
cmd = '[' + str(time.mktime(now.timetuple())) + '] '
execline = ''

parser = argparse.ArgumentParser(description='Nagios Downtime Scheduler.')
parser.add_argument('--action', dest='action', help='use add or cancel as action for downtime entries', required=True)
parser.add_argument('--Servicegroup', dest='servicegroup', help='Schedule downtime for a specific servicegroup')
parser.add_argument('--duration', dest='duration', help='Duration of downtime, in minutes.')
parser.add_argument('--begin', dest='begin', help='Beginning of Downtime. ex: 2016-12-10 18:10', required=True)
parser.add_argument('--author', dest='author', default='admin', help='Author: Who is scheduling the downtime?')
parser.add_argument('--comment', dest='comment', help='Comment: Reason for scheduling the downtime.', required=True)
parser.add_argument('--dryrun', action='store_true', help='Dry run.  Do not do anything but show action.')

args = parser.parse_args()

## need some argument checking here.  what is required what conflicts etc..
if (args.action not in ['add','cancel']):
  sys.exit(1)

if args.begin is not None:
  #check for proper format here...
  #beginTime = datetime.datetime(2016,12,8,13,0).strftime('%s')
  beginTime = datetime.strptime(args.begin,'%Y-%m-%d %H:%M')

if (args.action == 'add'):
  if (args.servicegroup):
    cmd = cmd + 'SCHEDULE_SERVICEGROUP_SVC_DOWNTIME;'
    endTime = beginTime + timedelta(minutes=int(args.duration))
    execline = cmd + args.servicegroup + ';' + str(time.mktime(beginTime.timetuple())) + ';' + str(time.mktime(endTime.timetuple())) + ';1;0;' + str(int(args.duration) * 60) + ';' + args.author + ';' + args.comment + '\n'

if (args.action == 'cancel'):
  cmd = cmd + 'DEL_DOWNTIME_BY_START_TIME_COMMENT;'
  execline=cmd + str(time.mktime(beginTime.timetuple())) + ';' + args.comment + '\n'

print 'Nagios CMD interaction will be: ' + execline

if (args.dryrun):
  print "Note: this is a dry run, so not committing the transaction"
else:
  print "Note: this is not a dry run (--dryrun was not used), committing the transaction"
  f = open(NAGIOSCMD,'w')
  f.write(execline)
  f.close()


Dec 06

Unable to negotiate ssh-dss

OpenSSH disables DSA (ssh-dss) host keys by default as of version 7.0. To work around it:

Update in ~/.ssh/config:

Host your-host
    HostKeyAlgorithms +ssh-dss

or

ssh -oHostKeyAlgorithms=+ssh-dss user@host


Nov 01

Docker Test Environment Variable

I did a simple test of how to utilize environment variables in Docker images and recorded a few notes here.

 

Install Docker on an Ubuntu 16.04.1 VirtualBox guest for this test.
https://docs.docker.com/engine/installation/linux/ubuntulinux/

root@docker:~# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

root@docker:~# echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
deb https://apt.dockerproject.org/repo ubuntu-xenial main

root@docker:~# apt-get update
root@docker:~# apt-cache policy docker-engine

root@docker:~# apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Simple docker test.

root@docker:~#  docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Fix docker permissions so a normal user can use it.

 
root@docker:~# groupadd docker
groupadd: group 'docker' already exists
root@docker:~# usermod -aG docker rrosso

rrosso@docker:~$ docker run hello-world

Hello from Docker!
[.. snip, same output as the first run ..]

Make sure docker runs at OS boot time.

 
root@docker:~# systemctl enable docker
Synchronizing state of docker.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable docker

Test simple python web app.
https://docs.docker.com/engine/tutorials/usingdocker/

rrosso@docker:~$ docker run -d -P training/webapp python app.py
Unable to find image 'training/webapp:latest' locally
latest: Pulling from training/webapp
e190868d63f8: Pull complete 
909cd34c6fd7: Pull complete 
0b9bfabab7c1: Pull complete 
a3ed95caeb02: Pull complete 
10bbbc0fc0ff: Pull complete 
fca59b508e9f: Pull complete 
e7ae2541b15b: Pull complete 
9dd97ef58ce9: Pull complete 
a4c1b0cb7af7: Pull complete 
Digest: sha256:06e9c1983bd6d5db5fba376ccd63bfa529e8d02f23d5079b8f74a616308fb11d
Status: Downloaded newer image for training/webapp:latest
331df8667f005e40555944b7e61108525e843b6262275808f016695aacd7fc67

rrosso@docker:~$ docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
331df8667f00        training/webapp     "python app.py"     34 seconds ago      Up 33 seconds       0.0.0.0:32768->5000/tcp   focused_jang

rrosso@docker:~$ docker inspect focused_jang
[
    {
        "Id": "331df8667f005e40555944b7e61108525e843b6262275808f016695aacd7fc67",
        "Created": "2016-10-31T21:18:32.596139908Z",
        "Path": "python",
        "Args": [
            "app.py"
        ],
        "State": {
            "Status": "running",
[.. snip ..]
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02"
                }
            }
        }
    }
]

rrosso@docker:~$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' focused_jang
172.17.0.2

Try an environment variable.

 
rrosso@docker:~$ docker run -e "PROVIDER=app1" -d -P training/webapp python app.py
c300d87ef0a9ad3ca5f3c40fd5ae7d54f095ada52d1f092e991a2271652b573b
rrosso@docker:~$ docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
c300d87ef0a9        training/webapp     "python app.py"     11 seconds ago      Up 10 seconds       0.0.0.0:32770->5000/tcp   sleepy_varahamihira

Browser response: http://192.168.1.134:32770/
Hello app1!

Make a change to app.py, then play with restart/commit to understand which changes are volatile and which persist.

 
rrosso@docker:~$ docker stop sleepy_varahamihira
sleepy_varahamihira
rrosso@docker:~$ docker run -t -i training/webapp /bin/bash
root@3c94c01cc795:/opt/webapp# vi app.py 
root@3c94c01cc795:/opt/webapp# cat app.py 
import os

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    provider = str(os.environ.get('PROVIDER', 'world'))
    return 'Riaan added '+provider+'!'

if __name__ == '__main__':
    # Bind to PORT if defined, otherwise default to 5000.
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
root@3c94c01cc795:/opt/webapp# exit

rrosso@docker:~$ docker run -e "PROVIDER=app1" -d -P training/webapp python app.py
8116836ea65f7254e81671a58b68cf263b42427b066a1c3cfc971c70303b614d

rrosso@docker:~$ docker stop evil_colden
evil_colden

rrosso@docker:~$ docker run -e "PROVIDER=app1" -d -P training/webapp python app.py
190207e1eadad71205256a5127de982ad837dbad5a41896546ba480e2416ba20

rrosso@docker:~$ docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
190207e1eada        training/webapp     "python app.py"     8 seconds ago       Up 7 seconds        0.0.0.0:32772->5000/tcp   condescending_wilson

rrosso@docker:~$ docker stop condescending_wilson
condescending_wilson

rrosso@docker:~$ docker run -t -i training/webapp /bin/bash
root@f6dd40e4a838:/opt/webapp# vi app.py 
root@f6dd40e4a838:/opt/webapp# exit

rrosso@docker:~$ docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                     PORTS               NAMES
f6dd40e4a838        training/webapp     "/bin/bash"         About a minute ago   Exited (0) 5 seconds ago                       determined_cray

rrosso@docker:~$ docker commit f6dd40e4a838 training/webapp
sha256:8392632ac934525ae846d6a2e0284a52de7cbfa1a682bf8a9804966c5c3e15c9

rrosso@docker:~$ docker run -e "PROVIDER=app1" -d -P training/webapp python app.py
594d6edb3ab7400ae3c38584c0d48276f24973a4328c3f60c1b1bc4cce16f17a

rrosso@docker:~$ docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
594d6edb3ab7        training/webapp     "python app.py"     5 seconds ago       Up 4 seconds        0.0.0.0:32773->5000/tcp   focused_hawking

Check changes in browser: http://192.168.1.134:32773/
Riaan added app1!

NOTES/IDEAS:

http://stackoverflow.com/questions/30494050/how-to-pass-environment-variables-to-docker-containers

Question:
# Dockerfile
ENV DATABASE_URL amazon:rds/connection?string

Answer:
You can pass environment variables to your containers with the -e flag. An example from a startup script:

sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name


Oct 15

Amazon SES Submission

I have experimented with using the Amazon API to submit email to SES. As opposed to SMTP, the API may provide advantages in some situations.

I jotted down some notes on how I used the API and CommandPool, since it took me a while to get it working.

Out of scope:
1. SES and DKIM setup.
2. Requesting/Upping 1 email/sec SES rate limits.
3. SES sandbox and TO/FROM approvals.
4. Environment setup ie PHP 7/Nginx.
5. KEY/SECRET obfuscation from script ie store in hidden home folder.
6. API v1 and v2 versus v3 changes.

Input file so I can pass to CURL for testing.

$ cat emails.inp 
[
{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"},{"email":"success@simulator.amazonses.com"}
]

Run it from curl.

$ curl -H "Content-Type: application/json" -X POST --data-binary @emails.inp http://myserver.com/ses/amazon-commandpool_v2.php
About to send item 0 to: success@simulator.amazonses.com
About to send item 1 to: success@simulator.amazonses.com
About to send item 2 to: success@simulator.amazonses.com
About to send item 3 to: success@simulator.amazonses.com
About to send item 4 to: success@simulator.amazonses.com
Completed 0: 01000157c88196ba-e4a8d71a-c2a3-4d58-951a-e43639f29e05-000000 and result was: 200 
About to send item 5 to: success@simulator.amazonses.com
Completed 2: 01000157c88196b8-4b4bb9c4-a5af-4cd3-ad0c-808817910d12-000000 and result was: 200 
About to send item 6 to: success@simulator.amazonses.com
Completed 1: 01000157c88196b9-33fb8303-f7b5-451e-acd3-619b239053e6-000000 and result was: 200 
About to send item 7 to: success@simulator.amazonses.com
Completed 3: 01000157c88196b6-9a5d8224-441a-4f12-8b80-137716e7e11c-000000 and result was: 200 
About to send item 8 to: success@simulator.amazonses.com
Completed 4: 01000157c88196ba-c8cebab9-4397-490c-b425-10dcda4b99d4-000000 and result was: 200 
About to send item 9 to: success@simulator.amazonses.com
Completed 5: 01000157c881972b-3e243162-1eb4-4ad3-8c39-16ce31e10144-000000 and result was: 200 
Completed 8: 01000157c881975b-64270cf4-9666-408b-b16f-510a6cb4ff70-000000 and result was: 200 
Completed 7: 01000157c881974d-a2f67bfb-6200-44be-9164-16d099ca6dcf-000000 and result was: 200 
Completed 6: 01000157c8819734-8b3c6347-4a10-44b3-bd59-4e75fc7a2e9f-000000 and result was: 200 
Completed 9: 01000157c88197bc-574976c4-22e0-41ef-a5b6-1e810b666b58-000000 and result was: 200 

Total Execution Time: 0.44663500785828 Sec(s)

Script to accept emails in JSON and use the CommandPool API for asynchronous promises.

# cat amazon-commandpool_v2.php
<?php
$time_start = microtime(true);

if(strcasecmp($_SERVER['REQUEST_METHOD'], 'POST') != 0){
    //throw new Exception('Request method must be POST!');
    exit('Request method must be POST!');
}

//Make sure that the content type of the POST request has been set to application/json
$contentType = isset($_SERVER["CONTENT_TYPE"]) ? trim($_SERVER["CONTENT_TYPE"]) : '';
if(strcasecmp($contentType, 'application/json') != 0){
    //throw new Exception('Content type must be: application/json');
    exit('Content type must be: application/json');
}

//Receive the RAW post data.
$content = trim(file_get_contents("php://input"));

//Attempt to decode the incoming RAW post data from JSON.
$decoded = json_decode($content, true);

//If json_decode failed, the JSON is invalid.
if(!is_array($decoded)){
    //throw new Exception('Received content contained invalid JSON!');
    exit('Received content contained invalid JSON!');
}

require '/sites1/myserver.com/web/ses/aws-autoloader.php';

use Aws\Exception\AwsException;
use Aws\Ses\SesClient;
use Aws\CommandPool;
use Aws\CommandInterface;
use Aws\ResultInterface;
use GuzzleHttp\Promise\PromiseInterface;

define('REGION','us-east-1');

$client = SesClient::factory([
    'version'=> 'latest',
    'region' => REGION,
    'credentials' => [
      'key'    => 'MYKEY',
      'secret' => 'MYSECRET',
    ]
]);

define('SENDER', 'myfromaddress@mydomain.com');
//define('RECIPIENT', array('success@simulator.amazonses.com','success@simulator.amazonses.com'));
//define('RECIPIENT', array('success@simulator.amazonses.com'));
define('SUBJECT','Amazon SES test (AWS SDK for PHP)');
define('BODY','This email was sent with Amazon SES using the AWS SDK for PHP.');

//$addresses = RECIPIENT;
$addresses = array_column($decoded, 'email');

$commandGenerator = function ($addresses) use ($client) {
    foreach ($addresses as $address) {
        // Yield a command that will be executed by the pool.
	$request = array();
	$request['Source'] = SENDER;
	$request['Destination']['ToAddresses'] = array($address);
	$request['Message']['Subject']['Data'] = SUBJECT;
	$request['Message']['Body']['Text']['Data'] = BODY;
        yield $client->getCommand('SendEmail', 
            $request
        );
    }
};

$commands = $commandGenerator($addresses);

$pool = new CommandPool($client, $commands, [
  'concurrency' => 5,
  'before' => function (CommandInterface $cmd, $iterKey) {
	$a = $cmd->toArray();
        echo "About to send item {$iterKey} to: " 
            . $a['Destination']['ToAddresses'][0] . "\n";
            //. print_r($cmd->toArray(), true) . "\n";
    },
  'fulfilled' => function (
        ResultInterface $result,
        $iterKey,
        PromiseInterface $aggregatePromise
    ) {
        echo "Completed {$iterKey}: {$result['MessageId']} and result was: {$result['@metadata']['statusCode']} \n";
        //echo "Completed {$iterKey}: {$result}\n";
    },
  'rejected' => function (
        AwsException $reason,
        $iterKey,
        PromiseInterface $aggregatePromise
    ) {
        echo "Failed {$iterKey}: {$reason}\n";
    },

]);

// Initiate the pool transfers
$promise = $pool->promise();

// Force the pool to complete synchronously
$promise->wait();

// Or you can chain then calls off of the pool
//$promise->then(function() { echo "Done\n"; });

$time_end = microtime(true);
//dividing with 60 will give the execution time in minutes other wise seconds
//$execution_time = ($time_end - $time_start)/60;
$execution_time = ($time_end - $time_start);
//execution time of the script
echo "\nTotal Execution Time: ".$execution_time." Sec(s)\n";
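The CommandPool pattern above (bounded concurrency over a stream of commands) maps onto Python's concurrent.futures as well. This is only an illustrative sketch with a stubbed-out send function, not the AWS SDK:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def send_email(address):
    # Stub standing in for an SES SendEmail call; returns a fake 200 status.
    return (address, 200)

addresses = ['success@simulator.amazonses.com'] * 10

# max_workers plays the role of CommandPool's 'concurrency' => 5 setting:
# at most five sends are in flight at any one time.
results = []
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = {pool.submit(send_email, a): i for i, a in enumerate(addresses)}
    for fut in as_completed(futures):
        addr, status = fut.result()
        results.append((futures[fut], addr, status))
        print("Completed %d: %s -> %d" % (futures[fut], addr, status))
```

As with CommandPool, completions arrive out of order, which is why each result carries its original index.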

Test from a web page as opposed to CURL command line.

# cat amazon-commandpool_v2_test.php
<?php
//API Url
$url = 'http://myserver.com/ses/amazon-commandpool_v2.php';

//Initiate cURL.
$ch = curl_init($url);

//The JSON data.
$jsonData = array(
    array('email' => 'success@simulator.amazonses.com'),
    array('email' => 'success@simulator.amazonses.com'),
    array('email' => 'success@simulator.amazonses.com')
);
 
//Encode the array into JSON.
$jsonDataEncoded = json_encode($jsonData);
 
//Tell cURL that we want to send a POST request.
curl_setopt($ch, CURLOPT_POST, 1);
 
//Attach our encoded JSON string to the POST fields.
curl_setopt($ch, CURLOPT_POSTFIELDS, $jsonDataEncoded);
 
//Set the content type to application/json
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json')); 
 
//Execute the request
$result = curl_exec($ch);
?>


Oct 12

SFTP Containment Solaris 10

Using the SSH match directive it is possible to contain a user to an isolated folder.

This article shows how to get this done on Solaris 10. Of course a more up-to-date version of Solaris would be preferable, but in this case Solaris 10 is required by the application workload.

Your mileage may vary, and you could probably simplify this slightly. In our case the /apps tree cannot be owned by root, and we have several apps nodes, so we did it this way so that all apps nodes see the uploaded files.

To contain end users to an isolated folder, the following must be true.

1. An SSH version new enough to allow "Match" blocks. Solaris 10 needs patching for a new enough sshd.

2. In our case SFTP containment directly to a path under our /apps tree is not possible, since the chroot's top-level directory needs to be root-owned.

3. To accommodate the above, we create /opt/svcaccxfr and then lofs (bind) mount /apps/ebs11i/appltop/xxnp/11.5.0/interfaces/svcaccxfr onto /opt/svcaccxfr.

4. Ensure the permissions are correct under the svcaccxfr folder. The uploads folder needs the correct user and group ownership and mode 775. In our case this was set from a DB node, which mounts the whole /apps folder over NFSv3. When /apps is NFSv4, as on the apps nodes, you may have issues setting permissions.

5. We also needed to set an exception in our clone process to keep /apps/ebs11i/appltop/xxnp/11.5.0/interfaces/svcaccxfr owned root:root, since the clone process was setting the whole /apps tree recursively to the apps user and group. Root ownership is a requirement for the SFTP Match chroot.

# ssh -V
Sun_SSH_1.1.7, SSH protocols 1.5/2.0, OpenSSL 0x1000113f

# grep svcaccxfr /etc/passwd 
svcaccxfr:x:403:340:Accounting xfr sftp account:/opt/svcaccxfr:/bin/false

# tail -10 /etc/ssh/sshd_config
Match User svcaccxfr
  #ChrootDirectory /apps/ebs11i/appltop/xxnp/11.5.0/interfaces/svcaccxfr
  ChrootDirectory /opt/svcaccxfr
  AllowTCPForwarding no
  X11Forwarding no
  ForceCommand internal-sftp -u 017 -l info

# ls -l /apps/ebs11i/appltop/xxnp/11.5.0/interfaces/svcaccxfr
total 3
drwxrwxr-x   2 ebsppe_a ebsppe         4 Oct 11 14:14 uploads

# ls -l /apps/ebs11i/appltop/xxnp/11.5.0/interfaces/ | grep svcaccxfr
drwxr-xr-x   3 root     root           3 Oct 11 12:38 svcaccxfr

# grep svcacc /etc/vfstab
## Special lofs/bind mount for SFTP containment svcaccxfr
/apps/ebs11i/appltop/xxnp/11.5.0/interfaces/svcaccxfr - /opt/svcaccxfr  lofs    -       yes      -

# ls -l /opt | grep svcaccxfr
drwxr-xr-x   3 root     root           3 Oct 11 12:38 svcaccxfr

# ls -l /opt/svcaccxfr
total 3
drwxrwxr-x   2 ebsppe_a ebsppe         4 Oct 11 14:14 uploads


Aug 22

Save WeChat Video Clip

Saving a video to your computer is not as easy in WeChat as one might think. In WhatsApp this is easier, since the files are also stored locally.

In WeChat I did the following:

1. Open web.wechat.com. I used Firefox 47.0.1.
2. Scan the QR code from WeChat on the iPhone to log in.
3. Forward the video to your own ID so it shows up in WeChat Web.
4. Right-click and play the video.
5. Right-click and save the video.

** For me, IE 11 and Chrome (on several operating systems) did not work; most of them saved 0-byte files.


Aug 15

PAC Manager Double Click Selection

I have been very happy with PAC as a terminal/SSH manager, but the selection behavior always bugged me. I just noticed it has a configuration option that I am going to try.

https://sourceforge.net/p/pacmanager/discussion/1076054/thread/5bed3904/

Select-by-word characters:
From: \.:_\/-A-Za-z0-9
To: -_.:\/A-Za-z0-9

Update: 12/12/16
Changed to: -_.:\/A-Za-z0-9


Jun 27

Linux for SPARC Boot Issue

I am running an Oracle Linux for SPARC ldom and had a couple of boot issues recently. These notes may help with getting past similar boot issues.

The first issue was caused by a cdrom attached to the ldom whose path was no longer valid, like an unmounted NFS path for example. That caused a kernel dump, and per the development list it will be taken as a bug and fixed in a future update. The workaround was simple once I figured out what was choking: just remove the invalid disk attachment.

The second issue was much trickier; not until I spotted a selinux message in the kernel panics did I realize that a recent change I had made to the selinux profiles must have left a config missing. The fix was to disable selinux, but that was not as easy as I thought. Here is what I did; it may help someone else trying to pass kernel boot parameters.
1. Disable auto-boot.

# ldm set-var auto-boot\?=false linuxsparc_ldom

2. Get to the boot prompt. This was the tricky part, because the Linux kernel started booting as soon as I typed boot or boot disk. Either I did not have enough time before SILO booted the kernel, or it has an issue with the usual Esc or Shift keystrokes to pause bootup; whatever keystrokes I tried, it kept booting. What I ended up doing was passing "-s". I did "boot disk -s", which in the normal SPARC world means boot the kernel in single-user mode. I did not really expect OpenBoot to pass single-user mode through to the Linux kernel, but SILO rejects "-s" as an unknown image name and stops at the boot: prompt.

{0} ok boot disk -s
Boot device: /virtual-devices@100/channel-devices@200/disk@0  File and args: -s
SILO Version 1.4.14 - Rel: 4.0.18.el6
\
Welcome to Linux for SPARC!
Hit <TAB> for boot options
Your imagename `-s' and arguments `' have either wrong syntax,
or describe a label which is not present in silo.conf
Type `help' at the boot: prompt if you need it and then try again.
boot: 
4.1.12-32.el6uek.sparc64  linux-uek                

3. Now boot the linux kernel with selinux=0

boot: 4.1.12-32.el6uek.sparc64 selinux=0
Allocated 64 Megs of memory at 0x40000000 for kernel
Loaded kernel version 4.1.12
Loading initial ramdisk (25972306 bytes at 0


Jun 23

Solaris Find Process Id tied to IP Address

Recently I needed to find out who was connecting to an Oracle database, and at the same time I wanted to see the load a specific connection added to the CPU. In short, I needed to tie an IP address and port to a Unix PID.

I wrote this quick-and-dirty Python script.

#!/usr/bin/python
import subprocess

## No doubt you would want to exclude some non local or expected IP addresses
excludeIPs="10.2.16.86|10.2.16.62|10.2.16.83|\*.\*"

p = subprocess.Popen("/usr/bin/netstat -an | grep 1521 | awk '{print $2}' | egrep -v '" + excludeIPs + "'", stdout=subprocess.PIPE, shell=True)
nonlocals = p.communicate()[0].splitlines()

if nonlocals:
  p = subprocess.Popen("pfiles `ls /proc` 2>/dev/null", stdout=subprocess.PIPE, shell=True)
  pfiles = p.communicate()[0]

  for line in nonlocals:
    line=line.strip()
    (IP,port) = line.rsplit('.',1)
    print ("Going to find PID for connection with IP %s and port %s" % (IP,port) )

    for line in pfiles.splitlines():
      if line[:1].strip() != '':
        pid = line
      if "port: " + port in line:
        print pid

I plan to enhance this script a little bit but for now it did exactly what I needed.
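Since the original goal was also to see the CPU load a connection adds, the PID found above can be handed to ps. A hedged sketch (on Solaris, prstat would be the usual interactive tool; ps -o pcpu= works on both Solaris and Linux):

```python
import os
import subprocess

def cpu_percent(pid):
    # Ask ps for the CPU percentage of one PID; None if the process is gone.
    p = subprocess.Popen(['ps', '-o', 'pcpu=', '-p', str(pid)],
                         stdout=subprocess.PIPE)
    out = p.communicate()[0].decode().strip()
    return float(out) if out else None

# Example: look up our own process.
print(cpu_percent(os.getpid()))
```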


Jun 21

SSH Connection Manager

I previously wrote a quick post on using a connection manager in Linux. Link here:

Linux tabbed SSH connection manager

For the most part I have used something called Gnome Connection Manager. However, it is poorly maintained and had a few small annoyances.

I revisited a utility called PAC Manager (link here https://sourceforge.net/projects/pacmanager/).

So far it does pretty much everything I need as far as maintaining server names and SSH login details. It has tabbed windows, group organization, and an amazing number of customization features. It also integrates nicely with KeePass for maintaining passwords.

It would be better if the main distros included this tool, but it does at least have .deb and .rpm packages.

I also gave a current version of Remmina another try, as it seems the best maintained of the bunch, but it still gave me unexpected behavior, like an SSH window just disappearing.
