Monday 15 December 2014

Java classes in heap histo

Below is a small table to decode class names like [C, [B and [Lclassname; that show up in a heap histogram:


Element Type        Encoding
boolean             Z
byte                B
char                C
class or interface  Lclassname;
double              D
float               F
int                 I
long                J
short               S 


This table, found on Stack Overflow, came in handy :)
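For example, a heap histogram can be taken from a running JVM with jmap (the pid is whatever your Java process id is):

jmap -histo <pid> | head -20

In the class name column, [C is a char[] array, [B is a byte[] array and [Ljava.lang.String; is a String[] array.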

Monday 10 November 2014

Some useful scriptlets

This one is a one-liner to collect memory info from a set of servers.
It uses the command seq -w, which pads the numbers with leading zeros to follow the host naming convention;
this is simpler than the for ((i=1;i<=16;i++)) form, which does not pad the numbers.

for i in `seq -w 01 16`;do echo "sheif_prd${i} Mem=  `ssh -q sherif@sheif_prd$i free|grep Mem|tr -s " " |cut -d " " -f2`" ; done

This pattern is useful for collecting simple info like memory, CPU and similar things.


Another one is for fixing SQLFire backups taken with the sqlf -file option.
The backup is done using this script:

RunSQLFire_sqlscript.sh:

#!/bin/bash
# Run the given SQL script against the local SQLFire member.
source ${HOME}/.bash_profile

if [ "x${1}" = "x" ]
then
    echo "missing SQL script"
    exit 1
fi


${HOME}/sqlfire/current/bin/sqlf run -client-bind-address=${HOSTNAME} -client-port=1527 -user=prd -password=prd -file=${1}


The backup is then produced in the following format:

PRMPT-$ head table_data
sqlf> SELECT * FROM "PRD"."TABLE";
Part_number                   |Serial                        |Type                      |ALLOWED            
----------------------------------------------------------------------------------------------------
4028227                       |6542883                       |E                             |1           
11256276                      |14533312                      |P                             |0     
3798130                       |18648804                      |C                             |0          
14000630                      |574577                        |E                             |1         
2059762                       |18986693                      |E                             |1  
       



The AWK and sed one-liners below convert the above into SQL inserts:

cat table_data |tr -s " " |awk -F"|" '{print("\x27"  $1 "\x27" "," "\x27" $2 "\x27" "," "\x27" $3 "\x27" ","  $4  "," "{ts  \x27" $5 "\x27 }");}' |sed -e "s/ '/'/g" >table_data_fixed

Note the use of the ASCII escape (\x27) for the single quote, as it was difficult to print it directly from within AWK.

cat  table_data_fixed |sed -e's/^/INSERT INTO TABLE (Part_number,Serial,Type,ALLOWED) VALUES (/g' |sed -e 's/$/);/g' >table_data_inserts.sql


 


Friday 24 October 2014

Lynx usage in shell scripts

Lynx proved to be a very useful tool for avoiding complex shell scripting around HTML-based reports and logs.
Check the command below:

/apps/index/DRE_analysis_scripts/lynx -cfg=/apps/index/DRE_analysis_scripts/lynx.cfg -dump "http://dreprd2:9000/?action=getrequestlog&openlinks=on&refresh=0&tail=1000" >${OUTDIR}/DRE20_${TSTAMP}.txt 

The above shell snippet was useful for analyzing Autonomy IDOL DRE server web-based search transaction logs.
Those logs are not dumped to file in the same format, and I liked the web-based format more than the standard content component logging.

Lynx helped me use this elegant log format from the command line and apply standard shell filters like grep, cut, etc. to it.
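For example, a quick count of how many query actions appear in a dump could be as simple as this (assuming the dump includes the IDOL action names):

grep -c "action=query" ${OUTDIR}/DRE20_${TSTAMP}.txt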

A very handy tool indeed :)


Thursday 28 August 2014

Converting variable-based DS info into JNDI

This has been quite an issue for me for a while.
There are 3 ways to put the data source info into a Tomcat:
1- Put it in context.xml directly, which saves time and effort.
2- Put it into server.xml as global naming resources and then add a resource link entry into context.xml.
3- Put it as variables into catalina.properties and then reference them in server.xml and context.xml, which is the longest way!

Below is a script that changes the variable notation into the server.xml notation.
It also prints out the resource link for context.xml.

Below is what the DS info taken from catalina.properties looks like:

[root@Beren DS]# head ds_info
aaa.conf.datasource1.name=configDataSource

aaa.conf.datasource1.url=jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = orcl_host)(PORT = 1527))(CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = orcl)))
aaa.conf.datasource1.username=sherif
aaa.conf.datasource1.password=mydspassword

aaa.conf.datasource2.name=tstDataSource
aaa.conf.datasource2.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl_host)(PORT=1700))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=test)))
aaa.conf.datasource2.username=test
aaa.conf.datasource2.password=test
[root@Beren DS]#


Below is the script:

[root@Beren DS]# cat fix.sh                             
#set -x                                                 
OLD_IFS=${IFS}                                          
IFS='                                                   
'                                                       
for line in `cat ds_info`                               
do                                                      
        VAR=`echo ${line}|cut -d"=" -f1|cut -d"." -f4`  
        VAL="`echo ${line}|cut -d"=" -f2-`"             

        if [ "${VAR}" = "name" ]
        then                   
                echo '<Resource name="jdbc/'"${VAL}"'"'
                echo 'auth="Container" type="javax.sql.DataSource"'
        fi                                                        
        if [ "${VAR}" = "url" ]                                   
        then
                echo 'url="'"${VAL}"'"'
                echo 'driverClassName="oracle.jdbc.driver.OracleDriver"'
                echo 'maxActive="80"'
                echo 'maxIdle="20"'
                echo 'maxWait="10000"'
        fi
        if [ "${VAR}" = "username" ]
        then
                echo 'username="'"${VAL}"'"'
        fi
        if [ "${VAR}" = "password" ]
        then
                echo 'password="'"${VAL}"'"'
                echo 'removeAbandoned="true"'
                echo 'removeAbandonedTimeout="60"'
                echo 'logAbandoned="true"'
                echo 'validationQuery="SELECT SYSDATE FROM DUAL"/>'
                echo " "

        fi
        #read
done

echo ""
echo "Context.xml"
echo ""

for line in `cat ds_info|grep "aaa.conf.datasource.*\.name"`
do
        VAR=`echo ${line}|cut -d"=" -f1|cut -d"." -f4`
        VAL="`echo ${line}|cut -d"=" -f2-`"

        if [ "${VAR}" = "name" ]
        then
                echo ' <ResourceLink name="jdbc/'${VAL}'" global="jdbc/'${VAL}'" type="javax.sql.DataSource"/>'
        fi
        #read
done
[root@Beren DS]#
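Run against the ds_info above, the script emits server.xml-ready output roughly like this for the first datasource (taken straight from the echo statements):

<Resource name="jdbc/configDataSource"
auth="Container" type="javax.sql.DataSource"
url="jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = orcl_host)(PORT = 1527))(CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = orcl)))"
driverClassName="oracle.jdbc.driver.OracleDriver"
maxActive="80"
maxIdle="20"
maxWait="10000"
username="sherif"
password="mydspassword"
removeAbandoned="true"
removeAbandonedTimeout="60"
logAbandoned="true"
validationQuery="SELECT SYSDATE FROM DUAL"/>

and, for context.xml, a resource link per datasource:

 <ResourceLink name="jdbc/configDataSource" global="jdbc/configDataSource" type="javax.sql.DataSource"/>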

Tuesday 5 August 2014

Changing JVM options using a shell script

The script below is used to customize a tc Server setenv.sh for multiple TC instances.
It reads a config file, applies the settings to setenv.sh and pushes the result to the instance location.


[sherif@khofo05 ~ ]$ cat fixSetenv.sh
OLD_IFS=${IFS}
IFS='
'
for INST in `cat tc_nodes |grep -v "^#"`
do 
    USER=`echo ${INST} |cut -d":" -f1`
    TC_INST=`echo ${INST} |cut -d":" -f2|cut -d"@" -f1`
    HOST=`echo ${INST} |cut -d":" -f2| cut -d "_" -f1`
    HTTP_PORT=`echo ${INST} |cut -d "_" -f2|cut -d"@" -f1`
    HOME_DIR=`echo ${INST} |cut -d"@" -f2`
    JVM_PAR=`echo ${INST} |cut -d"@" -f3`
    NEW_JAVA_HOME="JAVA_HOME=${HOME_DIR}/springsource/jdk1.7.0_21"

     OLD_JVM_PAR=`grep "JVM_OPTS=" setenv.sh|tr -d '\n'`
    OLD_JAVA_HOME=`grep "JAVA_HOME=" setenv.sh|tr -d '\n'`

IFS=${OLD_IFS}
#echo "---------------"
echo "working on ${HOST}"
echo "sed -- 's%${OLD_JVM_PAR}%${JVM_PAR}%g' setenv.sh |" >sed_cmd
echo "sed -- 's%${OLD_JAVA_HOME}%${NEW_JAVA_HOME}%g'" >>sed_cmd
echo "creating custome setenv "
bash ./sed_cmd >setenv.sh_${TC_INST}
scp setenv.sh_${TC_INST}  ${USER}@${HOST}:${HOME_DIR}/springsource/vfabric-tc-server-standard-2.9.2.RELEASE/${TC_INST}/bin/setenv.sh
echo "sent to ${TC_INST}
Done"
done
IFS=${OLD_IFS}
[sherif@khofo05 ~ ]$

The instances config file is named tc_nodes.
This file looks like this:

username1:hostname01_8080@/home/dir/of/username1@JVM_OPTS="-server -Xms4096m -Xmx4096m -Xss192k -XX:MaxPermSize=512m -XX:PermSize=256m -XX:ParallelGCThreads=2 -XX:NewSize=256m -XX:MaxNewSize=256m -Xnoclassgc -XX:SurvivorRatio=14 -Xloggc:${HOME}/logs/gc.log"

This is parsed for the parameters used by the script.
I found it difficult to run the sed commands directly from within the same script, so to work around this fixSetenv.sh generates another script and runs it.
This is the sed_cmd script, which looks like this:

[sherif@khofo05 ~]$ cat sed_cmd
sed -- 's%JVM_OPTS="-Xmx512M -Xss256K"%JVM_OPTS="-server -Xms4096m -Xmx4096m -Xss192k -XX:MaxPermSize=512m -XX:PermSize=256m -XX:ParallelGCThreads=2 -XX:NewSize=256m -XX:MaxNewSize=256m -Xnoclassgc -XX:SurvivorRatio=14 -Xloggc:${HOME}/logs/gc.log"%g' setenv.sh |
sed -- 's%JAVA_HOME="/homedire1/springsource/jdk1.7.0_21"%JAVA_HOME=/homedire1/springsource/jdk1.7.0_21%g'
[sherif@khofo05 ~]$


The above made it possible to do custom changes based on the config file.
To get around parsing the whole JVM_OPTS value with its embedded spaces, I had to change the internal field separator (IFS).
This made it possible to accommodate the spaces; a minimal illustration of the trick follows.
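A standalone sketch of the IFS trick (nothing here is specific to the script above):

#!/bin/bash
# With the default IFS, each word becomes a loop item; with IFS set to a
# newline only, each whole line becomes one item, spaces included.
OLD_IFS=${IFS}
IFS='
'
for line in `cat tc_nodes | grep -v "^#"`
do
    echo "one record: ${line}"
done
IFS=${OLD_IFS}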


Automation is fun :)

Monday 26 May 2014

SSL Server cert import for Java apps

The below documents the procedure for importing server certs into the JVM trust store (cacerts).
This is to avoid exceptions like the one below:


Exception Message: sun.security.validator.ValidatorException: PKIX path building failed

The above specifically shows an issue with the certification path, which should be present in the JVM cacerts file.
For info about the JVM certificate files please check this URL:
http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#X509TrustManager

Steps :

1- Acquire the needed server certificate, if possible by exporting it from Firefox while visiting the server URL.
Make sure you export the whole certificate path; this may mean exporting the root cert, the intermediate cert and the leaf server cert.
It would be even better to obtain the certs from the server support team if possible (an openssl alternative is sketched after the steps).

2-Import Certs:
 /opt/jdk1.6.0_26/bin/keytool -keystore cacerts -storepass changeit -import -trustcacerts -v -alias RSAV2 -file /tmp/RSAV2.cer

The above imports an intermediate cert signed by the RSA certificate authority, under the user-defined alias RSAV2.

You might need to import all the certs defined in the server cert path.

3- List Certs:
/opt/jdk1.6.0_26/bin/keytool -keystore cacerts -storepass changeit -list
You should be able to see the certs that you have imported in the list along with the date the cert was imported.
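As an alternative to exporting from the browser (step 1), the server's certificate chain can also be fetched from the command line with openssl; the hostname and port below are placeholders:

openssl s_client -connect myserver.example.com:443 -showcerts </dev/null

Each "BEGIN CERTIFICATE"/"END CERTIFICATE" block in the output can be saved into its own .cer file and imported as in step 2.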

Also, below is a very useful link for SSL cert and key debugging, in case you are setting up the server and creating a certificate.

http://www.sslshopper.com/article-most-common-openssl-commands.html




Tuesday 20 May 2014

Clustering issues on SQLFire and RabbitMQ

I have been seeing many clustering issues in the last months on both RabbitMQ and SQLFire; both are Pivotal products that are open source.
It seems both products have issues with network latency that can cause split-brain situations in the cluster and could lead to potential data loss.

In order to tell when such issues happen, I have used the following approaches:

1- Integrate Hyperic monitoring with the SQLFire & RabbitMQ components.
2- For SQLFire, we can make use of the following system query:

 cat get_members.sql
select ID,KIND from sys.members order by KIND;

Running this query from the command line:
${HOME}/sf/sqlf run -client-bind-address=${HOSTNAME} -client-port=1527 -user=myapp -password=myapp -file=get_members.sql

Parsing this output gives the current number of cluster members; if any split happens, the output of this query will be different.
A minimal check around this is sketched below.
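The sketch assumes data nodes show up with a KIND containing "datastore" and that the expected count (6 here) is known up front:

EXPECTED=6
MEMBERS=`${HOME}/sf/sqlf run -client-bind-address=${HOSTNAME} -client-port=1527 -user=myapp -password=myapp -file=get_members.sql | grep -c "datastore"`
if [ "${MEMBERS}" -lt "${EXPECTED}" ]
then
        echo "SQLFire cluster has ${MEMBERS} datastore members, expected ${EXPECTED}" | /sherif/mail_alert2.sh "SQLFire cluster alert"
fi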

3- For RabbitMQ, I used a more low-level way to do the monitoring.
RabbitMQ nodes are always talking to each other, so the warning is based on the number of connections each node has towards the sister nodes in the cluster:

    CON_COUNT=`ssh -q rmquser@rmqnode01 netstat -p 2>/dev/null|grep -i est |tr -s " "|cut -d" " -f5,7|grep rmqnode|cut -d"." -f1,4 --output-delimiter=" "|cut -d" " -f1,3 |sort |uniq -c|wc -l`

This gets the number of distinct connections from rmqnode01 to the other cluster members.
The count should be the number of cluster members - 1.

If the number is less, then we have a split-brain issue.
The RabbitMQ management console also tells you at once that there is an issue.

A future improvement is to capture the warning from the RabbitMQ management console directly.




Sunday 20 April 2014

Splitting a catalina log on Thread dumps

This is a useful script for extracting and splitting thread dumps out of catalina.out or any other output file.
A Java thread dump stack trace is generated by sending a QUIT signal to the JVM using kill -3.
This prints the thread dump on the console log.

The script below collects the dumps into individual files, which makes investigation easier.
It was tested against Java 1.6 output.

cat split_THD.sh
#!/bin/bash
# Split the thread dumps found in a console log into individual thd_<n> files.
#set -x

FILE=$1

# line numbers where each dump starts and ends (Java 1.6 HotSpot markers)
STARTLINE=(`grep -n "Full thread dump Java HotSpot(TM)" ${FILE} |cut -d":" -f1`)
ENDLINE=(`grep -n "JNI global references" ${FILE} |cut -d":" -f1`)

for ((i=0;i<${#STARTLINE[@]};i++))
do
        tail -n +${STARTLINE[$i]} ${FILE} | head -n $((${ENDLINE[$i]} - ${STARTLINE[$i]} + 10)) >thd_${i}
done

Wednesday 16 April 2014

Using Piping in a script

Just to document it, since I forget these things easily.
Below is an example email script based on sendmail.
The cat command reads anything coming in from a pipe into the MESSAGE variable.

mail.sh:

#!/bin/bash
# send an email alert; the body comes from stdin (the pipe) and the subject from $1
MESSAGE=`cat -`

/usr/sbin/sendmail.sendmail -i -t << ENDL
From: "Alert" <admin@khofo02.beren.tst>
To: <sherif.abdelfattah@beren.tst>
Subject:${1}

${MESSAGE}
ENDL
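Usage is then just a matter of piping text into it, for example:

echo "Disk space running low on khofo02" | ./mail.sh "Disk Alert"

The piped text becomes the mail body and the first argument becomes the subject.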

Monitoring RabbitMQ message Queues with NodeJS

RabbitMQ exposes a JSON-based API on its management web interface.
All the useful output is provided as JSON documents that can be parsed by any application for monitoring purposes.
The easiest approach for command-line monitoring of RabbitMQ is to use NodeJS to decode the JSON output.
Below is a quick sample:

q.js:

var fs = require('fs');
var file = process.argv[2];

fs.readFile(file, 'utf8', function (err, data)
{
if (err)
{
console.log('Error: ' + err);
return;
}

data = JSON.parse(data);

// print "queue_name messages_ready" for the first four queues returned by the API
console.log(data[0].name,data[0].messages_ready);
console.log(data[1].name,data[1].messages_ready);
console.log(data[2].name,data[2].messages_ready);
console.log(data[3].name,data[3].messages_ready);

});


rmqmon.sh:

function tableit()
{
    echo "Please check Prod RabbitMQ. Queue counts are none Zero."
    echo " "
    printf "|%-25s|%-5s| \r\n" "-------------------------" "-----"
        printf "|%-25s|%-5s| \r\n" "QueueName" "Count"
        printf "|%-25s|%-5s| \r\n" "-------------------------" "-----"
        for i in `cat ${1}|tr " " ":"`
        do
                QUE=`echo ${i} |cut -d":" -f1`
                COUNT=`echo ${i} |cut -d":" -f2`
                #echo ${QUE}
                printf "|%-25s|%5d| \r\n" ${QUE} ${COUNT}
                #read
        done
        printf "|%-25s|%-5s| \r\n" "-------------------------" "-----"


}




OUT_PATH=/sherif/rmqmon
NODE_PATH=/sherif/node/bin



curl  -u qmon:qmon  http://rmqnode01:15672/api/queues/ >${OUT_PATH}/q.json 2>/dev/null
${NODE_PATH}/node ${NODE_PATH}/q.js ${OUT_PATH}/q.json >${OUT_PATH}/q.status
#cat ${OUT_PATH}/q.status
HAS_NONE_ZERO=`grep -v " 0" ${OUT_PATH}/q.status`

if [ ! -z "${HAS_NONE_ZERO}" ]
#if [ -z "${HAS_NONE_ZERO}" ]
then
    tableit ${OUT_PATH}/q.status |/sherif/mail_alert2.sh "RabbitMQ Alert"
fi

The above will produce output only if any of the queues has messages in it.
More sophisticated NodeJS or shell scripts can be built around these.
The tableit function above is there to give the alert email a nicer look :)
It uses the powerful C-like formatting of the printf shell builtin.
Also, mail_alert2.sh takes its input from a shell pipe; details on that are in another post.
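For reference, q.status ends up holding one queue per line in the form "queue_name messages_ready", roughly like this (queue names are made up):

orders_q 0
billing_q 0
audit_q 3
notify_q 0

Any line whose count is not 0 makes HAS_NONE_ZERO non-empty and triggers the alert email.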


Wednesday 26 March 2014

Exploring DotCMS Cont.

Looks like a very robust CMS that handles all kinds of content.
It also integrates several canvases for editing content, either by typing HTML or using a WYSIWYG editor.
The tool is very polished, and the test site that comes with it is a very cool, enterprise-grade static site.

Though I didn't test how it would handle lots of hits for content queries, and I didn't do a comprehensive site build with it.

Will continue to post more about it . . .

Thursday 20 March 2014

Exploring DotCMS

The dotCMS tool is proving to be a great free content management system.
It has all the bells and whistles out of the box.
I just finished its installation and have now started playing with it.
It appears rather solid and very well organized.

Will post more as it goes.
http://dotcms.com/

Sunday 2 March 2014

Generic multi instance startup script

Currently working on creating a generic multi-instance startup script for tc Server (Tomcat).
Features should include:
1- startall
2- stopall
3- start a set of TCs serving the same part of the business
4- stop a set of TCs serving the same part of the business
5- start 1 instance
6- stop 1 instance
7- print status of all TC instances

Once done I will post the script here; the current version is below.
The script uses a config file; a sample is sketched after the script :)

#set -x

if [ -z ${1} ]
then
        echo "missing property file"
        exit 1
fi
####
## need to initalize the APPS Associative array here.
typeset -A APPS
source ${1}

###
# Display the apps :)
#echo ${!APPS[@]}



####
## Setting action menu
actions="start stop status stopAll startAllDefault start_selective stop_selective"
if [ "$2" != "" ]; then
  action=$2
else
  echo "Please select an action:"
  select action in ${actions}; do
    case ${REPLY} in
      1|2|3|4|5|6|7) echo "Selection: ${action}"; break;;
      *) echo "Invalid Selection. Exiting."; exit 1;;
    esac
  done
fi
 
 
case ${action} in
        start|stop|status)
                for instance in `echo ${APP_SERVER_RUN_DEFAULT}|tr "," " "`
                do
                        HOST=`echo ${instance} |cut -d"_" -f1`
                        PORT=`echo ${instance} |cut -d"_" -f2`
                        echo ""
                        echo "Working on ${HOST}"
                        read -n1 -p "Do you want to perform the action: ${action} on ${instance} ?" reply
                        if [ "${reply}" = "y" ]
                        then
                                echo ""
                                ssh -q ${TARGET_USER_NAME}@${HOST} ${TARGET_TCS_INST_HOME}/${instance}/bin/tcruntime-ctl.sh ${action}
                        else
                                echo ""
                                echo "no action on ${instance}"
                        fi
                done
                echo "All Done"
        ;;

        start_selective)


                select APPkey in ${!APPS[@]}
                do
                        echo "${APPkey}: ${APPS[${APPkey}]}";
                        action=start
                        for instance in `echo ${APP_SERVER_RUN_DEFAULT}|tr "," " "`
                        do
                                HOST=`echo ${instance} |cut -d"_" -f1`
                                PORT=`echo ${instance} |cut -d"_" -f2`
                                FOUND=`echo ${APPS[${APPkey}]} |grep ${PORT} |wc -l`
                                #echo $FOUND
                                if [ ${FOUND} = "0" ]
                                then
                                        echo "skipping"
                                else
                                        echo ""
                                        echo "Working on ${HOST}"
                                        read -n1 -p "Do you want to perform the action: ${action} on ${instance} ?" reply
                                        if [ "${reply}" = "y" ]
                                        then
                                                echo ""
                                                ssh -q ${TARGET_USER_NAME}@${HOST} ${TARGET_TCS_INST_HOME}/${instance}/bin/tcruntime-ctl.sh ${action}
                                        else
                                                echo ""
                                                echo "no action on ${instance}"
                                        fi
                                fi
                        done

                        #break the select loop
                        break
                done
        ;;

        stop_selective)

                select APPkey in ${!APPS[@]}
                do
                        echo "${APPkey}: ${APPS[${APPkey}]}";
                        action=stop
                        for instance in `echo ${APP_SERVER_RUN_DEFAULT}|tr "," " "`
                        do
                                HOST=`echo ${instance} |cut -d"_" -f1`
                                PORT=`echo ${instance} |cut -d"_" -f2`
                                FOUND=`echo ${APPS[${APPkey}]} |grep ${PORT} |wc -l`
                                #echo $FOUND
                                if [ ${FOUND} = "0" ]
                                then
                                        echo "skipping"
                                else
                                        echo ""
                                        echo "Working on ${HOST}"
                                        read -n1 -p "Do you want to perform the action: ${action} on ${instance} ?" reply
                                        if [ "${reply}" = "y" ]
                                        then
echo ""
                                                ssh -q ${TARGET_USER_NAME}@${HOST} ${TARGET_TCS_INST_HOME}/${instance}/bin/tcruntime-ctl.sh ${action}
                                                ssh -q ${TARGET_USER_NAME}@${HOST} rm -rf ${TARGET_TCS_INST_HOME}/${instance}/work/*
                                                ssh -q ${TARGET_USER_NAME}@${HOST} rm -rf ${TARGET_TCS_INST_HOME}/${instance}/temp/*
                                                ssh -q ${TARGET_USER_NAME}@${HOST} "find ${TARGET_TCS_INST_HOME}/${instance}/webapps/* -type d|grep -v ROOT|xargs rm -rf"
                                        else
                                                echo ""
                                                echo "no action on ${instance}"
                                        fi
                                fi
                        done

                        #break the select loop
                        break
                done

        ;;


esac
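
A property file for this script would look roughly like the following sketch (hostnames, ports and paths are made up for illustration):

TARGET_USER_NAME=tcadmin
TARGET_TCS_INST_HOME=/apps/springsource/vfabric-tc-server-standard-2.9.2.RELEASE
# comma separated list of host_port instances acted on by default
APP_SERVER_RUN_DEFAULT="khofo05_8080,khofo05_8180,khofo06_8080,khofo06_8180"
# APPS maps a business area to the ports of the instances serving it
APPS[web]="8080"
APPS[services]="8180"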
 

Tuesday 18 February 2014

Exploring SVN / GIT and Jenkins integration

I was working on creating a test SVN repository for testing purposes.
I have worked extensively with CVS before, during my time at HP Enterprise Services, but have done limited work with SVN.

SVN is a much better CM tool; it offers a set of decent web interfaces and integrates well with Apache.
Jenkins also supports Git, which would be my next logical step.
Jenkins can do a lot when integrated with a CM tool, from doing old-style deployment to distributing code and scripts to different parts of a given system.
Jenkins has the ability to manage remote nodes and can even work with standard password authentication; it handles all the overhead beautifully :)

Setting up Git is similar to SVN.
I am now following the link below to set up a Git repository for testing:
http://git-scm.com/book/en/Git-on-the-Server-Setting-Up-the-Server

Integrating Git with Jenkins should be straightforward, similar to SVN.

 

Monday 3 February 2014

TC server JMX trap !

From: http://static.springsource.com/projects/tc-server/2.0/admin/htmlsingle/admin.html#manual



Warning: The value of the bind attribute of JmxSocketListener overrides the value of the java.rmi.server.hostname Java system property. This directly affects how names are bound in the RMI registries; by default, the names will be bound to localhost (127.0.0.1.) This in turn means that RMI clients running on a different host from the one hosting the tc Runtime instance will be unable to access the RMI objects because, from their perspective, the host name is incorrect. This is because, the host should be the name or IP address of the tc Runtime computer rather than localhost. When the tc Runtime instance starts, if it finds that the value of the bind attribute is different from or incompatible with the java.rmi.server.hostname Java system property, the instance will log a warning but will startup anyway and override the system property as described. If this causes problems in your particular environment, then you should change the value of the bind attribute to specify the actual hostname on which the tc Runtime runs rather than the default 127.0.0.1 value.


The above caused me to work for 4 hours going around in circles !!!
Tomcat just rocks !!

Thursday 30 January 2014

JMX on TC Server (Tomcat)

Currently working on a PoC to enable remote JMX on TC server.

The config for this goes into 3 places:
1- setenv.sh:
This needs the JVM parameters that allow remote JMX with authentication:

JMX_OPTS="-Dcom.sun.management.jmxremote
  -Dcom.sun.management.jmxremote.port=16001
  -Dcom.sun.management.jmxremote.ssl=false
  -Dcom.sun.management.jmxremote.authenticate=true
  -Dcom.sun.management.jmxremote.password.file=${CATALINA_BASE}/conf/jmxremote.password
  -Dcom.sun.management.jmxremote.access.file=${CATALINA_BASE}/conf/jmxremote.access"

2- jmxremote.access:
[root@khofo05 conf]# cat jmxremote.access
#admin readonly
admin readwrite
[root@khofo05 conf]#

3- jmxremote.password
[root@khofo05 conf]# cat jmxremote.password
# The "admin" role has password "springsource".
admin springsource
[root@khofo05 conf]#

The above is sufficient to have an authenticated remote JMX up and running on any tomcat.
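For a quick test, jconsole (shipped with the JDK) can connect to the port configured above; the hostname below is just the machine running the tc instance:

jconsole khofo05:16001

When prompted, supply the admin user and password from jmxremote.password.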

I wanted to explore using SSL for more protection, since JMX allows altering server parameters if the readwrite role is granted; but since I need JMX for monitoring purposes only, read-only rules would be enough.

After some reading and consulting my colleagues, I decided to abandon JMX and go with jstatd instead.

AWK string manipulations

The script below scans a whole text file line by line for a certain pattern.
Once this pattern is found, the script does further pattern matching to make changes in that same line.

[root@khofo05 sherif]$  awk -v FILENAME=./server.xml_20131127_1 -f ./change_ds.awk >server.xml
[root@khofo05 sherif]$ cat change_ds.awk
BEGIN{
        i=0;
        line="";
        while (getline line < FILENAME)
        {
                gotit=match(line,"DB1");
                if (gotit == 0)
                {
                        print(line);
                }
                else if (gotit > 0 )
                {
                        #print ("match------",line);
                        newsvr=gsub("db111oracle.example.com","db123oracle.example.com",line);
                        newsid=gsub("DB1","DB3",line);
                        #print (newsvr,newsid,line);
                        print (line);
                        #break;
                }
        }
}
[root@khofo05 sherif]$



The above is an example of a mass change in a server.xml file, switching a datasource definition from one SID/host to another.
This is useful when multiple datasources are involved.

Wednesday 29 January 2014

Extracting a part of the JVM startup command

This one is used to extract the catalina base directory from a running TC server process.
It is useful for checking which applications are deployed on those TC instances.

 
[root@khofo05 ~]# ps -ef |grep java|grep 18001
root     20342     1  0 Jan20 ?        00:01:05 /apps/admngop1/springsource/jdk1.6/bin/java -Djava.util.logging.config.file=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/khofo05_18001/conf/logging.properties -Xmx512M -Xss192K -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=16001 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.password.file=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/khofo05_18001/conf/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/khofo05_18001/conf/jmxremote.access -Djava.util.logging.manager=com.springsource.tcserver.serviceability.logging.TcServerLogManager -Djava.endorsed.dirs=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/tomcat-6.0.35.A.RELEASE/endorsed -classpath /apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/tomcat-6.0.35.A.RELEASE/bin/bootstrap.jar -Dcatalina.base=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/khofo05_18001 -Dcatalina.home=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/tomcat-6.0.35.A.RELEASE -Djava.io.tmpdir=/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/khofo05_18001/temp org.apache.catalina.startup.Bootstrap start



[root@khofo05 ~]# JAVA_CMD=`ps -ef |grep java|grep 18001`
[root@khofo05 ~]# CATALINA_BASE=`echo ${JAVA_CMD##*Dcatalina.base}|cut -d "=" -f2|cut -d" " -f1`

[root@khofo05 ~]# echo $CATALINA_BASE
/apps/admngop1/springsource/vfabric-tc-server-standard-2.6.3.RELEASE/khofo05_18001
[root@khofo05 ~]#


This then makes it possible to check the webapps folder, the conf folder and so on.
I thought I would document the bash string-manipulation construct (${var##pattern}); I had never used it before :) A minimal illustration is below.
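A standalone example of that parameter expansion, not specific to the command above:

STR="aaa.conf.datasource1.url"
echo ${STR##*.}     # strips the longest matching prefix -> prints: url
echo ${STR%%.*}     # strips the longest matching suffix -> prints: aaa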



Tuesday 28 January 2014

Running a Shell script from an HTML page using NodeJs as backend

The code below is a modified version of the PHP-oriented one on this link: http://www.scriptol.com/javascript/nodejs-php.php

This functionality could be useful for many Ops automation tasks; the same could be done with other tools like Puppet or even Jenkins, but NodeJS is very lightweight and quite fast.
I plan to integrate this into my HTML status pages in the future.

The code:

var sys = require("sys"), 
http = require("http"),   
path = require("path"),
url = require("url"),
filesys = require("fs"),
runner = require("child_process");

function sendError(errCode, errString, response)
{
  response.writeHead(errCode, {"Content-Type": "text/plain;charset=utf-8"});
  response.write(errString + "\n");
  response.end();
  return false;
}

function sendData(err, stdout, stderr, response)
{
  if (err) return sendError(500, stderr, response);
  response.writeHead(200,{"Content-Type": "text/plain;charset=utf-8"});
  response.write(stdout);
  response.end();
}

function runScript(exists, file, response)
{
  if(!exists) return sendError(404, 'File not found', response);
  runner.exec(file ,
   function(err, stdout, stderr) { sendData(err, stdout, stderr, response); });
}

function myshell(request, response)
{
  var urlpath = url.parse(request.url).pathname;
  //var param = url.parse(request.url).query;
  var localpath = path.join("/", urlpath);
  console.log(localpath);
  filesys.exists(localpath, function(result) { runScript(result, localpath, response)});
}

var server = http.createServer(myshell);
server.listen(7071);
console.log("Shell ready to run script given on port 7071.");



HTML:

<html>

<head>
<meta http-equiv="refresh" content="600" />
</head>
               <body>
                <br>
                <a href=http://localhost:7071/bin/ls>Run ls Locally</a>
                <br>
                <br>
        </body>
</html>
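Assuming the code above is saved as myshell.js (the file name is arbitrary) and node is on the PATH, it can be started and tested like this:

node myshell.js
curl http://localhost:7071/bin/ls

Since the URL path maps directly to a local executable path, this should only be exposed on a trusted network.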

Jstatd configuration for JVM monitoring

Currently I am busy setting up the configuration for running jstatd on a set of VMs running SpringSource tc Servers.
The setup is simple: use a script to start jstatd locally, and invoke that script from a master VM to start it on all nodes.
jstatd is useful for memory monitoring during performance testing of Java applications and can provide realtime info about JVM execution.

In our case jstatd is much better than using the traditional JMX parameters on the JVM; JMX would need to be secured, with strong authentication and authorization in place, to protect the application.
This can be done, but the amount of config changes wouldn't be easy without a DevOps tool like Puppet to handle it.

jstatd is zero config from the application point of view: just put the policy file in place, start the daemon and you're done!!

To start jstatd on a server you need a policy file, jstatd.all.policy, that contains:
grant codebase "file:${java.home}/../lib/tools.jar" { permission java.security.AllPermission; };

and just run as:

${JAVA_HOME}/bin/jstatd -J-Djava.security.policy=jstatd.all.policy

jstatd uses port 1099 (the RMI registry) to expose JVM info like memory and thread usage.
It also exposes all the JVMs running on the same server through one jstatd process.

We can use the jvisualvm tool provided by the JDK to connect to the remote jstatd and browse the needed info.
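A per-node start script could then be as small as the following sketch (the log and policy file locations are illustrative):

#!/bin/bash
# start jstatd in the background using the permissive policy file
nohup ${JAVA_HOME}/bin/jstatd -J-Djava.security.policy=${HOME}/jstatd.all.policy >${HOME}/jstatd.log 2>&1 &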

Thursday 9 January 2014

Web based status Page

I just finished creating a set of shell scripts that generate dynamic web-based status pages for the Support Zone application.
The requirement is to have a full map of which web applications are deployed and which services are active.
The page is updated every 10 minutes by a shell script running from crontab; the script regenerates the HTML pages dynamically by checking the services against a set of config files that hold what needs to be checked.

Below is how it looks:

The tables in the screenshot are iframes that show the HTML generated by the shell scripts.
If a port number is shown in red, the service is not running on that port; either the service is down or the config needs to be updated.
There is also email alerting functionality in case something seems to be wrong.

The checks are done by curl pinging the web interfaces of those services; if the response doesn't come within a defined amount of time (20 sec), the port shows as red. A sketch of such a check is below.
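A single check could boil down to something like this (URL and variables are illustrative):

HTTP_CODE=`curl -s -m 20 -o /dev/null -w "%{http_code}" http://${HOST}:${PORT}/`
if [ "${HTTP_CODE}" != "200" ]
then
        # the real scripts mark this port red in the generated html at this point
        echo "${HOST}:${PORT} DOWN"
fi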

These scripts can be used for checking any web/internet-based service that listens on a socket . . .
Currently the vFabric Hyperic monitoring tool doesn't offer a similar summary page, so these custom scripts were created to fill that gap.
Other tools, like HP BAC/SiteScope or CA Introscope, offer similar or superior interfaces; they provide performance information and deeper insight into business transactions and internal JVM operation.



Automation is good  :)

Monday 6 January 2014

SQLFire CLI

We have been doing manual SQLFire deployments ever since it was adopted as a caching solution in the Service Support Zone project.

Part of the deployment is standard war file deployment on TC server, which is easy to automate.
The issue was with the SQL queries that need to be run at various stages before and after the deployment.
SQLFire doesn't offer a standalone client like Oracle; we need to write SQL scripts and pass them to the command-line interface script, sqlf.
The command runs as follows:

${SQLFIRE_HOME}/bin/sqlf run -client-bind-address=${NODE} -client-port=${PORT} -user=${USERNAME} -password=${PASSWORD} -file=${SQL_SCRIPT}

I am exploring how this simple command can be run remotely from a Jenkins job to support fully automated SQLFire deployment.

I managed to use the command for doing table backups :)
But I faced an issue running it from crontab: it needs all the environment variables defined before it runs, so I had to issue the command through bash -l from within cron, roughly as shown below.
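A crontab entry along those lines could look like this (the script name, schedule and log path are made up):

# nightly table backup; bash -l gives sqlf its usual login environment
30 2 * * * bash -l -c '${HOME}/scripts/sqlf_table_backup.sh' >>${HOME}/logs/sqlf_backup.log 2>&1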

Sunday 5 January 2014

Collecting System information from 230+ VMs

Today I was working on collecting system info from the 230+ VMs composing the entire environment of the EMC Service Support Zone project.
This proved to be a bit of a challenge, since many of them didn't have SSH keys installed from the control server and I had to do it manually :)
Still, it will be a lot of help once accomplished, as it will serve all future automation.
Below is a link on how to setup SSH keys:
https://www.digitalocean.com/community/articles/how-to-set-up-ssh-keys--2


A Puppet agent would have abstracted all this if it were installed on the VMs.
I will need to see how this can be accomplished.

This task will also help automate vFabric SQLFire deployment.
Will talk about this later.