Tuesday 26 July 2016

Setting up RabbitMQ and its config files

This post documents how to install RabbitMQ using the generic Linux build that comes as a tar.gz package.

To proceed with the installation, follow the steps below:
1- Download the latest RabbitMQ from Pivotal: https://network.pivotal.io/products/pivotal-rabbitmq. Let's assume we will work with pivotal-rabbitmq-server-3.5.1.

2- Extract the downloaded file to the installation location. The tar.gz package is self-contained except for the Erlang libraries; you need to install Erlang from the EPEL yum / dnf repositories. A quick sketch of this step is below.
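A minimal sketch of step 2, assuming the tarball was downloaded to /root and Erlang comes from EPEL (paths, package names and the version are illustrative):

# install Erlang from the EPEL repository (package names can differ per distribution)
yum install -y epel-release
yum install -y erlang

# unpack the generic RabbitMQ build to its installation location
tar -xzf pivotal-rabbitmq-server-3.5.1.tar.gz -C /root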

3- Once installed, we need to edit some config files to allow the RabbitMQ server to run:

${RABBITMQ_HOME}/sbin/rabbitmq-defaults :
We need to set the variable ${RABBITMQ_HOME}; this will automatically populate the rest of the variables:

[root@khofo02 sbin]# cat rabbitmq-defaults |grep -v "^#"
RABBITMQ_HOME=/root/pivotal-rabbitmq-server-3.5.1/
SYS_PREFIX=${RABBITMQ_HOME}
ERL_DIR=
CLEAN_BOOT_FILE=start_clean
SASL_BOOT_FILE=start_sasl
BOOT_MODULE="rabbit"
CONFIG_FILE=${SYS_PREFIX}/etc/rabbitmq/rabbitmq
LOG_BASE=${SYS_PREFIX}/var/log/rabbitmq
MNESIA_BASE=${SYS_PREFIX}/var/lib/rabbitmq/mnesia
ENABLED_PLUGINS_FILE=${SYS_PREFIX}/etc/rabbitmq/enabled_plugins
PLUGINS_DIR="${RABBITMQ_HOME}/plugins"
IO_THREAD_POOL_SIZE=64
CONF_ENV_FILE=${SYS_PREFIX}/etc/rabbitmq/rabbitmq-env.conf
[root@khofo02 sbin]#




${RABBITMQ_HOME}/etc/rabbitmq/rabbitmq-env.conf :
We need to set the NODENAME env. variable. This is not mandatory, but it helps if we are going to run multiple RabbitMQ nodes on this host.
Any other env. variable that needs to be defined can also be placed in this file:
 
[root@khofo02 sbin]# cat ../etc/rabbitmq/rabbitmq-env.conf
#example rabbitmq-env.conf file entries
#Rename the node
NODENAME=rabbitx1
#Config file location and new filename without the config extention
#CONFIG_FILE=/root/pivotal-rabbitmq-server-3.5.1/etc/rabbitmq/rabbitmq


[root@khofo02 sbin]#


Note that the CONFIG_FILE variable is already set in the rabbitmq-defaults, so we are commenting this out in the rabbitmq-env.conf.


Note that any reference to the RabbitMQ config file is made without its .config extension, thus the variable will have the value:

CONFIG_FILE=/root/pivotal-rabbitmq-server-3.5.1/etc/rabbitmq/rabbitmq


While the file on the system is:

/root/pivotal-rabbitmq-server-3.5.1/etc/rabbitmq/rabbitmq.config



${RABBITMQ_HOME}/etc/rabbitmq/rabbitmq.config :
This is the main config file for RabbitMQ; it contains most of the configuration for RabbitMQ and its plugins.
For this example we are adding configuration for the management and STOMP plugins.

[
  {rabbit, [{tcp_listeners, [5674]}, {loopback_users, []}]},
  {rabbitmq_management, [{listener, [{port, 15674}, {ip, "0.0.0.0"}]}]},
  {rabbitmq_stomp, [{tcp_listeners, [{"0.0.0.0", 61614}]},
                    {default_user, [{login, "guest"},
                                    {passcode, "guest"}]},
                    {implicit_connect, true}]}
].

Here we are changing the default port from 5672 to 5674 using the rabbit tcp_listeners parameter.
Since by default the guest user can only log in locally, we set the loopback_users parameter to an empty list []; this allows guest to log in from all interfaces.

To force the management plugin to listen on all interfaces we set the ip parameter to 0.0.0.0, and the port number is changed to 15674 using the listener port parameter.

For STOMP we define a set of parameters:
  • tcp_listeners: listen on all interfaces on port 61614 (the default port is 61613).
  • default_user: login guest, passcode guest.
  • implicit_connect: set to true.

Now we are ready to enable the plugins that we are going to use.
To do this we go to ${RABBITMQ_HOME}/sbin and run the commands below.
They will give output similar to the following:


[root@khofo02 sbin]# ./rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbitx1@khofo02... failed.
 * Could not contact node rabbitx1@khofo02.
   Changes will take effect at broker restart.
 * Options: --online  - fail if broker cannot be contacted.
            --offline - do not try to contact broker.
[root@khofo02 sbin]# ./rabbitmq-plugins enable rabbitmq_stomp
The following plugins have been enabled:
  rabbitmq_stomp

Applying plugin configuration to rabbitx1@khofo02... failed.
 * Could not contact node rabbitx1@khofo02.
   Changes will take effect at broker restart.
 * Options: --online  - fail if broker cannot be contacted.
            --offline - do not try to contact broker.
[root@khofo02 sbin]#

Once the plugins are enabled, we can start the RabbitMQ server; from ${RABBITMQ_HOME}/sbin run:

[root@khofo02 sbin]# ./rabbitmq-server -detached
Warning: PID file not written; -detached was passed.
[root@khofo02 sbin]#
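Before checking the logs, a quick sanity check (a hedged sketch; the ports are the ones configured above) that the node is up and its listeners are bound:

# ask the node for its status (picks up the NODENAME from rabbitmq-env.conf)
./rabbitmqctl status

# confirm the AMQP, management and STOMP listeners are bound
ss -ltn | egrep "5674|15674|61614"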


RabbitMQ logs to ${RABBITMQ_HOME}/var/log/rabbitmq/rabbitx1.log, where rabbitx1 is the node name we set above.
If all goes well, the log will look like this:
[root@khofo02 sbin]# tail "/root/springsource/pivotal-rabbitmq-server-3.5.6//var/log/rabbitmq/rabbitx1.log"

=INFO REPORT==== 26-Jul-2016::08:28:38 ===
Server startup complete; 7 plugins started.
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * webmachine
 * mochiweb
 * rabbitmq_management_agent
 * rabbitmq_stomp
 * amqp_client
[root@khofo02 sbin]#

Finally, to stop RabbitMQ, we run the command:

[root@khofo02 sbin]# ./rabbitmqctl stop
Stopping and halting node rabbitx1@khofo02 ...
[root@khofo02 sbin]#


I will discuss the clustering of RabbitMQ in another post.


Monday 25 July 2016

MySQL root password reset

I have seen this many times over my career: the need to reset the MySQL root password.

This can be done in two ways.

1- You know the root password and want to change it:

This is the simple case; we do not need to bring down the DB. The reset can be done from any terminal that can log in to MySQL as root, usually on the local server running mysqld.

Log in to the mysql client with the password we have:

# mysql -u root -p

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

or, on versions where ALTER USER is not available:

mysql> UPDATE mysql.user
    SET authentication_string = PASSWORD('MyNewPass'), password_expired = 'N'
    WHERE User = 'root' AND Host = 'localhost';
mysql> FLUSH PRIVILEGES;
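A quick hedged check that the new password took effect (the password is inline only for the test):

mysql -u root -p'MyNewPass' -e "select current_user();"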




2- You lost the root password and want to reset it:

This needs the generic reset procedure. First, stop the MySQL service:

# systemctl stop mysql.service

Then bring up mysqld with --skip-grant-tables and --skip-networking; the latter prevents external users from connecting to our server while the grant tables are disabled and every connection gets full privileges.

# mysqld --skip-grant-tables --skip-networking --user=mysql --pid=/var/run/mysqld/mysqld.pid

Then log in as root:

# mysql -u root

mysql> FLUSH PRIVILEGES;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

Running "FLUSH PRIVILAGES" is to reload the whole permission system in mysql so that we can change the root password.
Once this is done, we need to kill the skip-grant-tables instance and then bring up mysql normally.
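A minimal sketch of that last step (the pid file path is the one passed to mysqld above):

# kill the temporary instance started with --skip-grant-tables
kill `cat /var/run/mysqld/mysqld.pid`

# start MySQL normally and verify the new password
systemctl start mysql.service
mysql -u root -p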

The MySQL documentation describes this quite well at http://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html


Tuesday 19 July 2016

Simple Apache Archiva crawler

I had a requirement to identify newly deployed applications on the fly.
The applications are pushed to an Apache Archiva repository that is used to store the application snapshots.

The requirement also states that new releases of already-deployed apps need to be identified automatically.

To accomplish this, we need a simple crawler that goes and looks for the latest war files stored in Archiva.

Below is a simple script to do this:

ARCHIVA_BASE="http://archiva:8080/archiva/repository/snapshots/com/sherif/"
BASEURLS=`curl -s ${ARCHIVA_BASE} |grep "<li><a href=" |cut -d"\"" -f2 |grep -v "\.\./"`
for pUrl in `echo ${BASEURLS}`
do
        #echo ${ARCHIVA_BASE}${pUrl}
        LAST_SNAPSHOT=`curl -s ${ARCHIVA_BASE}${pUrl} | grep "<li><a href=" |cut -d"\"" -f2|egrep -v "xml|\.\./"|egrep "[0-9]+\.[0-9]+\.[0-9]+" |tr "." ","|sort -n -t"," -k1,2|tr "," "."| tail -1`
        if [ "x${LAST_SNAPSHOT}" = "x" ]
        then
                continue
        else
                #echo ${ARCHIVA_BASE}${pUrl}${LAST_SNAPSHOT}
                WAR=`curl -s ${ARCHIVA_BASE}${pUrl}${LAST_SNAPSHOT} | grep "<li><a href=" |cut -d"\"" -f2 |egrep "\.war$"|egrep -v "md5|sha|pom"`
                if [ "x${WAR}" = "x" ]
                then
                        continue
                else
                        echo ${ARCHIVA_BASE}${pUrl}${LAST_SNAPSHOT}${WAR}
                fi


        fi
        #read
done


The script makes some assumptions, as per the requirement:

1- The application folders are immediately under the Archiva base URL above.
2- Only war files will be deployed; with a small modification we can also pick up other file types (see the snippet after this list).
3- The release snapshot folders are immediately under the application folders.
4- The release snapshot folders have the format "11.22.33{Anystring}", i.e. they contain 3 numeric sections, and those are used for sorting.
5- The war files are found immediately under the snapshot folder.
6- Each snapshot folder contains only 1 war file.
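For example (a hedged tweak of the script above, keeping its variable names), assumption 2 can be relaxed to also pick up jar files by widening the extension filter on the WAR line:

WAR=`curl -s ${ARCHIVA_BASE}${pUrl}${LAST_SNAPSHOT} | grep "<li><a href=" |cut -d"\"" -f2 |egrep "\.(war|jar)$"|egrep -v "md5|sha|pom"`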


Wednesday 13 July 2016

Fixing Docker SSL problems in CentOS

I had been blocked by SSL issues and unable to use Docker for quite a while.
I did some reading and found useful info in the OpenSSL documentation and on a user blog about Docker.

The issue is that my company uses its own SSL cert to re-encrypt all SSL traffic after it is filtered on the internal network.
The root CA cert is not trusted by browsers and tools out of the box, so it needs to be imported to make your life less painful :)

To import a cert on CentOS we need to save it under the path below:

/usr/share/pki/ca-trust-source/anchors

The anchors folder should contain certs that are in PEM format.
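If the cert was handed over in DER format, it can be converted to PEM first (the file names here are just illustrative):

# convert a DER-encoded root cert to PEM and drop it into the anchors folder
openssl x509 -inform der -in company-root.cer -out company-root.pem
cp company-root.pem /usr/share/pki/ca-trust-source/anchors/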
Once the cert is saved, you need to run the command:

 update-ca-trust

This will update the system wide trust store at:

/etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt

This file is linked under:

/etc/ssl/certs

Once this is done, you need to follow the steps in this post:
http://richmegginson.livejournal.com/27936.html

In my case, the steps are:


 cd /etc/docker/certs.d
 mkdir dseab33srnrn.cloudfront.net
 cd dseab33srnrn.cloudfront.net
 ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
 systemctl restart docker

Each time a docker pull is needed from a new registry host, we need to repeat the last part of the steps so that Docker trusts the cert for that host.
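A generalized sketch of that repetition (the registry hostname is just a placeholder):

REGISTRY_HOST=registry.example.com   # placeholder for the host you are pulling from
mkdir -p /etc/docker/certs.d/${REGISTRY_HOST}
ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/docker/certs.d/${REGISTRY_HOST}/
systemctl restart docker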

Thanks to Rich Megginson for solving this issue by digging through the Docker code.

Monday 11 July 2016

Rundeck user management

To add a new user to Rundeck, we need to edit the file:
rundeck/server/config/realm.properties

The file looks like this:

$ cat realm.properties
#
# This file defines users passwords and roles for a HashUserRealm
#
# The format is
#  <username>: <password>[,<rolename> ...]
#
# Passwords may be clear text, obfuscated or checksummed.  The class
# org.mortbay.util.Password should be used to generate obfuscated
# passwords or password checksums
#
# This sets the temporary user accounts for the Rundeck app
#
admin:admin,user,admin,api_token_group
user:user,user
sherif:sherif,otherusers,user,api_token_group


To authorize the user with certain privileges, we create a new policy file at:
rundeck/etc/otherusers.aclpolicy

$ cat otherusers.aclpolicy

description: Limited user access for adm restart action
context:
  project: 'someproj.*'
for:
  resource:
    - allow: [read]
  job:
    - allow: [read,run,kill]
  node:
    - allow: [read,run,refresh]
by:
  group: [otherusers]
---
description: Limited user
context:
  application: 'rundeck'
for:
  #resource:
   # - equals:
    #    kind: system
    #  allow: [read] # allow read of system info
  project:
    - match:
        name: 'someproj.*'
      allow: [read]
by:
  group: [otherusers]
$


This policy grants the group "otherusers" limited access: its members can only see, run and kill jobs for projects matching the "someproj.*" pattern.
The policy is a modified copy of the admin policy.

Both the policy and the realm files will be loaded automatically by rundeck, no restart is required.

HAproxy configuration as an HTTP router / balancer

HAproxy is a very flexible and reliable TCP proxy and load balancer.
It can be used as a generic TCP proxy / port mapper or as a TCP load balancer.
It also supports an HTTP protocol mode, where it works as an HTTP proxy server and load balancer.

In this post I will focus on the HTTP mode; it is mostly used to implement web proxies that provide high availability for web applications. Below is a sample config for HAproxy as an HTTP proxy and router:


#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#   http://haproxy.1wt.eu/download/1.3/doc/configuration.txt
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 
    maxconn     400000
    user        sherif
    group       sherif
    daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode        http
    log         global
    option      dontlognull
    option      httpclose
    option      httplog
    option      forwardfor
    option      redispatch
    timeout connect 100000 # default 10 second time out if a backend is not found
    timeout client 600000
    timeout server 600000
    maxconn     600000
    retries     3
    errorfile 503 /etc/haproxy/errors/503.http
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  main vardamire:8080
acl app1_url                     path_beg     /app1
acl test_url                    path_beg    /tst
acl app2_url                    path_dir    app2
###################################################################
use_backend app1_8080                        if   app1_url
use_backend app2_8081                        if   test_url app2_url   
use_backend default_services
###################################################################
backend app1_8080
    balance     roundrobin
    option httpchk GET /app1/alive
    cookie JSESSIONID prefix nocache
    server      app1    server1:8080 check cookie a1
    server      app2    server2:8080 check cookie a2
    server      app3    server3:8080 check cookie a3 


backend app2_8081
    balance     roundrobin
    option httpchk GET /tst/app2/alive
    server      app1     server2:8081 check
    server      app2     server3:8081 check
 

backend default_services
   server app1 web.toplevelbalancer:8080 check

listen stats *:1936
    stats enable
    stats uri /
    stats hide-version
    stats auth admin:admin


The above config is an example of HTTP routing and balancing based on URL patterns.
It also shows how HAproxy can handle session stickiness.

The acl keyword defines a URL pattern using either path_beg (path begins with) or path_dir (path contains the directory portion).
The backend keyword then defines the application backends that will do the actual serving of the content.
The balance keyword tells HAproxy to do round-robin load balancing between the defined servers, and option httpchk makes HAproxy do an HTTP check on the given URI for each defined server to determine whether it is up.
Finally, the cookie keyword is used to prefix the JSESSIONID cookie and have it checked by HAproxy to maintain session stickiness: HAproxy prefixes JSESSIONID with the cookie value defined for each server and can thus keep track of which session goes to which server.
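To see the routing in action, requests can be sent through the proxy (a hedged sketch; the host and paths come from the frontend bind, the acls and the health-check URIs above):

# matches the app1_url acl -> routed to backend app1_8080
curl -v http://vardamire:8080/app1/alive
# matches both test_url and app2_url -> routed to backend app2_8081
curl -v http://vardamire:8080/tst/app2/alive
# anything else falls through to default_services
curl -v http://vardamire:8080/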

Lastly, we enable the HAproxy statistics page so that we can monitor the status of our backends and the stats of the requests coming to them.
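The stats page can then be checked in a browser or with curl (credentials and port as set in the listen stats section above; the hostname is whatever machine HAproxy runs on):

curl -u admin:admin http://vardamire:1936/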

This config was used successfully to route and balance 40+ services for a big project, and it worked fairly smoothly even under load testing.

HAproxy is very lightweight and can handle tens of thousands of connections without issues.



Sunday 10 July 2016

Some SQL tips used for a puppet migration project

The Puppet console stores its info in a DB, usually Postgres; we needed to extract some info from it for a Puppet migration project.
The tables we were looking at were the nodes table and the parameters table.
Those need to be joined so we can extract all the parameters defined for a node, as follows:


select nodes.name, parameters.key, parameters.value
from nodes, parameters
where nodes.id = parameters.parameterable_id
order by nodes.name;

The output of this query was exported and cleaned up with "sed" so that we could import it as a table once more (the re-imported table is called param below).
However, Puppet stores the data as key-value items tied to each node name, so if you have 3 parameters per node you end up with 3 records with the same node name and different key-value pairs.

To transform this into a relational-DB-like table, I used this query with the help of my colleague Mohamed Youssef, our senior DB engineer:


create view nodes as (
  select r.name, r.value as val1, a.value as val2, g.value as val3
  from
    (select name, value from param where key = 'val1') r,
    (select name, value from param where key = 'val2') a,
    (select name, value from param where key = 'val3') g
  where r.name = a.name
    and r.name = g.name
);
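Once the view exists, each node shows up as one row, so it can be queried like a normal relational table. A hedged usage sketch through psql (the database and node names are made up):

psql -d nodes_export -c "select name, val1, val2, val3 from nodes where name = 'node01.example.com';"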