Thursday 25 August 2016

Python based SSH / password client

Previously I used expect to automate SSH to hosts that don't have password-less login configured yet.
expect is a bit confusing to use, and it took me some time to put together a good reusable expect script. (You can find more about it in this older post.)

Python provides a more elegant solution using the Paramiko SSH2 library.
With it we can create a simple SSH client that takes a host, a username, a base64-encoded password and a command string from the command line, then executes the command on the remote host.

This can then be used further inside any script similar to the SSH command.

These scripts were created on CentOS 7 using the Python 2.7 CentOS packages. Paramiko came along with my Python installation, but you can install it if it is missing using:

# pip install paramiko

CentOS doesn't come with pip, so you need to install it using yum.

Now, let me show a very small password encoder so we can use base64 encoding to hide the password used on the command line:

[root@feanor ~]# cat ./b64encode.py
#!/usr/bin/python

import base64
import sys

string = sys.argv[1]
codedstring = ''
codedstring = base64.b64encode(string)

print string, codedstring
[root@feanor ~]#


[root@feanor ~]# ./b64encode.py mypassword
mypassword bXlwYXNzd29yZA==
[root@feanor ~]#



The above generates the encoded password and can be run on a secure machine. The base64-encoded password then needs to be stored in a secured file (user-only access, e.g. mode 600) and passed to our script as below:

[root@feanor ~]# ./sshconnect.py localhost root "`cat ~/passwordfile`" "uname -a;hostname"



Some might argue that base64 can be decoded trivially, and they would be right: base64 is an encoding, not encryption. Still, it is better than passing a plain-text password on the command line.
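To see just how trivial that decoding is, here is the round trip in the interpreter (Python 3 bytes handling shown here):

```python
import base64

# base64 is an encoding, not encryption: anyone holding the string
# can recover the original password with a single decode call.
encoded = base64.b64encode(b"mypassword")
print(encoded.decode())                    # bXlwYXNzd29yZA==
print(base64.b64decode(encoded).decode())  # mypassword
```

So treat the encoded file exactly as you would treat the password itself.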



The actual script can be found below:

#!/usr/bin/python

import sys
import paramiko
import base64

if len(sys.argv) < 5:
    print "usage: sshconnect.py <host> <user> <base64password> <command>"
    sys.exit(1)

hostname = sys.argv[1]
username = sys.argv[2]
b64password = sys.argv[3]
command = sys.argv[4]

password = base64.b64decode(b64password)

port = 22
 

# create the client before the try block so the finally clause
# can always close it safely
clientSession = paramiko.SSHClient()
try:
    clientSession.load_system_host_keys()
    clientSession.set_missing_host_key_policy(paramiko.WarningPolicy())
    clientSession.connect(hostname, port=port, username=username, password=password)
    stdin, stdout, stderr = clientSession.exec_command(command)
    print stdout.read()
finally:
    clientSession.close()





Again, it is always better to use SSH public key authentication; most of the time I use this method just once to set up the keys, and never use the password for logging into the machine again.
If passwords must be used, they need to be strong and long: 15 characters at least.

Some example random password generators can be found at:

http://passwordsgenerator.net/  (Should be using https for this site, though it offers a good set of password generation options)
https://www.random.org/passwords/ (Use advanced mode and change the seed)
https://strongpasswordgenerator.com/
https://www.grc.com/passwords.htm   (Really strong 64 char passwords !!)
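If you prefer to generate passwords locally instead of trusting a website, Python's secrets module (Python 3.6+) does the job; a minimal sketch, with the function name and alphabet being my own choices:

```python
import secrets
import string

def gen_password(length=20):
    """Generate a random password from letters, digits and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(gen_password())  # prints a random 20-character password
```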

Wednesday 24 August 2016

Changing puppet master used by an agent node

This is how to change the puppet master used by an agent node.
The agent needs to connect to a new master and get its config from that new master onward.

The steps below pertain to Puppet Enterprise agent 3.x.
To do this we basically need to do 3 things.

1- Remove the old master's SSL certificates from the agent filesystem.
Go to the folder /etc/puppetlabs/puppet on the agent and remove the ssl folder (or move it aside under a new name), then create an empty ssl folder.

2- In the same folder, edit the config file puppet.conf.
The main section should show the new puppet master hostname in the server setting:

[main]
   server = mynewpuppetmaster.host.name

3- Once the above is done, trigger a puppet run using:

# puppet agent -t

This triggers the puppet agent to create new SSL certs, and the certificate request is automatically sent to the new puppet master.
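The three steps above can be sketched in Python. The paths match the PE 3.x layout described here, but the function name is mine and the puppet.conf rewrite is naive (it assumes a server line already exists under [main]), so treat this as a starting point, not a finished tool:

```python
import os
import shutil
import subprocess

def repoint_agent(confdir, new_master, run_agent=True):
    """Sketch: re-point a PE 3.x agent at a new puppet master."""
    # Step 1: move the old ssl folder aside and create an empty one
    ssl_dir = os.path.join(confdir, "ssl")
    if os.path.isdir(ssl_dir):
        shutil.move(ssl_dir, ssl_dir + ".old")
    os.makedirs(ssl_dir)
    # Step 2: naive rewrite of the server line in puppet.conf
    conf = os.path.join(confdir, "puppet.conf")
    with open(conf) as f:
        lines = f.readlines()
    with open(conf, "w") as f:
        for line in lines:
            if line.strip().startswith("server"):
                f.write("   server = %s\n" % new_master)
            else:
                f.write(line)
    # Step 3: trigger a run so a new cert request goes to the new master
    if run_agent:
        subprocess.call(["puppet", "agent", "-t"])
```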


On the server, the request lands in /etc/puppetlabs/puppet/ssl/ca/requests.
You can list the outstanding requests using:

# puppet cert list

From the Puppet Enterprise web console you can accept the new cert request, or you can do it from the command line using:

# puppet cert sign mypuppetagent.host.name

One issue I saw while doing this was that the master's filesystem was 100% full, so the cert requests failed and never showed up in the console.
This happened because the request PEM file could not be written to the requests directory on the master.
The error looked like this when running:

# puppet cert list
Error: header too long

and also on the client side while running:

# puppet agent -t

Info: Caching certificate for ca
Error: Could not request certificate: Error 400 on SERVER: header too long


To fix this, simply ensure the filesystem used by the puppet master is not 100% full.
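A quick way to keep an eye on this from Python (POSIX only, via os.statvfs; the function name and the 90% threshold are mine):

```python
import os

def fs_used_pct(path):
    """Percentage of the filesystem holding `path` that is in use."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    return 100.0 * (total - free) / total

print("root filesystem: %.1f%% used" % fs_used_pct("/"))
# on the master you would check the path holding the SSL data, e.g.:
# if fs_used_pct("/etc/puppetlabs") > 90: send an alert
```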


Wednesday 17 August 2016

Hardening Your Apache Configuration

Exposing an Apache web server to the public internet is serious business.
Any exposed service needs to be secured and should expose only the functionality the application actually needs.

Apache ships with many things disabled by default, but some more work is still needed to ensure your server is secure and not open to easy attacks.

Below are some examples of configurations that could help make apache more secure.

Hiding the version of Apache:

We need to hide all the info Apache exposes about itself and the host machine. To do this, set the below in the global config section:

ServerSignature Off
ServerTokens Prod


We can also try to use mod_headers to unset the Server header, though most Apache binary distributions have this hard-coded:

Header unset Server

 

Turn off directory browsing:

Apache by default allows directory listings for filesystem-derived URLs under the default document root.
This is an issue as it can expose a lot of info about the local files served by Apache. To stop this, disable indexes in the relevant location scopes:

Options -Indexes

Disabling TRACE method:

We need to disable all the HTTP methods that are not going to be used by the application, most importantly the TRACE method.
To do this, use either of the below configurations:

TraceEnable Off

or:

RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
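A quick way to verify the result from a client; the function name is mine, and it just reports the status code a TRACE request gets back (405 with TraceEnable Off, or 403 with the rewrite rule; 200 means TRACE is still answered):

```python
import http.client

def trace_status(host, port=80, timeout=5):
    """Return the HTTP status code the server sends for TRACE /."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("TRACE", "/")
        return conn.getresponse().status
    finally:
        conn.close()
```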


Enable Mod_reqtimeout to prevent Slowloris attack:


This prevents Apache's resources from being depleted by slow connections that hold them for long durations, eventually leading to denial of service.
To counter this attack, enable mod_reqtimeout and set its parameters to values appropriate for the application.
The values below are for demo purposes only:

LoadModule reqtimeout_module modules/mod_reqtimeout.so

<IfModule mod_reqtimeout.c>
   RequestReadTimeout header=120,MinRate=500 body=120,MinRate=500
</IfModule>


Remove the default Error document:

Apache uses a set of default error documents that are presented on HTTP error codes like 404 or 500.
Those pages expose info about Apache, and it is better to replace them with custom pages.
A minimal config for the most common error codes is shown below:

ErrorDocument 500 "Internal Server Error"
ErrorDocument 404 "The requested URI was not found"
ErrorDocument 503 "Service Unavailable"
ErrorDocument 403 "Forbidden"

Disable all not needed modules:

Disable all the modules not needed by the application: simply comment them out and enable only the ones your application requires.
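A small helper (my own naming; it only understands plain LoadModule lines) to audit which modules a config file currently enables:

```python
def enabled_modules(conf_path):
    """Return module names from uncommented LoadModule lines
    in an Apache config file."""
    mods = []
    with open(conf_path) as f:
        for line in f:
            line = line.strip()
            # commented-out LoadModule lines are skipped automatically,
            # since they start with '#' rather than 'LoadModule'
            if line.startswith("LoadModule"):
                parts = line.split()
                if len(parts) >= 2:
                    mods.append(parts[1])
    return mods
```

Run it against your httpd.conf (and any included conf.d files) and question every module in the output.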


A more comprehensive list covering further hardening and security fixes can be found at the link below:

https://geekflare.com/apache-web-server-hardening-security/

Wednesday 10 August 2016

Overriding the backend service error page when using Apache ProxyPass config

Most of the apps I have supported before were large enterprise-grade apps with standalone content management software and a large content authoring and publishing team.
Most of the time that team takes care of everything content-related, like site version notes and error pages.

I came across a smaller project this week and was asked to handle the errors coming out of the application, for security reasons.
The application was hosted on Tomcat with Apache in front of it doing the ProxyPass work and Shibboleth authentication.

Tomcat's default error page is very basic yet gives away a lot of info about the Tomcat version your application runs, so it is a good idea to hide it if possible.

The simplest way to do this from Apache is to set ProxyErrorOverride to On, as seen below:


ProxyPass "/" "http://backend_tomcat_host:8080/"
ProxyPassReverse "/" "http://backend_tomcat_host:8080/"

ProxyErrorOverride On
ErrorDocument 500 "Internal Server Error"
ErrorDocument 404 "The requested URI was not found"
ErrorDocument 503 "Service Unavailable"
ErrorDocument 403 "Forbidden"

The above offers the user the most basic error pages possible and hides all the Tomcat details.
More complex Error pages can be used by replacing the above simple text with a URI of an html error page.
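To confirm the override actually hides the backend, you can probe an error page and grep the body for tell-tale strings; this checker (function name, default path and needle list are mine) requests a URI that should 404 and reports any server-identifying strings it finds:

```python
import http.client

def error_page_leaks(host, port=80, path="/definitely-missing",
                     needles=("Tomcat", "Apache/")):
    """Fetch an error page and return which tell-tale strings
    appear in its body (empty list means nothing leaked)."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read().decode("utf-8", "replace")
        return [n for n in needles if n in body]
    finally:
        conn.close()
```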

For apache 2.2.x the full documentation is available at: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html#proxyerroroverride


Monday 8 August 2016

RabbitMQ Clustering configurations

In this post we will see how to cluster RabbitMQ in 2 configurations:
one using the same host, called a vertical cluster, and the other using 2 different hosts, called a horizontal cluster.


Create Vertical clustering


1- Start RabbitMQ with different node names and port numbers, similar to below:

RABBITMQ_NODE_PORT=5672 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15672}]" RABBITMQ_NODENAME=rabbit ./rabbitmq-server -detached
RABBITMQ_NODE_PORT=5673 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15673}]" RABBITMQ_NODENAME=rabbit2 ./rabbitmq-server -detached

 
Using the environment-variable form of starting RabbitMQ above, instead of the config file, lets us use different ports and node names.
The config file should be empty in this case.
The above starts 2 RabbitMQ nodes on the same VM, each with its own unique name and its own unique set of ports.
2- To put the 2 nodes in a single cluster, with one node as DISC and one as RAM, do the below:

./rabbitmqctl -n rabbit2@beren stop_app -> stop the RabbitMQ application; the node itself is still running on host beren
./rabbitmqctl -n rabbit2@beren join_cluster --ram rabbit@beren -> add rabbit2 as a RAM node to the rabbit cluster (default is DISC)
./rabbitmqctl -n rabbit2@beren start_app -> start the RabbitMQ application
./rabbitmqctl -n rabbit@beren cluster_status -> check the cluster status; it should show something like this:

Cluster status of node rabbit@beren ...
[{nodes,[{disc,[rabbit@beren]},{ram,[rabbit2@beren]}]},
{running_nodes,[rabbit2@beren,rabbit@beren]},
{partitions,[]}]
...done.

 
3- To change the node type from RAM to DISC do:

./rabbitmqctl -n rabbit2@beren stop_app
./rabbitmqctl -n rabbit2@beren change_cluster_node_type disc
./rabbitmqctl -n rabbit2@beren start_app

 
To change it back to RAM do:

./rabbitmqctl -n rabbit2@beren stop_app
./rabbitmqctl -n rabbit2@beren change_cluster_node_type ram
./rabbitmqctl -n rabbit2@beren start_app

Note that the cluster should have at least one disc node. RAM nodes don't offer faster queue performance; they only offer better speed on queue creation. It is recommended to have at least 1 disc node on every VM for high availability, and to add RAM nodes in a vertical cluster as described above only if needed.

Create Horizontal clustering


To do this we need a second VM running RabbitMQ.
On the new 2nd node do the following:
1- Copy the Erlang cookie file from the RabbitMQ user's home directory on the old VM to the user's home directory on the new VM.
The cookie file name is .erlang.cookie.

2- Go to RABBITMQ_HOME/sbin and start RabbitMQ:

./rabbitmq-server -detached

3- Run the below list of commands:

./rabbitmqctl -n rabbit@vardamir stop_app
./rabbitmqctl -n rabbit@vardamir join_cluster rabbit@beren
./rabbitmqctl -n rabbit@vardamir start_app


All the queues and topics created will automatically become visible to the newly joining node.
This creates a 2-node, 2-VM cluster that offers true HA for RabbitMQ. Queues can be configured to work in HA mode by creating a policy from the RabbitMQ admin console; the policy should contain ha-mode: all so that the queues are mirrored across the whole cluster.
Replication can be enabled in the same manner.

HAproxy config


HAProxy can be used to offer true load-balancing for a RabbitMQ cluster.
The cluster doesn't offer a common gateway by default, so to benefit from it we need a balancer sitting in front of it.
HAProxy helps here as it is very lightweight and a strong performer.
The config for balancing 2 nodes should look like this:

# Simple configuration for an HTTP proxy listening on port 8080 on all
# interfaces and forwarding requests to a single backend "servers" with a
# single server "server1" listening on 127.0.0.1:8000

global
    log 127.0.0.1 local0 info
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    retries 3
    option  redispatch
    maxconn 2000
    timeout connect 10s
    timeout client  120s
    timeout server  120s

listen rabbitmq_local_cluster 0.0.0.0:8080
    balance roundrobin
    server rabbitnode1 beren:5672    check inter 5000 rise 2 fall 3 # disc node
    server rabbitnode2 vardamir:5672 check inter 5000 rise 2 fall 3 # disc node

listen private_monitoring :8101
    mode http
    option httplog
    stats enable
    stats uri /stats
    stats refresh 30s


The above makes use of HAProxy TCP mode, since RabbitMQ speaks AMQP, which is not HTTP-compatible.
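Since the frontend is plain TCP, a trivial socket probe is enough to check that the balancer is up and accepting connections; a minimal sketch (function name is mine, host/port are whatever your frontend uses):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("haproxy-host", 8080) to probe the rabbitmq_local_cluster frontend
```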