Published by Sherif Abdelfattah, a DevOps engineer. This personal blog aims to help solve day-to-day problems and to document and share solutions and hints for DevOps professionals.
We have been using YAML for Puppet config for quite a while now. With the configs getting bigger and more complex, we decided to move all the Puppet Hiera config into a database rather than YAML files.
One issue that emerged is that the YAML config is too complex to be parsed and escaped reliably with standard shell/awk tools; it can be done, but it would be convoluted and probably not foolproof.
Instead, I used Python to do the job. Python can parse the YAML file and actually insert the data into a table for us. Here is a script that does the key and value inserts:
[root@Vardamir ~]# cat pyparser.py
import sys
import yaml
import psycopg2

# Connect to the hiera database
try:
    conn = psycopg2.connect("dbname='hiera' user='hiera' host='localhost' password='hiera'")
except psycopg2.Error:
    print("I am unable to connect to the database")
    sys.exit(1)

cur = conn.cursor()

# Parse the yaml file passed as the first argument and insert every key/value pair
with open(sys.argv[1], 'r') as stream:
    try:
        doc = yaml.safe_load(stream)
        for yamlkey, yamlvalue in doc.items():
            # List-like values: escape existing double quotes and turn single quotes
            # into double quotes; plain scalars just get wrapped in double quotes
            if str(yamlvalue).find("[") != -1:
                txt = str(yamlvalue).replace("\"", "\\\"").replace("'", "\"")
            else:
                txt = "\"" + str(yamlvalue) + "\""
            cur.execute("""INSERT INTO keyval(key,val) VALUES ( %s , %s )""", (yamlkey, txt))
        conn.commit()
    except yaml.YAMLError as exc:
        print(exc)
[root@Vardamir ~]#
To run this, just pass the YAML file as a parameter:
#python pyparser.py act.yaml
For this to work we need to install some packages for Python:
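Going by the imports, the script needs the PyYAML and psycopg2 modules; as a minimal example, assuming pip is available, they can be installed with:
# install the yaml parser and the postgres driver used by pyparser.py
pip install PyYAML psycopg2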
Searching the public registry for mysql images returns entries like these:
docker.io   docker.io/lancehudson/docker-mysql   MySQL is a widely used, open-source relati...   0   [OK]
docker.io   docker.io/livingobjects/mysql        MySQL                                           0   [OK]
docker.io   docker.io/nanobox/mysql              MySQL service for nanobox.io                    0   [OK]
docker.io   docker.io/projectomakase/mysql       Docker image for MySQL                          0   [OK]
docker.io   docker.io/tozd/mysql                 MySQL (MariaDB fork) Docker image.              0   [OK]
docker.io   docker.io/vukor/mysql                Build for MySQL. Project available on http...   0   [OK]
[root@fingolfin stock_apache_docker]#
Also, there is a useful link on how to set up a local registry:
https://docs.docker.com/registry/deploying/
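As a quick illustration of what that page describes, a local registry can be started as a container (using the registry:2 image):
# run a local registry listening on host port 5000
docker run -d -p 5000:5000 --name registry registry:2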
1- Docker rm:
docker rm is used to remove docker containers. For this to work we need to list all the containers that were created in our docker system; to do this use:
docker ps -a |cut -d" " -f1|tail -n +2
Then, to remove all the containers, run this one-line script:
docker rm `docker ps -a |cut -d" " -f1|tail -n +2`
This will remove all the containers :)
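The same cleanup can also be written with the -q flag of docker ps, which prints only the container IDs:
# list all container IDs and remove them (add -f to docker rm to force-remove running ones)
docker rm $(docker ps -aq)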
2- Docker rmi:
This command is used to remove docker images. I am using this to get rid of temporary and unused images to save disk space:
[root@fingolfin stock_apache_docker]# docker images
REPOSITORY          TAG       IMAGE ID       CREATED          VIRTUAL SIZE
<none>              <none>    508a8098b89d   17 minutes ago   282.3 MB
<none>              <none>    68cb22da5789   41 minutes ago   282.3 MB
<none>              <none>    fa8600453d4f   8 hours ago      282.3 MB
<none>              <none>    1324b4fad9fc   8 hours ago      282.3 MB
<none>              <none>    eb9d45dd81cd   8 hours ago      282.3 MB
<none>              <none>    cbf901a1a88e   8 hours ago      282.3 MB
<none>              <none>    b16a988c6668   8 hours ago      282.3 MB
<none>              <none>    e2ac7f0940f9   8 hours ago      282.3 MB
<none>              <none>    e74dccbf3e7d   24 hours ago     310.8 MB
<none>              <none>    b8cae09ad3bf   25 hours ago     196.7 MB
docker.io/ubuntu    latest    8444cb1ed763   2 days ago       122 MB
docker.io/centos    centos6   d487f1b804de   12 days ago      194.5 MB
docker.io/centos    latest    8c59c0a396b7   12 days ago      196.7 MB
[root@fingolfin stock_apache_docker]#
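To actually delete images, docker rmi takes one or more image IDs. The untagged ones above can, for example, be cleaned up in one go with the dangling filter:
# remove all dangling (untagged) images to reclaim disk space
docker rmi $(docker images -q -f dangling=true)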
3- Docker run:
This is a sample command to run an apache container and map container port 80 to host port 9090. We also use the volumes feature of docker to mount host directories into the container, for the apache htdocs at /var/www/html and the logs at /var/log/httpd:
docker run -p 9090:80 -v /root/docker_stage/stock_apache_docker/html:/var/www/html -v /root/docker_stage/stock_apache_docker/logs:/var/log/httpd 68cb22da5789
You can put any content in the host html folder and the apache inside docker will pick it up. You can also collect the apache logs from the host logs folder with a simple shell script!
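As a minimal sketch of such a script (using the host log path from the run command above; the archive location is just an example):
# bundle the apache logs written by the container into a dated tarball
tar czf /root/apache_logs_$(date +%F).tar.gz -C /root/docker_stage/stock_apache_docker/logs .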
Another form of docker run is to open a shell and actually execute commands inside the container directly. This needs a terminal allocated (-t) and an interactive session (-i), as below:
[root@fingolfin logs]# docker run -t -i centos:centos6 /bin/bash
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
[root@204005540b83 /]# hostname
204005540b83
[root@204005540b83 /]# cat /etc/redhat-release
CentOS release 6.7 (Final)
[root@204005540b83 /]# exit
[root@fingolfin logs]#
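Changes made this way live only in that container; if they should be kept as a new image, docker commit can snapshot the container (the image name here is just an example):
# save the modified container as a new local image
docker commit 204005540b83 centos6-custom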
4- Docker build:
Docker build will process a predefined dockerfile and execute the steps in it one after the other, using the container resulting from step n-1 to do step n. This makes it easy to create custom images as needed.
Note that in the dockerfile we need to suppress all user interaction, and also make sure that the ENTRYPOINT and/or CMD commands run in the foreground; if they run as a service, something else would need to block the container from exiting, say a tail -f on a non-rotating log.
Also note the difference between RUN and ENTRYPOINT/CMD. RUN executes its command at build time and its effect is baked into the image; in this case it is a yum install, but it could also be an mkdir or any setup script that changes the image. ENTRYPOINT/CMD do not run at build time; instead they run when a container is started from the image, say with a docker run command. ENTRYPOINT should only appear once and thus usually expresses the purpose of the image, in our case running apache. CMD provides default arguments to the ENTRYPOINT command (if CMD appears more than once, only the last one takes effect).
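As a rough sketch of the kind of dockerfile being described (the base image, package, and log path are assumptions; it uses the tail -f trick mentioned above to keep the container in the foreground):
FROM centos:centos6
# RUN executes at build time: bake apache into the image with no user interaction (-y)
RUN yum install -y httpd && yum clean all
EXPOSE 80
# ENTRYPOINT runs when a container starts: launch apache, then block on its log so the container keeps running
ENTRYPOINT ["/bin/sh", "-c", "service httpd start && tail -F /var/log/httpd/error_log"]
It would then be built with something like: docker build -t stock_apache .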
It's always better to use one container per piece of software; it makes things much easier.
One more thing to add to this post: we need the satellite client node to have EPEL installed and also to have the nagios plugins installed, so that Icinga can do its thing. To do this:
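On a CentOS/RHEL node this usually boils down to something like the following; the exact package names and repo setup may differ in a Satellite-managed environment:
# enable EPEL, then install the standard nagios plugin set that icinga checks rely on
yum install -y epel-release
yum install -y nagios-plugins-all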
This is a script building on the previous post, which described how to run remote commands from the puppet master using mcollective. The script uses the mco command to control Tomcat, and it can be called by either Jenkins or Rundeck to automate starting and stopping Tomcat.
The motive behind making this is twofold:
1- There is no simple way to trigger a restart on nodes from puppet; you need to change puppet code and run the puppet agent multiple times to achieve this.
2- In my case, using puppet to recycle Tomcat relies on init scripts. These do not set a current working directory, which causes issues for the apps deployed in my Tomcat, so I always need to use the tcruntime script from the bin directory to have the PWD set correctly.
The script manages to achieve this and is still in line with my puppet strategy.
[root@vardamir]# cat restart_tc_mco.sh
for OPT in $*
do
    # Look for the target env
    if [ -n "`echo ${OPT}|grep 'env='`" ]
    then
        ENV=`echo ${OPT}|cut -d"=" -f2`
    # Look for the TC instance name
    elif [ -n "`echo ${OPT}|grep 'tc='`" ]
    then
        TC_NAME_LIST=`echo ${OPT}|cut -d"=" -f2`
    else
        echo "Wrong command line parameters"
    fi
done

DEPLOY_DESC=/apps/topo/deploy_topology_${ENV}.dat

for TC_NAME in `echo ${TC_NAME_LIST}|tr "," " "`
do
    for NODE in `cat $DEPLOY_DESC|grep -v "^#"|grep ${TC_NAME}|cut -d":" -f2`
    do
        echo "Restarting TC ${TC_NAME} on Node ${NODE}"
        # Lots of escaping done to have this work:
The script starts by parsing a couple of parameters: an env and a comma-separated list of TC instance names. It then uses a topology file to map which node hosts which TC instance.
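For illustration only (the instance names and node names are made-up examples, not the real topology), the script would be called like this:
# restart two instances in the prod environment
./restart_tc_mco.sh env=prod tc=tc_app1,tc_app2
and the matching lines in /apps/topo/deploy_topology_prod.dat would look like tc_app1:node01.example.com, with the node name after the colon being what the inner loop extracts.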
Once we know the TC instance name and the node, we can construct the mco command and run it as the peadmin user, as above. The echo that generates the mco command works by stacking multiple string outputs one after the other and pushing them into a file, which is then executed as peadmin using su - (assuming root runs the script).
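As a rough sketch of that pattern (the mco puppet resource call and the tcruntime path are assumptions, not the exact command used here):
# build a one-off script asking mcollective's puppet agent to run an exec on the target node
echo "mco puppet resource exec 'cd /apps/tomcat/${TC_NAME}/bin && ./tcruntime-ctl.sh restart' -I ${NODE}" > /tmp/restart_${TC_NAME}.sh
# execute it as the peadmin user, which owns the mcollective client configuration
su - peadmin -c "bash /tmp/restart_${TC_NAME}.sh"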
This will allow the mco command to use the exec resource; any other resource type that needs to be enabled has to be listed in the same manner, comma separated.
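As a guess at what that configuration looks like (the exact file location and key name depend on the mcollective/PE version, so treat this as an assumption rather than the author's setting):
# in the mcollective puppet agent's puppet.cfg: allow exec alongside the default types
resource_allowed_types = exec, service, package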
Once this change is done on the node running the puppet agent, we need to restart the pe-mcollective service on that node. A good approach is to handle this as a puppet module that manages the puppet.cfg and notifies the pe-mcollective service if the file changes.
To start using the commands from the Puppet master, log in as the peadmin puppet user and run them like this:
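For example (an illustration only; mco ping simply verifies which nodes mcollective can reach):
# become the peadmin user on the puppet master, then check connectivity to the nodes
su - peadmin
mco ping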