Get Riak to start with Chef

I need help getting Riak to work with Chef.
Currently, every time I chef an Amazon box with Riak 1.4.8 using the default Basho riak cookbook, I have to manually SSH into the machine, kill -9 the beam.smp process, then rm -rf /var/lib/riak/ring; only then can I run sudo riak start and it will work.
Prior to that I get:
Node 'riak@' not responding to pings.
I have even created a shell script:
#!/bin/bash
# Generated by Chef for <%= @node[:fqdn] %>
# <%= @node[:ec2][:local_ipv4] %>
# This script should be run by root.
riak stop
riakPid="/var/run/riak/riak.pid"
if [ -e "$riakPid" ]; then
  kill -9 "$(< "$riakPid")"
fi
rm -f /var/run/riak/*
rm -f /var/lib/riak/ring/*
riak start
And Chef says:
bash[/etc/riak/clearOldRiakInfo.sh] ran successfully
For the above script.
If I manually run that script, everything works fine. Why is this not cheffing properly?
UPDATE:
This has been solved by creating a script to delete the ring directory when the machine gets cheffed.
This would only happen when I created a new machine from scratch, because the fqdn would only get set correctly after Riak had already started and created the ring. If I manually went on the box and deleted the ring, it would re-chef perfectly fine. So I had to create the script so that the very first Chef run on the machine would clean out the ring info.

Given the error message you provided, Riak is not starting because the Erlang node name is not being generated correctly. The Erlang node name is configured in vm.args and is produced from the node['riak']['args']['-name'] attribute.
The default for node['riak']['args']['-name'] is riak@#{node['fqdn']}. Please check the value Ohai is reporting for node['fqdn']. Alternatively, if you are overriding this attribute somewhere else, ensure it produces a valid value for -name.
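For a quick check on the node itself, something like the following should show both values (a rough sketch; the vm.args path assumes the default Riak 1.4 package layout):
# What Ohai reports for the fqdn (used to build the default -name):
ohai fqdn
# The node name that was actually rendered into the Riak config:
grep '^-name' /etc/riak/vm.args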
A more detailed description of -name within vm.args can be found here.

Related

Run a shell script on startup (not login) on Ubuntu 14.04

I have a build server and I'm using the Azure Build Agent script. It's a shell script that will run continuously while the server is up. The problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local, and the agent is not being run. There is nothing concerning the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
I gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
For /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. I also used an absolute path, but no change.
FINAL EDIT: As @ewrammer suggested, I used cron and it worked. crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong if you are not posting what you have done, but why not add it as a cron job with @reboot as the pattern? Then cron will run the script every time the computer starts.
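For example (using the same path as in the question):
crontab -e
# add this line:
@reboot /home/user/agent/run.sh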
Just in case, using a supervisor could be a good idea. In Ubuntu 14 you don't have systemd, but you can choose from other options: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
  file: /var/log/command.log
This will start your script/command on every boot, and it also ensures that your script/app is always up and running.

How to change DCOS attributes without restarting the slave?

I am facing a problem adding/changing attributes of the slave machines in the DCOS environment.
After changing the attributes in
vi /var/lib/dcos/mesos-slave-common
MESOS_ATTRIBUTES=TYPE:DB;DB_TYPE:MONGO;
the file does not immediately get picked up by the cluster.
I have to run the following commands
systemctl stop dcos-mesos-slave
rm -f /var/lib/mesos/slave/meta/slaves/latest
systemctl start dcos-mesos-slave
This essentially means I have to restart the service on the slave, and the slave is then down for at least 1 hour.
Is there any other way to achieve this?
As a variant, we are using a hack: we create the /var/lib/dcos/mesos-slave-common file and "freeze" it by changing its access rights, like:
echo "MESOS_ATTRIBUTES=TYPE:DB;DB_TYPE:MONGO;" | sudo tee /var/lib/dcos/mesos-slave-common
sudo chmod -w /var/lib/dcos/mesos-slave-common
# And after that you can run the node installation. Ugly, but it works :)
sudo dcos_install.sh slave

Why can't I execute systemctl commands as superuser?

I wrote a script to download and install Kubernetes on an Ubuntu machine.
The last part of the script would be to start the kubelet service.
echo "Initializing the master node"
kubeadm reset
systemctl start kubelet.service
kubeadm init
I am forcing the user to run the script as the root user. However, when the script reaches the systemctl command, it is not able to execute it. I also tried to execute the command manually as the root user and was not able to do so; however, I am able to execute it as a regular user.
Does anyone know why? Is there a workaround?
A possible workaround is to start the service as a regular user, even though the script runs as root. First, you need to find out who the "original" user is:
originalUser="$(logname 2>/dev/null)"
and then call the service as this user:
su - "$originalUser" -c "systemctl start kubelet.service"
Maybe that specific service depends on being run by a user who is not root (some programs test for that).

Script to clone/snapshot Docker Containers including their Data?

I would like to clone a dockerized application including all its data, which uses three containers in this example: 1) a web application container such as a CMS, 2) a database container and 3) a data-volume container (using docker volumes).
With docker-compose, I can easily create identical instances of these containers with just the initial data. But what if I want to clone a set of running containers on the same server, including all their accumulated data, in a similar way as I would clone a KVM guest? With KVM I would suspend or shut down the VM, clone it with something like virt-clone, and then start the cloned guest, which has all the same data as the original.
One use case would be to create a clone/snapshot of a running development web-server before making major changes or before installing new versions of plugins.
With Docker, this does not seem to be so straightforward, as data is not automatically copied together with its container. Ideally I would like to do something simple like docker-compose clone and end up with a second set of containers identical to the first, including all their data. Neither Docker nor docker-compose provides a clone command (as of version 1.8), thus I would need to consider various approaches, like backing up & restoring the data/database or using a third party tool like Flocker.
Related to this is the question on how to do something similar to KVM snapshots of a dockerized app, with the ability to easily return to a previous state. Preferably the cloning, snapshotting and reverting should be possible with minimal downtime.
What would be the preferred Docker way of accomplishing these things?
Edit: Based on the first answer, I will make my question a little more specific in order to hopefully arrive at programmatic steps to be able to do something like docker-compose-clone and docker-compose-snapshot using a bash or python script. Cloning the content of the docker volumes seems to be the key to this, as the containers themselves are basically cloned each time I run docker-compose on the same yaml file.
Generally my full-clone script would need to
duplicate the directory containing the docker-compose file
temporarily stop the containers
create (but not necessarily run) the second set of containers
determine the data-volumes to be duplicated
backup these data-volumes
restore the data-volumes into the cloned data container
start the second set of containers
Would this be the correct way to go about it, and how should I implement it? I'm especially not sure how to do step 4 (determine the data-volumes to be duplicated) in a script, as the docker volume ls command will only be available in Docker 1.9.
How could I do something similar to KVM snapshots using this approach? (possibly using COW filesystem features from ZFS, which my Docker install is already using).
With docker you would keep all of your state in volumes. Your containers can be recreated from images as long as they re-use the same volumes (either from the host or a data-volume container).
I'm not aware of an easy way to export volumes from a data-volume container. I know that the docker 1.9 release is going to be adding some top-level apis for interacting with volumes, but I'm not sure if export will be available immediately.
If you're using a host volume, you could manage the state externally from docker.
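If you do need to enumerate the mount points of a data-volume container before the 1.9 volume APIs arrive, docker inspect already exposes them; a rough sketch (assuming the Mounts field available since Docker 1.8):
# List the mount destinations of every *DATA* container in the project
for c in $(docker-compose ps | awk '/DATA/ {print $1}'); do
  echo "== $c =="
  docker inspect -f '{{ range .Mounts }}{{ .Destination }}{{ "\n" }}{{ end }}' "$c"
done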
Currently, I'm using the following script to clone a dockerized CMS web-application Concrete5.7, based on the approach outlined above. It creates a second set of identical containers using docker-compose, then it backs up just the data from the data volumes, and restores it to the data containers in the second set.
This could serve as an example for developing a more generalised script:
#!/bin/bash
set -e
# This script will clone a set of containers including all its data
# the docker-compose.yml is in the PROJ_ORIG directory
# - do not use capital letters or underscores for clone suffix,
# as docker-compose will modify or remove these
PROJ_ORIG="c5app"
PROJ_CLONE="${PROJ_ORIG}003"
# 1. duplicate the directory containing the docker-compose file
cd /opt/docker/compose/concrete5.7/
cp -Rv ${PROJ_ORIG}/ ${PROJ_CLONE}/
# 2. temporarily stop the containers
cd ${PROJ_ORIG}
docker-compose stop
# 3. create, run and stop the second set of containers
# (docker-compose does not have a create command)
cd ../${PROJ_CLONE}
docker-compose up -d
docker-compose stop
# 4. determine the data-volumes to be duplicated
# a) examine which containers are designated data containers
# b) then use docker inspect to determine the relevant directories
# c) store destination directories & process them for backup and clone
#
# In this application we use two data containers
# (here we used DATA as part of the name):
# $ docker-compose ps | grep DATA
# c5app_DB-DATA_1 /true Exit 0
# c5app_WEB-DATA_1 /true Exit 0
#
# $ docker inspect ${PROJ_ORIG}_WEB-DATA_1 | grep Destination
# "Destination": "/var/www/html",
# "Destination": "/etc/apache2",
#
# $ docker inspect ${PROJ_ORIG}_DB-DATA_1 | grep Destination
# "Destination": "/var/lib/mysql",
# these still need to be determined manually from examining
# the docker-compose.yml or using the commands in 4.
DATA_SUF1="_WEB-DATA_1"
VOL1_1="/etc/apache2"
VOL1_2="/var/www/html"
DATA_SUF2="_DB-DATA_1"
VOL2_1="/var/lib/mysql"
# 5. Backup Data:
docker run --rm --volumes-from ${PROJ_ORIG}${DATA_SUF1} -v ${PWD}:/clone debian tar -cpzf /clone/clone${DATA_SUF1}.tar.gz ${VOL1_1} ${VOL1_2}
docker run --rm --volumes-from ${PROJ_ORIG}${DATA_SUF2} -v ${PWD}:/clone debian tar -cpzf /clone/clone${DATA_SUF2}.tar.gz ${VOL2_1}
# 6. Clone Data:
# existing files in volumes need to be deleted before restoring,
# as the installation may have created additional files during initial run,
# which do not get overwritten during restore
docker run --rm --volumes-from ${PROJ_CLONE}${DATA_SUF1} -v ${PWD}:/clone debian bash -c "rm -rf ${VOL1_1}/* ${VOL1_2}/* && tar -xpf /clone/clone${DATA_SUF1}.tar.gz"
docker run --rm --volumes-from ${PROJ_CLONE}${DATA_SUF2} -v ${PWD}:/clone debian bash -c "rm -rf ${VOL2_1}/* && tar -xpf /clone/clone${DATA_SUF2}.tar.gz"
# 7. Start Cloned Containers:
docker-compose start
# 8. Remove tar archives
rm -v clone${DATA_SUF1}.tar.gz
rm -v clone${DATA_SUF2}.tar.gz
It's been tested and works, but still has the following limitations:
the data-volumes to be duplicated need to be determined manually, and
the script needs to be modified depending on the number of data-containers and data-volumes
there is no snapshot/restore capability
I welcome any suggestions for improvements (especially for step 4). Or, if someone comes up with a different, better approach, I would accept that as an answer instead.
The application used in this example, together with the docker-compose.yml file can be found here.
On Windows, there is a port of Docker's open source container project available from Windocks that does what you need. There are two options:
Smaller databases are copied into containers via an Add database command specified while building the image. After that, every container built from that image receives the database automatically.
For large databases, there is a cloning functionality. The databases are cloned during the creation of containers, and the clones are done in seconds even for terabyte-size DBs. Deleting a container also removes the clone automatically. Right now it's only available for SQL Server, though.
See here for more details on the database adding and cloning.

Monit + RVM + Thin on OSX / Linux

After trying for hours (and also trying God and Bluepill) I decided to ask my question here because I am completely clueless how to solve this issue.
I have a Rails app. I want to use Thin as my app server. I want to use Monit to monitor my Thin instances. I use RVM to manage my Ruby versions as my local user.
I have the following monit file set up that should presumably do what I want it to do, but doesn't:
check process thin-81
with pidfile /Users/Michael/Desktop/myapp/tmp/pids/thin.81.pid
start program = "/Users/Michael/.rvm/gems/ruby-1.9.2-p180/bin/thin start -c /Users/Michael/Desktop/myapp -e production -p 81 -d -P tmp/pids/thin.81.pid"
stop program = "/Users/Michael/.rvm/gems/ruby-1.9.2-p180/bin/thin stop -c /Users/Michael/Desktop/myapp -P tmp/pids/thin.81.pid"
if totalmem is greater than 150.0 MB for 2 cycles then restart
If I simply copy/paste the start program into the command line (outside of Monit), it works. The same goes for the stop program to afterwards stop the Thin instance. Running it via Monit, however, does not seem to work.
Running it in -v verbose mode yields the following:
monit: pidfile '/Users/Michael/Desktop/myapp/tmp/pids/thin.81.pid' does not exist
Which leads me to believe that Thin never initializes. Does Monit run as root or something? Because if it does, then it obviously won't have the correct gems installed, since I'm using RVM and not the "system" Ruby. I am currently on OSX (but will deploy to Linux eventually) - does anyone know what the cause of this might be? And if Monit is run as root, how could I make it use RVM regardless? Or could I tell Monit to execute the start/stop programs as Michael:staff (I assume that's what it would be on OSX)?
Any help is much appreciated!
monit clears out the environment and also doesn't run a shell for your command (let alone an interactive one). I find I have to do something like:
/usr/bin/bash -c 'export rvm_path=/home/foo/.rvm; . $rvm_path/scripts/rvm; cd my_ruby_app_path; $rvm_path/bin/rvm rvmrc load; ./my_ruby_app'
as the monit start command.
another option which I found in the RVM google group is as follows:
start program = "/bin/su - myuser -c '/path/to/myscript.rb start' "
su - user runs the user's shell as a login shell, so if the user's shell is bash, it will cause ~/.bash_profile to be run, so the environment variables should be the same as just after that user logged in.
We need the full path for su; otherwise, monitrc would not be able to find the su executable.
A better way would be to use an RVM wrapper to create a custom executable for thin. It will set up the correct environment variables to use the right ruby and gems, and then launch thin. Read more about using it with god here: https://rvm.io/integration/god/. It should work the same with monit.
To create the wrapper:
rvm wrapper ruby@gemset bootup thin
Then change start program and stop program to use the executable you just created.
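For example, the start program and stop program lines from the question would then point at the wrapper (assuming rvm wrapper dropped it into ~/.rvm/bin as bootup_thin):
start program = "/Users/Michael/.rvm/bin/bootup_thin start -c /Users/Michael/Desktop/myapp -e production -p 81 -d -P tmp/pids/thin.81.pid"
stop program = "/Users/Michael/.rvm/bin/bootup_thin stop -c /Users/Michael/Desktop/myapp -P tmp/pids/thin.81.pid"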
