I have a server which is running Unicorn and nginx. I've just done a git pull to get the most recent updates to my own code, and now I need to restart it.
In setting it up, I followed this guide, and did these steps:
Start Unicorn
cd /var/www/my_server
unicorn -c unicorn.rb -D
Restart nginx
service nginx restart
I now need to know how to restart it. Ideally, the process should be quick, so that my server has little or no downtime when I do this in the future.
EDIT: I tried a few other things as suggested elsewhere, such as killall ruby, and rebooting my server. Now I'm at a point where I've done the above, it doesn't give me any errors, but when I try and load a page, it doesn't respond, and likely eventually times out (though I didn't leave it that long). If I stop nginx, it says "connection refused", so it's obvious that nginx is working, but for some reason it's not able to connect to Unicorn.
EDIT: On a whim, I typed in just unicorn and it seems to be having an issue with my project - missing gems. Makes sense. So that first edit is no longer an issue, I'm still interested in the most elegant way of restarting it.
You can try sending a HUP signal to the master process:
kill -HUP <processID>
HUP - reloads config file and gracefully restarts all workers.
If the "preload_app" directive is false (the default), then workers
will also pick up any application code changes when restarted. If
"preload_app" is true, then application code changes will have no
effect; USR2 + QUIT (see below) must be used to load newer code in
this case. When reloading the application, +Gem.refresh+ will
be called so updated code for your application can pick up newly
installed RubyGems. It is not recommended that you uninstall
libraries your application depends on while Unicorn is running,
as respawned workers may enter a spawn loop when they fail to
load an uninstalled dependency.
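If you configured a pid file in unicorn.rb, you can read the master's process id from there rather than looking it up by hand. Here is a stand-in demonstration of the pid-file + signal pattern, using a dummy process that traps HUP (the unicorn pid path mentioned in the comment is an assumption, not from the question):

```shell
#!/bin/sh
# Dummy "master" that traps HUP, standing in for unicorn. With unicorn, the
# pid file path comes from the `pid` setting in unicorn.rb, e.g.
# /var/www/my_server/tmp/pids/unicorn.pid (adjust to your setup).
pidfile=hup-demo.pid

sh -c 'trap "echo reloaded; exit 0" HUP; while :; do sleep 1; done' &
echo $! > "$pidfile"

sleep 1                           # give the child time to install its trap
kill -HUP "$(cat "$pidfile")"     # same shape of command you would use for unicorn
wait
rm -f "$pidfile"
```

The dummy process prints "reloaded" and exits when the signal arrives; a real unicorn master instead reloads its configuration and respawns workers.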
If you want to read more about unicorn's signals, you can do so here.
I would recommend using the init script they provide.
One example init script is distributed with unicorn:
http://unicorn.bogomips.org/examples/init.sh
because it includes an upgrade action that handles exactly your use case: restarting the app with zero downtime.
...
upgrade)
    if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
    then
        n=$TIMEOUT
        while test -s $old_pid && test $n -ge 0
        do
            printf '.' && sleep 1 && n=$(( $n - 1 ))
        done
        echo

        if test $n -lt 0 && test -s $old_pid
        then
            echo >&2 "$old_pid still exists after $TIMEOUT seconds"
            exit 1
        fi
        exit 0
    fi
...
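The excerpt relies on sig and oldsig helpers defined earlier in that same example init.sh; they are reconstructed here for context (the PID path shown is illustrative, set it to your own unicorn pid file):

```shell
#!/bin/sh
# Path to the unicorn master pid file (illustrative; adjust to your app)
PID=/var/www/my_server/tmp/pids/unicorn.pid
old_pid="$PID.oldbin"

# Send signal $1 to the current master, if its pid file is non-empty
sig () {
    test -s "$PID" && kill -$1 "$(cat "$PID")"
}

# Send signal $1 to the old master left behind by a USR2 upgrade
oldsig () {
    test -s "$old_pid" && kill -$1 "$(cat "$old_pid")"
}
```

So `sig 0` is just a liveness check (signal 0 tests whether the process exists), and `oldsig QUIT` gracefully retires the pre-upgrade master.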
On Ubuntu 14.04 try
sudo service unicorn restart
If you are using capistrano
add this in deploy.rb
desc "Zero-downtime restart of Unicorn"
task :restart, :except => { :no_release => true } do
  # unicorn_pid should hold the path to your unicorn pid file,
  # e.g. "#{shared_path}/pids/unicorn.pid" (adjust to your setup)
  run "kill -s USR2 $(cat #{unicorn_pid})"
end
Related
I'm trying to write a bash script.
The script should check whether the MC server is running. If it has crashed or stopped, the script will start the server again automatically.
I'll use crontab to run the script every minute. I think I could even run it every second without stressing the CPU too much. I would also like to know when the server was restarted, so I'm going to print the date to the "RestartLog" file.
This is what I have so far:
#!/bin/sh
ps auxw | grep start.sh | grep -v grep > /dev/null
if [ $? != 0 ]
then
cd /home/minecraft/minecraft/ && ./start.sh && echo "Server restarted on: $(date)" >> /home/minecraft/minecraft/RestartLog.txt > /dev/null
fi
I've just started learning Bash and I'm not sure if this is the right way to do it.
Using cron is possible, but there are other (better) solutions (monit, supervisord, etc.). That is not the question, though; you asked for "the right way". The right way is difficult to define, but understanding the limits and problems in your code may help you.
Execution with normal cron happens at most once per minute. That means your Minecraft server may be down for up to 59 seconds before it is restarted.
#!/bin/sh
You should have the #! at the very beginning of the line. I don't know if this is a cut/paste problem, but it is rather important. Also, you might want to use #!/bin/bash instead of #!/bin/sh to actually use bash.
ps auxw | grep start.sh | grep -v grep > /dev/null
Some may suggest using ps -ef instead, but that is a question of taste. You may even use ps -ef | grep [s]tart.sh to avoid the second grep. The main problem with this line, however, is that you are parsing the process list for a fairly generic start.sh. That may be OK if you have a dedicated server, but if there are other users on the machine, you run the risk that someone else runs a start.sh for something completely different.
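If pgrep is available, a more robust check can be sketched like this (the path comes from the question; the [s] bracket trick mirrors the grep -v grep workaround, so the check never matches itself):

```shell
#!/bin/sh
# pgrep -f matches against the full command line, so anchoring on the full
# path avoids matching some other user's start.sh.
if pgrep -f '/home/minecraft/minecraft/[s]tart\.sh' > /dev/null; then
    echo "already running"
else
    echo "not running"
fi
```

Anchoring on the absolute path is the key difference from grepping for a bare "start.sh".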
if [ $? != 0 ]
then
There was already a comment about the use of $? and clean code.
cd /home/minecraft/minecraft/ && ./start.sh && echo "Server restarted on: $(date)" >> /home/minecraft/minecraft/RestartLog.txt > /dev/null
It is a good idea to keep a log of the restarts. In this line, you make the execution of ./start.sh dependent on the cd succeeding. Also, the echo only gets executed after ./start.sh exits.
So that leaves me with a question: does start.sh keep running as long as the server runs (in that case the ps test is OK, but the && echo makes no sense), or does start.sh exit while leaving the Minecraft server in the background (in that case the ps|grep won't work correctly, but it makes sense to write the log record only when start.sh exits correctly)?
fi
(no remarks for the fi)
If start.sh blocks until the server exits/crashes, you'd be better off simply restarting it in an infinite loop without involving cron. Simply type this in a console (or put it into another script):
#!/bin/bash
cd /home/minecraft/minecraft/
while sleep 3; do
echo "$(date) server (re)start" >> restart.log
./start.sh # blocks until server crashes
done
But if it doesn't block (i.e. if start.sh starts the server and then returns, but the server keeps running), you would need to implement a different check to verify if the server is actually still running, other than ps|grep start.sh
PS: To kill the infinite loop you have to press Ctrl+C twice: once to stop ./start.sh and once to break out of the loop during the subsequent sleep.
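If start.sh does return immediately, one alternative to ps|grep is a pid file plus kill -0. A self-contained sketch with a stand-in process (this assumes start.sh would be changed to write the real server's pid the same way):

```shell
#!/bin/sh
# Start a stand-in "server", record its pid, then test liveness with kill -0.
sleep 30 &
echo $! > server.pid

# kill -0 sends no signal; it only checks whether the process exists
if [ -s server.pid ] && kill -0 "$(cat server.pid)" 2>/dev/null; then
    echo "server still running"
else
    echo "server is down"
fi

kill "$(cat server.pid)" 2>/dev/null    # clean up the demo process
rm -f server.pid
```

Because the check targets one specific pid, it cannot be fooled by other users' similarly named scripts.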
You can use monit for this task; see the documentation. It is available in most Linux distributions and has a straightforward config. You can find some examples in this post.
For your app it will look something like
check process minecraftserver
    matching "start.sh"
    start program = "/home/minecraft/minecraft/start.sh"
    stop program = "/home/minecraft/minecraft/stop.sh"
I wrote this answer because sometimes the most efficient solution already exists and you don't have to code anything. Also, follow the suggestion of William Pursell and use the init system of your OS (systemd, upstart, system-v, etc.) to host your scripts.
Find more:
Shell Script For Process Monitoring
I'm creating a pallet crate for Elasticsearch. I was stuck on the service not starting; however, after looking at the logs, it seems that it isn't really anything to do with pallet. I am using the Elasticsearch apt package for 1.0, which includes an init script. If I run sudo service elasticsearch start then ES starts with no problems. If pallet does this for me, it records standard out as having started it successfully:
start elasticsearch
* Starting Elasticsearch Server
...done.
However, it is not actually started:
sudo service elasticsearch status
* elasticsearch is not running
I messed around with the init script and found that if I add a sleep 1 after starting the daemon, then it works correctly with pallet.
start-stop-daemon --start -b --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
# this sleep will allow it to work
sleep 1
log_end_msg $?
I don't understand what is going on.
I've seen issues like this before, too. It generally comes down to expecting something to have finished before the script finishes, which may not always happen with services since they fork off background tasks that may still get killed when the ssh connection is terminated.
For these kinds of things you should use Pallet's built-in code for running things under supervision. This also has the advantage of making it easy to switch from plain init.d to runit or daemontools later, which is especially useful for Elasticsearch because it is a JVM process, and nearly any JVM will eventually crash if you let it run long enough.
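Independent of Pallet, the fixed sleep 1 in the init script can be replaced with a small polling helper, so the script only returns once the check actually passes. A sketch (the helper name and the example check are illustrative):

```shell
#!/bin/sh
# wait_for TIMEOUT CMD [ARGS...]: run CMD every second until it succeeds
# or TIMEOUT seconds elapse; returns 1 on timeout.
wait_for() {
    tries=$1; shift
    until "$@"; do
        tries=$((tries - 1))
        if [ "$tries" -le 0 ]; then
            return 1
        fi
        sleep 1
    done
    return 0
}

# In the init script one might then replace the bare sleep with, e.g.:
#   wait_for 10 sh -c 'kill -0 "$(cat "$PID_FILE")" 2>/dev/null'
```

This bounds the wait instead of hoping one second is always enough.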
I am trying to make a surveillance camera turn on after a specified amount of time after I leave.
I imagined it to be something like this:
sleep 60 && sudo service motion start &> /dev/null &
but the background task seems to get killed when I log out. Is there a way to make it stick around even after I leave, and also to keep the root permissions?
EDIT:
Okay I ended up making a script that does it instead of using a single command, it looks about like this:
#!/bin/bash
if (( $UID > 0 )); then
    echo "Only root can run this script"
    exit 1
fi

if [ -z "$1" ]; then
    TIMEOUT=30
else
    TIMEOUT=$1
fi

sleep "$TIMEOUT" && service motion start &>/dev/null &
exit 0
use nohup:
sudo nohup service motion start &
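To see why nohup helps: it detaches the job from the terminal, so the SIGHUP sent at logout doesn't kill it. A quick stand-in check without sudo (the echoed message and log file name are illustrative, not from the question):

```shell
#!/bin/sh
# Stand-in for `service motion start`, so the behaviour is easy to observe.
# nohup makes the job immune to the hangup signal; output goes to a log file.
nohup sh -c 'sleep 1; echo "motion would start now"' > delayed.log 2>&1 &
wait $!
cat delayed.log
```

In the real command you would keep sudo in front, as shown above.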
In addition to nohup, you can also use screen:
screen
sudo service motion start
Then press Ctrl+a followed by d to detach from the screen session.
To reattach, you can simply use:
screen -x
For more information:
man screen
There are two issues here:
Executing something after you log out. (I guess you are planning to put it in ~/.bash_logout, which is a good idea.)
Executing it under sudo, which may be difficult.
Below are the common approaches:
nohup sudo sh -c 'sleep 60; service motion start' &
(Note: sudo cannot run a parenthesised ( ... ) subshell directly, hence the sh -c.) This has issues if sudo authentication is not complete by this point, since a background process cannot prompt you for the sudo password. It is better to run a dumb command such as sudo true or sudo echo -n first, so the credentials are cached before the actual command runs.
You may be tempted to call sudo only for the service command instead of the entire command. However, there is a risk that, if the sleep interval is long enough, the sudo token will have expired by then.
echo "service motion start" | sudo at now + 1 minute
The problem with this command is that the granularity is minute-level, not second-level. Otherwise, this looks clean to me.
Based on your requirement, you can choose either of them, or any other mechanism mentioned in the other answers.
I'm trying to set up a script to execute tests for my node.js program, which uses MongoDB. The idea is that I want to have a script I can run that:
Starts the MongoDB process, forked as a daemon
Pre populates the database with some test data
Starts my node server with forever, so it runs as a daemon
Run my tests
Drop the test data from the database
I have a crude script that performs all these steps. My problem is that MongoDB takes a variable amount of time to start up, which results in sleep calls in my script. Consequently it only works occasionally.
#!/bin/bash
# the directory of this script
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# launch mongodb.
$DIR/../../db/mongod --fork --logpath ./logs/mongodb.log --logappend --dbpath ./testdb/ --quiet
# takes a bit of time for the database to get set up,
# and since we make it a daemon process we cant run the tests immediately
sleep 1
# avoid EADDRINUSE errors because existing node servers are up.
killall node &> /dev/null
# start up our node server using a test database.
forever start $DIR/../main.js --dbname=testdb --logpath=test/logs/testlog.log
# takes a bit of time for node to get set up,
# and since we make it a daemon process we cant run the tests immediately
sleep 1
# run any database setup code (inject data for testing)
$DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/setup.js --quiet
# actually run the tests
node $DIR/tests.js
# kill the servers (this could be a little less heavy handed...)
killall node &> /dev/null
killall forever &> /dev/null
# finally tear down the database (drop anything we've added to the test db)
$DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/teardown.js --quiet
# and then shut mongodb down
kill -2 `ps ax | grep mongod | grep -v grep | awk '{print $1}'`
What is the best way to go about what I'm trying to do? Am I going down a rabbit hole here, or am I missing something in the MongoDB docs?
Ask yourself what the purpose of your testing is: is it to test the actual DB connection in your code, or to focus on whether your code handles and processes data from the DB correctly?
Assuming your code is in JavaScript, if you strictly want to test that your code logic handles data correctly, and you are using a MongoDB wrapper class (e.g. Mongoose), one thing you may want to add to your workflow is writing and running spec tests with the Jasmine test suite.
This would involve writing test code and mocking up test data as JavaScript objects. Yes, that means no actual data from the DB itself will be involved in your spec tests. After all, your primary purpose is to test whether your code is logically working, right? It's your project; only you know the answer :)
If your main problem is finding out when mongod has actually started, why don't you write a script that tells you?
For example, you can write an until loop that checks whether the client can connect properly to the mongo server, based on the return value. Instead of sleep 1, use something like this:
isMongoRunning=-1
until [[ ${isMongoRunning} -eq 0 ]]; do
    sleep 1
    $DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/empty.js --quiet
    isMongoRunning=${?}
done
This loop will end only after mongodb has started.
Also, if you would like to improve the stopping of mongodb, add --pidfilepath to your mongod invocation so you can easily find which process to terminate.
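For example, with --pidfilepath in place, the final kill line can be replaced by a pid-file read instead of the ps|grep|awk pipeline. A stand-in demo (the file name is illustrative; a real mongod would be sent SIGINT with kill -2 for a clean shutdown):

```shell
#!/bin/sh
# Stand-in "mongod": a background process whose pid we record,
# as mongod would do itself when launched with --pidfilepath ./mongod.pid.
sleep 30 &
echo $! > mongod.pid

# Shut it down by pid file; for mongod itself you would use kill -2 here.
kill "$(cat mongod.pid)"
wait "$(cat mongod.pid)" 2>/dev/null
echo "stopped"
rm -f mongod.pid
```

This avoids accidentally matching some other mongod (or an unrelated process) in the ps output.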
I'm a little confused about my deploy strategy here, when deploying under what circumstances would I want to send a reload signal to unicorn? For example in my case it would be like:
sudo kill -s USR2 `cat /home/deploy/apps/my_app/current/tmp/pids/unicorn.pid`
I've been deploying my apps by killing that pid, then starting unicorn again via something like:
bundle exec unicorn -c config/unicorn/production.rb -E production -D
I'm just wondering why I'd want to use reload? Can I gain any performance for my deployment by doing so?
When you kill unicorn you cause downtime, until unicorn can start back up. When you use the USR2 signal, unicorn starts new workers first, then once they are running, it kills the old workers. It's basically all about removing the need to "turn off" unicorn.
Note, the assumption is that you have the documented before_fork hook in your unicorn configuration, in order to handle the killing of the old workers, should an ".oldbin" file be found, containing the PID of the old unicorn process:
before_fork do |server, worker|
  # a .oldbin file exists if unicorn was gracefully restarted with a USR2 signal
  # we should terminate the old process now that we're up and running
  old_pid = "#{pids_dir}/unicorn.pid.oldbin"
  if File.exists?(old_pid)
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end