We actually use God in our development environment, as well as in production, simply because it makes managing unicorn/resque etc simpler.
I've just scaled down our default unicorn config to a single worker in dev, as most of the time this is enough. However, I've added shell scripts wrapping the commands:
# Add an extra unicorn worker
kill -TTIN `cat /path/to/unicorn.pid`
# Remove a unicorn worker
kill -TTOU `cat /path/to/unicorn.pid`
Rather than starting/stopping/restarting unicorn with god but then adding and removing workers through ad-hoc shell scripts, is there a way to have God support a couple of custom commands, like:
god incr unicorn
god decr unicorn
I looked in the documentation and found nothing, but it feels like something it would probably 'unofficially' be able to do.
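One thing that might come close, if it works the way I expect, is god's generic signal command; I haven't verified this against the current version, but something like the following would cover my use case even without custom verbs:
god signal unicorn TTIN   # add a worker
god signal unicorn TTOU   # remove a worker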
I'm fairly new to Bash, Redis and Linux in general, and I'm having trouble creating a script. This is also my first question, so I hope it is not a duplicate.
So here's the problem: I'm building a simple application in Ruby for educational purposes, and the feature I'm trying to implement uses Redis and Sidekiq. What I want to do is create an executable script (I named it server) that starts the Redis server and then Sidekiq, and that also shuts Redis down after the user stops Sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When I run the fourth line, it starts the little Sidekiq "welcome page" and I can't do anything until I shut it down with Control + C. I assumed that after shutting it down that way, the script would continue with the next line, which is the redis-cli shutdown command.
But it does not. When I Control + C Sidekiq, it simply goes back to the command line.
Is there anyone familiar with these concepts who could help me? I want a script that also shuts down Redis after I'm done with Sidekiq.
Thanks!
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
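With Foreman you describe both processes in a Procfile and start them together with a single foreman start; a minimal sketch, reusing the worker path from the question:
redis: redis-server
sidekiq: bundle exec sidekiq -r ./a/sample/path/worker.rb
Interrupting foreman with Control + C stops all of the processes it started, so Redis and Sidekiq go down together.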
I've written a TCP Server in ruby running on port 2000 with event machine.
Right now, what I do is ssh to my server and run the command ruby lib/tcp_server.rb to turn on the server, but it shuts down when I log out.
I've tried nohup and using & but nothing seems to stick for the server for a long time.
So my question is: how do I deploy this server on port 2000 and keep it running, the way we deploy a Rails app behind nginx?
It's not a webserver, but a TCP server for a connected device, if that helps.
Thanks!
Solution 1: tmux or screen
This is the simplest approach: create a tmux or screen session, then start your server inside that session.
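For example, with tmux (the session name is arbitrary):
tmux new -s tcp_server        # open a new session
ruby lib/tcp_server.rb        # start the server inside it
# detach with Ctrl-b d; the server keeps running
tmux attach -t tcp_server     # re-attach later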
Solution 2: nohup
nohup ruby lib/tcp_server.rb > stdout.log 2> stderr.log &
You've already tried nohup and &, so I suppose you know how this works.
Solution 3: daemonize
You can detach from the shell and daemonize the process by forking
it twice, setting the session ID and changing the current working directory.
def daemonize
  exit if fork
  Process.setsid
  exit if fork
  Dir.chdir '/'
end
With this approach, you will have to redirect stdout and stderr to keep logs.
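For example (the log paths are just placeholders):
$stdout.reopen('/var/log/tcp_server.out', 'a')
$stderr.reopen('/var/log/tcp_server.err', 'a')
$stdout.sync = $stderr.sync = true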
Another way to daemonize is to use gems like daemons.
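Roughly, if I read its API correctly:
require 'daemons'
Daemons.daemonize   # detach the current process from the controlling terminal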
update:
To restart the process automatically after being killed, you need a process manager like god or pm2.
To start the process automatically after booting, you need to compose an init script, but what it looks like depends on your service management system and operating system. One of the most well-known is System V. If you are using Ubuntu, you might want to take a look at Upstart or systemd.
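For example, a minimal systemd unit might look like this (paths and names are placeholders):
# /etc/systemd/system/tcp_server.service
[Unit]
Description=EventMachine TCP server
After=network.target

[Service]
WorkingDirectory=/path/to/app
ExecStart=/usr/bin/env ruby lib/tcp_server.rb
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then enable and start it with systemctl enable tcp_server and systemctl start tcp_server.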
My Rails app on Heroku has a Procfile that starts up a single Sidekiq process on my dyno
worker: bundle exec sidekiq -C config/sidekiq.yml
Is it possible to start up multiple sidekiq processes on the same dyno?
My organization has a large dyno with a lot of memory that we're not using. Before downgrading the dyno, I was wondering if it was an option instead to make use of it by running multiple Sidekiq processes.
Sidekiq Enterprise's Multi-Process feature makes this trivial.
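If I remember the Enterprise docs correctly, the Procfile entry is roughly this, with SIDEKIQ_COUNT controlling the number of forked processes:
worker: env SIDEKIQ_COUNT=8 bundle exec sidekiqswarm -C config/sidekiq.yml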
One option would be to run Sidekiq with a supervisor like https://github.com/ochinchina/supervisord
Add the binary to your repository (e.g. bin/supervisord) and add a config file and a Procfile entry.
For a dyno with 8 cores, your configuration could look like this:
[program:sidekiq]
command = bundle exec sidekiq -e ${RACK_ENV:-development} -C config/sidekiq_large.yml
process_name = %(program_name)s_%(process_num)s
numprocs = 8
numprocs_start = 1
exitcodes = 0
stopsignal = TERM
stopwaitsecs = 40
autorestart = unexpected
stdout_logfile = /dev/stdout
stderr_logfile = /dev/stderr
Then in your Procfile:
worker_large: env [...] bin/supervisord -c sidekiq_worker_large.conf
Make sure that you tune your Sidekiq concurrency settings.
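For example, with 8 processes on one dyno you probably want a lower per-process concurrency in config/sidekiq_large.yml than you would give a single process (the exact value depends on your workload, and older Sidekiq versions spell the key :concurrency:):
concurrency: 5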
There is also an open-source, third-party gem that is meant to do this without having to pay for Sidekiq Enterprise. I have not used it and don't know how well it works, but it looks good.
https://github.com/vinted/sidekiq-pool
It doesn't have any recent commits or many GitHub stars, but it may just be working fine?
Here's another one, which warns that it's unfinished work in progress, but just for comparison:
https://github.com/kigster/sidekiq-cluster
I have a long-running command (sidekiq, if you must know) that depends on another long-running processes (redis-server, as you may have guessed from the previous parenthetical).
I'd like to write a Bash (well, okay, Zsh actually) alias to start redis-server in the background, then run sidekiq and, when I use ctrl-C to interrupt sidekiq, to kill the background Redis job. If it's relevant, I'm on a Mac and only need to support OS X.
So what I'm looking for is something like:
redis-server & ; sidekiq ; kill $!
Unfortunately, my interrupt of the sidekiq command also prevents the kill from occurring. Is there any way to do this?
Bonus points if this can be a one-liner alias and not a function. Double bonus points if I don't have to write to any files in advance (like turning on the daemonize flag in /usr/local/etc/redis.conf).
Maybe this:
#!/bin/zsh
redis-server &
redispid=$!
trap 'kill $redispid' INT
sidekiq
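If sidekiq can also exit on its own (not only via Ctrl-C), an EXIT trap should cover both cases; a sketch of the same idea:
#!/bin/zsh
redis-server &
redispid=$!
# kill redis both on Ctrl-C and on normal exit
trap 'kill $redispid 2>/dev/null' INT EXIT
sidekiq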
I'm writing a ruby bootstrapping script for a school project, and part of this bootstrapping process is to start a couple of background processes (which are written and function properly). What I'd like to do is something along the lines of:
`/path/to/daemon1 &`
`/path/to/daemon2 &`
`/path/to/daemon3 &`
However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8.
I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit.
So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)?
As long as you are working on a POSIX OS you can use fork and exec.
fork = Create a subprocess
exec = Replace current process with another process
You then need to tell Ruby that your main process is not interested in waiting on the created subprocesses, via Process.detach (otherwise they become zombies when they exit).
job1 = fork do
  exec "/path/to/daemon01"
end
Process.detach(job1)
...
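Applied to the three daemons from the question, that might look like:
# start each daemon in a child process and detach so the script can exit
%w[/path/to/daemon1 /path/to/daemon2 /path/to/daemon3].each do |cmd|
  pid = fork { exec cmd }
  Process.detach(pid)
end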
A better way to pseudo-daemonize:
`((/path/to/daemon1 &)&)`
This will drop the process into its own shell.
The best way to actually daemonize:
`service daemon1 start`
Make sure the server/user has permission to start the actual daemon. Check out the 'daemonize' tool for Linux to set up your daemon.