Using godrb with delayed_job pools - ruby

I want to start delayed_job with multiple pools (4): each pool takes a certain type of job, and the last one takes everything else (so quick tasks are prioritized and longer ones run in separate pools).
This is the command being run:
/bin/bash -c 'cd /usr/src/app; /usr/bin/env RAILS_ENV=development /usr/src/app/bin/delayed_job --pool=LongTask,MediumTask:2 --pool=QuickTask --pool=*:1 start'
I would like to monitor this with god, but how can I monitor forked processes with godrb? If possible, I would like to keep the process spawning inside delayed_job, instead of running four god instances, each with one delayed_job pool.
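A sketch of one way to set this up (assumptions: delayed_job daemonizes via the daemons gem and, with --pool, writes one pid file per worker under tmp/pids/ as delayed_job.0.pid, delayed_job.1.pid, and so on): a single God.watch can key off the first worker's pid file and restart the whole pool set if it disappears:

God.watch do |w|
  w.name     = "delayed_job_pools"
  w.dir      = "/usr/src/app"
  w.env      = { "RAILS_ENV" => "development" }
  w.start    = "/usr/src/app/bin/delayed_job --pool=LongTask,MediumTask:2 --pool=QuickTask --pool=*:1 start"
  w.stop     = "/usr/src/app/bin/delayed_job stop"
  # assumption: the pid file name and location depend on your daemons setup
  w.pid_file = "/usr/src/app/tmp/pids/delayed_job.0.pid"
  w.behavior(:clean_pid_file)

  w.start_if do |start|
    start.condition(:process_running) do |c|
      c.interval = 30.seconds
      c.running  = false
    end
  end
end

For per-worker supervision you could instead generate one God.watch per pid file in a loop inside the same config file: still a single god instance rather than four.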

Related

Make a script that starts and shuts down both redis and sidekiq

I'm fairly new to Bash, redis and linux in general, and I'm having trouble creating a script. This is also my first question; I hope it is not a duplicate.
Here's the problem: I'm creating a simple application in Ruby for educational purposes, but the feature I'm trying to implement uses redis and sidekiq. What I want is an executable script (I named it server) that starts the redis server and then starts sidekiq, but it should also shut down redis after the user stops sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When I run the fourth line, it starts the little Sidekiq "welcome page" and I can't do anything until I shut it down with Ctrl+C. I assumed that after stopping sidekiq that way, the script would continue with the next line I wrote, the redis-cli shutdown command.
But it does not. When I Ctrl+C sidekiq, it simply returns to the command line.
Is anyone familiar with these concepts who could help me? I want a script that also shuts down redis after I'm done with sidekiq.
Thanks!
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
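A sketch of what that looks like, using the worker path from the question: with a Procfile, foreman start runs both processes in the foreground, and a single Ctrl+C stops them together, so redis no longer needs --daemonize:

Procfile:
redis: redis-server
sidekiq: bundle exec sidekiq -r ./a/sample/path/worker.rb

Then run foreman start from the same directory.

Alternatively, staying with plain sh (a sketch; exact Ctrl+C behavior varies slightly between shells): trapping INT keeps the wrapper script alive while sidekiq performs its own graceful shutdown, after which redis can be stopped:

#!/usr/bin/env sh
set -e
redis-server --daemonize yes
trap 'echo "sidekiq stopped"' INT   # keep this script alive across Ctrl+C
bundle exec sidekiq -r ./a/sample/path/worker.rb || true
trap - INT
redis-cli shutdown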

How do we avoid Cron job interruption?

I am new to using cron jobs on Google Cloud. I was wondering if it is possible to launch a job on an instance and have it run continuously, without interruption, even after I shut down my local machine (laptop). Is it possible to have a job running without any SSH connection?
Cron jobs are a possibility, but they are not meant for your scenario; they are for running a command at a certain frequency over time.
A Bash builtin that better suits your needs is disown. First, run your process/script in the background (using &, or stopping it with ^Z and then restarting it with bg):
$ long_operation_command &
[1] 1156
Note that at this point the process is still attached to the session, and if the session is closed the process will be killed.
You can see that the process is attached to the session by checking the jobs running in the background:
$ jobs
[1]+ Running long_operation_command
Then run disown to detach the process from the session:
$ disown
You can confirm this by logging in again and checking the output of your script or command, or by verifying with top that the process is still running.
Also worth reading: the difference between nohup foo, foo & and foo & disown.
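As a minimal sketch combining the two (the command name is a placeholder), nohup handles the hangup signal and redirects output, while disown removes the job from the shell's job table:

$ nohup long_operation_command > output.log 2>&1 &
$ disown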
P.S.
The direct answer to your question is yes: cron jobs keep running even if you shut down your laptop or end the session.
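For completeness, a minimal crontab entry (paths are placeholders) that runs on the instance every five minutes regardless of any SSH session:

*/5 * * * * /path/to/long_operation_command >> /var/log/long_operation.log 2>&1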

What limits the performance of a worker?

I use Sidekiq for job processing. Using Foreman, I set up six processes in my Procfile:
redirects: bundle exec sidekiq -c 10 -q redirects
redirects2: bundle exec sidekiq -c 10 -q redirects
redirects3: bundle exec sidekiq -c 10 -q redirects
redirects4: bundle exec sidekiq -c 10 -q redirects
redirects5: bundle exec sidekiq -c 10 -q redirects
redirects6: bundle exec sidekiq -c 10 -q redirects
These processes handled 1600+ jobs per second (each a simple job that increments some hashes in Redis), with all 10 threads busy most of the time. I scaled my DigitalOcean droplet from 8 to 12 cores, and performance fell to ~400 jobs per second, with only 3-5 of the 10 threads busy in each process.
What I did to try to fix the issue:
Made the perform method empty
Used fewer/more processes
Used lower/higher concurrency
Split the queue into server-specific queues (three express.js clients on other servers put jobs into the queues)
Tried different hz values in redis.conf
Set somaxconn to 1024 (and tcp-backlog in redis.conf)
Turned off RDB saving and used only AOF
Flushed all Redis DBs (there are two databases for all that logic: one for sidekiq and one for the hashes in my workers)
Ran sidekiq from the terminal without Foreman (to rule out a Foreman issue)
None of the above helped. What could have caused the performance loss?

Start and stop shell scripts for multiple programs

Following problem:
3 programs:
one Java application which is started via an existing sh script
one node application
one grunt server
I want to write two shell scripts: the first should start all three programs, and the second should stop them. For the first script I simply call the start commands. But the second, which should be a standalone script (as the first is), needs to know all the process IDs in order to kill them. And even if I know those IDs, what about any subprocesses they started? I would only be killing the parent processes, wouldn't I?
What's the approach here?
Thanks in advance!
Try pkill -KILL -P <parentid>. This kills the processes whose parent has the given process ID.
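A sketch of the pid-file approach (all command names are placeholders): the start script records each top-level PID, and the stop script kills children before parents:

#!/usr/bin/env sh
# start.sh -- launch all three programs in the background and record their PIDs
./start-java-app.sh & echo $! > java.pid
node app.js & echo $! > node.pid
grunt serve & echo $! > grunt.pid

#!/usr/bin/env sh
# stop.sh -- terminate each program's children first, then the program itself
for f in java.pid node.pid grunt.pid; do
  [ -f "$f" ] || continue
  pid=$(cat "$f")
  pkill -TERM -P "$pid" 2>/dev/null   # kill the recorded process's children
  kill -TERM "$pid" 2>/dev/null       # then the recorded process itself
  rm -f "$f"
done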

How can I create a monit process for a Ruby program?

I have these rake tasks that will occasionally fail. I want to use monit to monitor them and to restart them if necessary.
I have read the other ruby/monit threads on StackOverflow. My case is different in that these programs require my Rails environment in order to work. That's why I have them as rake tasks now.
Here is one of the tasks I need to monitor, in its entirety:
task(process_updates: :environment) do
  `echo "#{Process.pid}" > #{Rails.root}/log/process_alerts.pid`
  `echo "#{Process.ppid}" > #{Rails.root}/log/process_alerts.ppid`
  SynchronizationService::process_alerts
end
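As a side note (not part of the original task), the same pid bookkeeping can be done in pure Ruby, avoiding the shell round-trips:

task(process_updates: :environment) do
  # equivalent to the echo calls above, without spawning a shell
  File.write(Rails.root.join("log", "process_alerts.pid"), Process.pid.to_s)
  File.write(Rails.root.join("log", "process_alerts.ppid"), Process.ppid.to_s)
  SynchronizationService::process_alerts
end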
My question is, do I leave this as a rake task, since SynchronizationService::process_alerts requires the Rails environment to work? Or is there some other wrapper I should invoke and then just run some *.rb file?
Monit can check for a running process via its pidfile. Since you write the pid when you run the task, you can create a monit config which should look something like this:
check process alerts with pidfile RAILSROOT/log/process_alerts.pid
start program = "cd PATH_TO_APP; rake YOURTASK" with timeout 120 seconds
alert your@mail.com on { nonexist, timeout }
Of course RAILSROOT, PATH_TO_APP, YOURTASK should correspond to your paths/rake task.
Monit will then check for a running process using the pidfile value and will start it with the start program command if it can't find one.
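One caveat worth adding (a common monit gotcha, not from the original answer): monit starts programs with a minimal environment and PATH, so absolute paths and an explicit shell are usually safer. A filled-in sketch with placeholder paths:

check process alerts with pidfile /path/to/app/log/process_alerts.pid
  start program = "/bin/bash -c 'cd /path/to/app && /usr/local/bin/bundle exec rake process_updates RAILS_ENV=production'" with timeout 120 seconds
  alert your@mail.com on { nonexist, timeout }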
