Multiple resque workers mode creating extra processes

I need to start 4 resque workers, so I used the following command:
bundle exec rake environment resque:workers RAILS_ENV=production COUNT=4 QUEUE=* VERBOSE=1 PIDFILE=tmp/pids/resque_worker.pid >> log/resque_worker_QUEUE.log
But when I went to the web interface, it actually showed 8 workers: there were two parent processes with 4 child processes each. The following is a tree view of the processes:
ruby /code_base/bundle/ruby/1.9.1/bin/rake environment resque:workers RAILS_ENV=production COUNT=4 QUEUE=* VERBOSE=1 PIDFILE=tmp/pids/resque_worker.pid
 \_ [ruby]
 \_ resque-1.15.0: Waiting for *
 |   \_ [ruby]
 \_ resque-1.15.0: Waiting for *
 |   \_ [ruby]
 \_ resque-1.15.0: Waiting for *
 |   \_ [ruby]
 \_ resque-1.15.0: Waiting for *
     \_ [ruby]
ruby /code_base/bundle/ruby/1.9.1/bin/rake environment resque:workers RAILS_ENV=production COUNT=4 QUEUE=* VERBOSE=1 PIDFILE=tmp/pids/resque_worker.pid
 \_ [ruby]
 \_ resque-1.15.0: Waiting for *
 |   \_ [ruby]
 \_ resque-1.15.0: Waiting for *
 |   \_ [ruby]
 \_ resque-1.15.0: Waiting for *
 |   \_ [ruby]
 \_ resque-1.15.0: Waiting for *
     \_ [ruby]
I couldn't figure out what is causing the extra processes to start.

You don't want to use the COUNT=n option in production, as it runs each worker in a thread instead of in a separate process, which is much less stable.
Official Resque docs:
Running Multiple Workers
At GitHub we use god to start and stop multiple workers. A sample god configuration file is included under examples/god. We recommend this method.
If you'd like to run multiple workers in development mode, you can do so using the resque:workers rake task:
$ COUNT=5 QUEUE=* rake resque:workers
This will spawn five Resque workers, each in its own process. Hitting ctrl-c should be sufficient to stop them all.
The example god monitoring/configuration file that ships with Resque for running multiple processes is under examples/god in the Resque repository, and there is a corresponding example for monit under examples/monit.
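If you'd rather start separate worker processes yourself instead of using COUNT, here is a minimal sketch (the per-worker pid/log file names are my own adaptation of the paths in the question):
for i in 1 2 3 4; do
  QUEUE='*' RAILS_ENV=production VERBOSE=1 PIDFILE="tmp/pids/resque_worker_$i.pid" \
    bundle exec rake environment resque:work >> "log/resque_worker_$i.log" 2>&1 &
done
Each iteration runs the singular resque:work task, so every worker gets its own process, pidfile, and log.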

Related

Process ID of nohup process continually updating - can't kill process

I am trying to kill a nohup process on an EC2 instance but so far have been unsuccessful. I am trying to grab the process ID (PID) and then use it with the kill command in the terminal, like so:
[ec2-user@ip-myip ~]$ ps -ef | grep nohup
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
The columns (I believe) are:
UID PID PPID C STIME TTY TIME CMD
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
However, each time I try to kill the process, I get an error saying that the PID doesn't exist, seemingly because the PID changed. Here is a sequence I am running into in my command line:
// first try, grab the PID and kill
[ec2-user@ip-myip ~]$ ps -ef | grep nohup
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
[ec2-user@ip-172-31-41-213 ~]$ kill 16580
-bash: kill: (16580) - No such process
// ?? - check for correct PID again, and try to kill again
[ec2-user@ip-myip ~]$ ps -ef | grep nohup
ec2-user 16583 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
[ec2-user@ip-172-31-41-213 ~]$ kill 16583
-bash: kill: (16583) - No such process
// try 3rd time, kill 1 PID up
[ec2-user@ip-myip ~]$ ps -ef | grep nohup
ec2-user 16584 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
[ec2-user@ip-myip ~]$ kill 16585
-bash: kill: (16585) - No such process
This is quite a struggle for me right now, since I need to kill/restart this nohup process. Any help is appreciated!
EDIT - I tried this approach to killing the process because it was posted as the 2nd highest rated answer in another thread.
Very very bad question ...
You are trying to kill your own grep process...
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
The command is grep --color=auto nohup
I'm not sure you can kill nohup at all: nohup launches your command in a particular way, but once the command has been launched, the nohup process itself exits.
If you want to grep the ps output:
ps -ef | grep '[n]ohup'
or
pgrep -fl nohup
because otherwise you are trying to kill not a nohup PID but the grep process itself...
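Putting it together, a minimal sketch (myapp is a placeholder for whatever command was actually started under nohup):
# find the real process; neither command matches its own grep
pgrep -fl myapp
# and kill it
pkill -f myapp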

Using GNU Parallel and rsync with passwords?

I have seen questions about GNU parallel with rsync; unfortunately, I cannot see a clear answer for my use case.
As part of my script I have this:
echo "file01.zip
file02.zip
file03.zip
" | ./gnu-parallel --line-buffer --will-cite \
-j 2 -t --verbose --progress --interactive \
rsync -aPz {} user@example.com:/home/user/
So, I run the script, and as part of its output, once it gets to the gnu-parallel step, I get this (because I have --interactive, I get prompted to confirm each file):
rsync -aPz file01.zip user@example.com:/home/user/ ?...y
rsync -aPz file02.zip user@example.com:/home/user/ ?...y
Computers / CPU cores / Max jobs to run
1:local / 4 / 2
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:2/0/100%/0.0s
... and then, the process just hangs here and does nothing; no numbers change or anything.
At this point, I can run this from another terminal:
$ ps axf | grep rsync
12754 pts/1 S+ 0:00 | | \_ perl ./gnu-parallel --line-buffer --will-cite -j 2 -t --verbose --progress --interactive rsync -aPz {} user@example.com:/home/user/
12763 pts/1 T 0:00 | | \_ rsync -aPz file01.zip user@example.com:/home/user/
12764 pts/1 R 0:11 | | | \_ ssh -l user example.com rsync --server -logDtprze.iLs --log-format=X --partial . /home/user/
12766 pts/1 T 0:00 | | \_ rsync -aPz file02.zip user@example.com:/home/user/
12769 pts/1 R 0:10 | | \_ ssh -l user example.com rsync --server -logDtprze.iLs --log-format=X --partial . /home/user/
... and so I can confirm that the processes have been started, but they are apparently not doing anything. To verify that they are not doing anything (as opposed to uploading, which is what they should be doing), I ran sudo iptraf, and it reported 0 kB/s for all traffic on wlan0, which is the only interface I have here.
The thing is, the server I'm logging in to accepts only SSH authentication with passwords. At first I thought --interactive would let me enter the passwords interactively, but instead it does what the man page says: "prompt the user about whether to run each command line and read a line from the terminal. Only run the command line if the response starts with 'y' or 'Y'." So OK, above I answered y, but I am never prompted for a password afterwards, and it seems the processes are hanging there waiting for one. My version is GNU parallel 20160422:
$ ./gnu-parallel --version | head -1
GNU parallel 20160422
So, how can I use GNU parallel to run multiple rsync tasks with passwords?
Use sshpass:
doit() {
  # single quotes matter here: in double quotes the shell would expand $$ to its own PID
  rsync -aPz -e 'sshpass -p MyP4$$w0rd ssh' "$1" user@example.com:/home/user
}
export -f doit
parallel --line-buffer -j 2 --progress doit ::: *.zip
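To feed it the exact file list from the question instead of a glob, the exported function can also take its arguments from stdin (a usage sketch):
printf '%s\n' file01.zip file02.zip file03.zip | parallel --line-buffer -j 2 --progress doit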
The fundamental problem with running interactive programs in parallel is: which program should get the input if two programs are ready for input? Therefore GNU Parallel's --tty implies -j1.

Why is docker exec killing a nohup process on exit?

I have a running Ubuntu Docker container with just a bash script inside. I want to start my application inside that container with docker exec, like this:
docker exec -it 0b3fc9dd35f2 ./main.sh
Inside the main script I want to run another application with nohup, as it is a long-running application:
#!/bin/bash
nohup ./java.sh &
#with this strange sleep the script is working
#sleep 1
echo `date` finish main >> /status.log
The java.sh script is as follow (for simplicity it is a dummy script):
#!/bin/bash
sleep 10
echo `date` finish java >> /status.log
The problem is that java.sh is killed immediately after docker exec returns. The question is: why?
The only solution I have found is to add a dummy sleep 1 to the first script after nohup is started. Then the second process runs fine. Do you have any idea why that is?
[EDIT]
A second solution is to add an echo or trap command to the java.sh script just before the sleep. Then it works fine. Unfortunately I cannot use this workaround, because in reality I have a Java process instead of this script.
This is not an answer, but I still don't have the required reputation to comment.
I don't know why nohup doesn't work here, but I found a workaround that worked, based on your ideas:
docker exec -ti running_container bash -c 'nohup ./main.sh &> output & sleep 1'
Okay, let's combine the two answers above :D
First, rcmgleite is exactly right: use the -d option to run the process 'detached' in the background.
And second (most important!), if you run a detached process, you don't need nohup at all!
deploy_app.sh
#!/bin/bash
cd /opt/git/app
git pull
python3 setup.py install
python3 -u webui.py >> nohup.out
Execute the script inside the container:
docker exec -itd container_name bash -c "/opt/scripts/deploy_app.sh"
Check it
$ docker attach container_name
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11768 1940 pts/0 Ss Aug31 0:00 /bin/bash
root 887 0.4 0.0 11632 1396 pts/1 Ss+ 02:47 0:00 /bin/bash /opt/scripts/deploy_app
root 932 31.6 0.4 235288 32332 pts/1 Sl+ 02:47 0:00 python3 -u webui.py
I know this is a late response, but I will add it here for documentation purposes.
When using nohup in bash and running it with docker exec on a container, you should use
$ docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
The -d option means:
-d, --detach    Detached mode: run command in the background
For more information about docker exec, see:
https://docs.docker.com/engine/reference/commandline/exec/
This should do the trick.
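As a quick check (a sketch reusing the container ID and the /status.log file from the question), you can verify that both scripts finish even though docker exec returns immediately:
docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
# wait out java.sh's 10-second sleep, then inspect the log
sleep 12
docker exec 0b3fc9dd35f2 cat /status.log   # should show both 'finish main' and 'finish java'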

How can I tell how many unicorn workers are running on a Heroku dyno right now?

I know that you can look in config/unicorn.rb (or the equivalent) and see what those settings are, but I'm wondering specifically how I can tell, right now, how many unicorn workers are running on a given dyno.
I tried ps aux after running 'heroku run bash', but that didn't show me the processes the dyno was actually running.
If you run:
$ heroku run bash
$ unicorn -c config/unicorn.rb &
$ ps euf
you should get something similar to this:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
u16236 2 0.0 0.0 19444 2024 ? S 20:55 0:00 bash GOOGLE_ANALYTICS_ID=XXX HEROKU_POSTGRESQL_COPPER_URL=postgres://XXX:
u16236 3 19.4 0.3 288716 131568 ? Sl 20:55 0:04 \_ unicorn master -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 5 31.0 0.3 305844 129636 ? Sl 20:55 0:04 | \_ sidekiq 3.2.5 app [0 of 2 busy] GOOGLE_ANALYTICS_
u16236 7 0.0 0.3 288716 124724 ? Sl 20:55 0:00 | \_ unicorn worker[0] -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 10 0.0 0.3 288716 124728 ? Sl 20:55 0:00 | \_ unicorn worker[1] -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 13 0.0 0.3 288716 124728 ? Sl 20:55 0:00 | \_ unicorn worker[2] -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 30 0.0 0.0 15328 1104 ? R+ 20:55 0:00 \_ ps euf GOOGLE_ANALYTICS_ID=XXX DEVISE_PEPPER=XXX
You can see that processes 7, 10, and 13 are my 3 Unicorn workers, each consuming 0.3% of total memory.
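To just count the workers in one line (a small sketch based on the ps output above; the [u] trick keeps grep from matching its own process):
ps euf | grep -c '[u]nicorn worker'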

Change user owner of process on Mac/Linux?

I have a program that is running as root. This app calls another program (processA) to run. When processA is running it is owned by root, but I want its owner to be the currently logged-on user. How can I do that?
Well, it's a little bit tricky... It depends on whether it's a daemon (service) or a command/app that you run yourself.
For the second case you can use the "su" command.
Here's a short example.
1. I create a simple script with the following content (it will sleep in the background for 100 seconds and will output the process list corresponding to this script):
#!/bin/bash
sleep 100 &
ps faux | grep test.sh
2. I run the "su" command like this (I'm currently logged in as "root" and I want to run this script as the "sandbox" user):
su - sandbox -c ./test.sh
sandbox = the username that will run this command
-c ./test.sh = the command it will execute
3. Output (first column = the user that owns this process):
root@i6:/web-storage/sandbox# su - sandbox -c ./test.sh
sandbox 18149 0.0 0.0 31284 1196 pts/0 S+ 20:13 0:00 \_ su - sandbox -c ./test.sh
sandbox 18150 0.0 0.0 8944 1160 pts/0 S+ 20:13 0:00 \_ /bin/bash ./test.sh
sandbox 18155 0.0 0.0 3956 644 pts/0 S+ 20:13 0:00 \_ grep test.sh
root@i6:/web-storage/sandbox#
I hope it will help,
Stefan
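For completeness, a related approach the answer above doesn't mention (my own addition, not from the original post): sudo can also run a single command as another user:
# run the script as the "sandbox" user; -u picks the target user
sudo -u sandbox ./test.sh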
