I ran an external Python script with system(run_command), but I want to get the PID of the running Python script. I tried using fork to get a PID, but the PID returned was the PID of the fork block, not of the Python process. How can I get the PID of the Python process? Thanks.
arguments = [
  "-f #{File.join(@public_path, @streaming_verification.excel.to_s)}",
  "-duration 30",
  "-output TEST_#{@streaming_verification.id}"
]
cmd = [
  "python",
  @automation[:bin],
  arguments.join(' ')
]
run_command = cmd.join(' ').strip
task_pid = fork do
  system(run_command)
end
(Update)
I tried the spawn method, but the returned pid was still not the pid of the running Python process. I got the pid 5177, but the pid I actually wanted was 5179.
run.sh
./main.py -f ../tests/test_setting.xls -o testing_`date +%s` -duration 5
sample.rb
cmd = './run.sh'
pid = Process.spawn(cmd)
print pid
Process.wait(pid)
According to Kernel#system:
Executes command… in a subshell. command… is one of following forms.
So you get the pid of the subshell, not of the command itself.
How about using Process.spawn instead? It returns the pid of the subprocess.
run_command = '...'
pid = Process.spawn(run_command)
Process.wait(pid)
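Note that in the update above, run.sh is itself a shell process that then starts ./main.py, which is why 5177 differed from 5179. A sketch (untested; the file names and options are placeholders taken from the question) of two ways around that: spawn the interpreter directly with an argument list so no shell is involved, or keep run.sh but exec the Python script so the shell replaces itself.
# Passing the program and its arguments as separate strings skips the shell,
# so spawn returns the Python process's own pid.
pid = Process.spawn(
  "python", "./main.py",
  "-f", "../tests/test_setting.xls",
  "-duration", "5",
  "-output", "testing"
)
Process.detach(pid)   # or Process.wait(pid) to block until it finishes
puts pid

# Alternatively, keep run.sh but start the command with `exec`, e.g.
#   exec ./main.py -f ../tests/test_setting.xls -duration 5
# so the shell is replaced by main.py and the spawned pid matches it.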
Related
I want to spawn a potentially long running process and save its PID so I can kill it later if needed.
The problem is that the PID I get from spawn is not the same as the one I get from Process.pid (in the child process). The result of Process.pid matches what I see when running ps on the command line.
Here is the code for the parent and the child:
parent.rb
puts "Hi parent"
pid = spawn "ruby child.rb 10 &"
puts pid
Process.detach(pid)
sleep 3
puts "Bye parent"
child.rb
puts "Start pid #{Process.pid}"
sleep ARGV.first.to_i
puts "End pid #{Process.pid}"
Output
Hi parent
1886789
Start pid 1886791
Bye parent
End pid 1886791
Looks like the background job operator (&) is causing the intermediate process 1886789. When I remove the background job operator, I get the following output:
Hi parent
93185
Start pid 93185
Bye parent
End pid 93185
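Since spawn already returns without waiting for the child, the & buys nothing here. A minimal sketch of the same parent without it (assuming the child.rb above):
# parent.rb (sketch): no "&" means no intermediate shell, so the pid returned
# by spawn is the same one child.rb reports via Process.pid.
puts "Hi parent"
pid = spawn("ruby", "child.rb", "10")   # argument-list form also avoids the shell
puts pid
Process.detach(pid)    # let the child be reaped in the background
sleep 3
puts "Bye parent"
# Process.kill("TERM", pid) would stop it early if needed.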
So, long story short, I'm trying to run a Linux Perl script on Windows (with a few modifications).
On Unix it works just fine, but on Windows I've come to the conclusion that calling system doesn't work the same way as on Unix, so it doesn't create multiple processes.
Below is the code:
use strict;
use warnings;

open (FIN, 'words.txt');
while (<FIN>) {
    chomp;
    my $line = $_;
    system( "perl script.pl $line &" );
}
close (FIN);
So basically, I have 5 different words in words.txt, and I want each one of them to be used in turn when calling script.pl, which means:
script.pl word1
script.pl word2
script.pl word3
etc.
As of now it only picks up the first word in words.txt and keeps looping with that one. As I said, on Unix it works perfectly, but not on Windows.
I've tried using start, as in system( "start perl script.pl $line &" );, and it works... except it opens 5 additional CMD windows to do the work. I want it to do the work in the same window.
If anyone has any idea how to make this work on Windows, I'll really appreciate it.
Thanks!
According to perlport:
system
(Win32) [...] system(1, @args) spawns an external process and immediately returns its process designator, without waiting for it to terminate. Return value may be used subsequently in wait or waitpid. Failure to spawn() a subprocess is indicated by setting $? to 255 << 8. $? is set in a way compatible with Unix (i.e. the exit status of the subprocess is obtained by $? >> 8, as described in the documentation).
I tried this:
use strict;
use warnings;
use feature qw(say);

say "Starting..";
my @pids;
for my $word (qw(word1 word2 word3 word4 word5)) {
    my $pid = system(1, "perl script.pl $word" );
    if ($? == -1) {
        say "failed to execute: $!";
    }
    push @pids, $pid;
}

# wait for all children to finish
for my $pid (@pids) {
    say "Waiting for child $pid ..";
    my $ret = waitpid $pid, 0;
    if ($ret == -1) {
        say "  No such child $pid";
    }
    if ($? & 127) {
        printf "  child $pid died with signal %d\n", $? & 127;
    }
    else {
        printf "  child $pid exited with value %d\n", $? >> 8;
    }
}
say "Done.";
With the following child script, script.pl:
use strict;
use warnings;
use feature qw(say);
say "Starting: $$";
sleep 2+int(rand 5);
say "Done: $$";
sleep 1;
exit int(rand 10);
I get the following output:
Starting..
Waiting for child 7480 ..
Starting: 9720
Starting: 10720
Starting: 9272
Starting: 13608
Starting: 13024
Done: 13608
Done: 10720
Done: 9272
Done: 9720
Done: 13024
child 7480 exited with value 9
Waiting for child 13344 ..
child 13344 exited with value 5
Waiting for child 17396 ..
child 17396 exited with value 3
Waiting for child 17036 ..
child 17036 exited with value 6
Waiting for child 17532 ..
child 17532 exited with value 8
Done.
Seems to work fine..
You can use Win32::Process to get finer control over creating a new process than system gives you on Windows. In particular, the following doesn't create a new console for each process like using system("start ...") does:
#!/usr/bin/env perl
use warnings;
use strict;
use feature qw/say/;

# Older versions don't work with an undef appname argument.
# Use the full path to perl.exe on them if you can't upgrade.
use Win32::Process 0.17;

my @lines = qw/foo bar baz quux/; # For example, instead of using a file
my @procs;
for my $line (@lines) {
    my $proc;
    if (!Win32::Process::Create($proc, undef, "perl script.pl $line", 1,
                                NORMAL_PRIORITY_CLASS, ".")) {
        $_->Kill(1) for @procs;
        die "Unable to create process: $!\n";
    }
    push @procs, $proc;
}
$_->Wait(INFINITE) for @procs;
# Or
# use Win32::IPC qw/wait_all/;
# wait_all(@procs);
As Yet Another Way To Do It, the start command takes a /b option to not open a new command prompt.
system("start /b perl script.pl $line");
I am launching a remote script with nohup via Ruby Net::SSH.
Net::SSH.start(ip_address, user, options) do |ssh|
script = File.join(remote_path, 'run_case.py')
cmd = "nohup python #{script} #{args} < /dev/null &"
ssh.exec(cmd)
end
All stdout and stderr is saved to a file on the remote machine.
Is it possible to get the PID of the remote script so that I can kill it if needed?
EDIT 1:
I have modified the script as suggested.
Net::SSH.start(ip_address, user, options) do |ssh|
script = File.join(remote_path, 'run_case.py')
cmd = "nohup python #{script} #{args} < /dev/null & echo $! > save_pid.txt"
ssh.exec(cmd)
pid = ssh.exec!("cat save_pid.txt")
puts pid
end
It complains that it cannot find the file. Is this because the file does not exist yet? I would prefer to avoid any sleep command if possible.
EDIT 2: Maybe this is not elegant, but it works. I have created a second SSH session and used pgrep.
script = File.join(remote_path, 'run_case.py')

Net::SSH.start(ip_address, user, options) do |ssh|
  cmd = "nohup python #{script} #{args} < /dev/null &"
  ssh.exec(cmd)
end

Net::SSH.start(ip_address, user, options) do |ssh|
  cmd = "python #{script} #{args}"
  mesh_pid = ssh.exec!("pgrep -f '#{cmd}'")
  puts mesh_pid
end
You should be able to determine the PID (and store it in a file) as follows:
Net::SSH.start(ip_address, user, options) do |ssh|
script = File.join(remote_path, 'run_case.py')
cmd = "nohup python #{script} #{args} < /dev/null & echo $! > save_pid.txt"
ssh.exec(cmd)
end
In the shell, $! holds the PID of the most recently backgrounded process. If you need to kill the process, you can do it via:
kill -9 `cat save_pid.txt`
rm save_pid.txt
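If reading the pid file back races with its creation, one alternative (a sketch, untested; run_case.log is just a placeholder name) is to echo $! in the same remote command and capture it from stdout, so no pid file is needed:
Net::SSH.start(ip_address, user, options) do |ssh|
  script = File.join(remote_path, 'run_case.py')
  # Redirecting stdout/stderr lets the SSH channel close once the command
  # returns; "run_case.log" stands in for wherever you already send the output.
  cmd = "nohup python #{script} #{args} > run_case.log 2>&1 < /dev/null & echo $!"
  pid = ssh.exec!(cmd).to_i
  puts pid
  # Later, if the job needs to be stopped:
  # ssh.exec!("kill #{pid}")
end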
I am trying to write a simple Python server that runs and kills another Python script. The problem I am having is that the kill command gets issued without error but doesn't kill the process. I have tried it manually with kill -INT pid without any result.
The command works in a shell but not from Python. I am doing a 'soft' kill because the script controls a light, and a kill -9 doesn't turn the light off.
NOTE: the script is running on a Yocto distro.
import socket, subprocess

srv = socket.socket()
srv.bind(('', 1340))
srv.listen(3)

while 1:
    connection, address = srv.accept()
    data = int(connection.recv(1024))
    if data == 0:
        ps_id = subprocess.check_output('ps|grep python\ /home/root/python/backlight_mod.py', shell=True)
        ps_id = ps_id.split(' ')[2]
        subprocess.call('kill -INT ' + str(ps_id), shell=True)
        print 'Terminated'
    elif data == 1:
        subprocess.call('python ~/python/backlight_mod.py &', shell=True)
    connection.close()
The output from 'kill -l':
HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH IO PWR SYS RTMIN RTMIN+1 RTMIN+2 RTMIN+3 RTMIN+4 RTMIN+5 RTMIN+6 RTMIN+7 RTMIN+8 RTMIN+9 RTMIN+10 RTMIN+11 RTMIN+12 RTMIN+13 RTMIN+14 RTMIN+15 RTMAX-14 RTMAX-13 RTMAX-12 RTMAX-11 RTMAX-10 RTMAX-9 RTMAX-8 RTMAX-7 RTMAX-6 RTMAX-5 RTMAX-4 RTMAX-3 RTMAX-2 RTMAX-1 RTMAX
Consider os.kill as suggested by @Petesh, so your code would look something like this:
import os

if data == 0:
    ps_id = subprocess.check_output('ps|grep python\ /home/root/python/backlight_mod.py', shell=True)
    ps_id = ps_id.split(' ')[2]
    os.kill(int(ps_id), 3)
    print 'Terminated'
Also consider the following to extract the pid:
p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for process in out.splitlines():
    if 'backlight_mod.py' in process:
        pid = int(process.split(None, 1)[0])
        os.kill(pid, 3)
I am experimenting with multiple processes. I am trapping SIGCLD to execute something when the child is done. It works in IRB but not when I execute it as a Ruby script.
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
When I run the above from IRB, both lines are printed, but when I run it as a Ruby script, the line within the trap handler does not show up.
IRB gives you an outer loop, which means that the Ruby process doesn't exit until you decide to kill it. The problem with your Ruby script is that the main process finishes, and exits, before the child does, so it never gets the chance to trap the signal.
My guess is that this is a test script, and chances are your real program won't have the parent finishing before the child. To see your trap working in a plain Ruby script, add a sleep at the end:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
sleep 3
To populate the $? global variable, you should explicitly wait for the child process to exit:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code #{$? >> 8}" }
Process.wait
If you do want the child to run after the parent process has died, you want a daemon (double fork).
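For completeness, a minimal sketch of what a double fork looks like in plain Ruby (not taken from the answer above):
# Double-fork sketch: the intermediate child exits right away, so the
# grandchild is reparented to init and keeps running after the parent is gone.
pid = fork do
  Process.setsid          # start a new session, detaching from the terminal
  fork do                 # the grandchild is the long-running process
    sleep 2
    puts 'hello'
  end
  exit!                   # intermediate child exits without running at_exit hooks
end
Process.wait(pid)         # reap the intermediate child; no zombie is left behind
# Ruby also provides Process.daemon, which performs these steps for you.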
When you run your code in IRB, the main thread belongs to IRB, so everything you've called lives inside a virtually infinite loop.
When you execute it as a script, the main thread is your own, and it dies before the trap fires. Try this:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
sleep 5 # needed to keep the main thread from dying immediately
Hope it helps.