Running bash script from ruby not producing the correct pid - ruby

I am developing a ruby framework to run different jobs and one of the things that I need to do is to know when these jobs have ended in order to use their outputs and organize everything. I have been using it with no problem, but some colleagues are starting to use it on a different system and something really odd is happening. What I do is run the commands using
i,o,e,t = Open3.popen3(job.get_cmd)
p = t.pid
and later I check if the job has ended like this:
begin
Process.getpgid(p)
rescue Errno::ESRCH
# The process ended
end
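As an aside, the wait thread returned by popen3 can also report completion directly, which avoids polling with getpgid. Here is a minimal sketch of that check, reusing the question's own job.get_cmd (this only illustrates the API; it is not a fix for the ordering problem described below):
require 'open3'

i, o, e, t = Open3.popen3(job.get_cmd)  # job.get_cmd as in the question

# non-blocking: the waiter thread stays alive while the child is still running
puts "still running" if t.alive?

# blocking: t.value waits for the child and returns its Process::Status
status = t.value
puts "job finished with exit code #{status.exitstatus}"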
It works perfectly on the system I am running (Scientific Linux 6), but when a friend of mine started running it on Ubuntu 14.04 (using ruby 1.9.3p484) and the command is a concatenation of commands such as cmd1 && cmd2 && cmd3, each command is run at the same time by the system, not one after the other, and the pid returned by t.pid is none of the pids of the different processes being run.
I modified the code so that instead of running the concatenation of commands it creates a script with all the commands inside, and the command called from popen3 is just Open3.popen3("./script.sh"), but the behaviour is the same... All the commands are run at the same time and the pid that ruby knows is not the pid of any of the processes...
I am not sure if this is something ruby-related, but since running that script.sh by hand behaves as expected, running one command after the other, it seems that either ruby is not launching the process correctly or the system is not handling the process as it should. Do you know what might be happening?
Thanks a lot!
EDIT:
The command being run looks like this
./myFit.exe h vlq.config &> output_h.txt && ./myFit.exe d vlq.config &> output_d.txt && ./myFit.exe p vlq.config &> output_p.txt
This command, if run by hand and not inside the ruby script, runs perfectly, exactly as written. When run from the ruby script it runs all the myFit.exe executions at the same time (but I want them chained with && because I only want each one to run if the previous one finishes fine). myFit.exe is a tool which performs a fit; it is not a system command. Again, this command, if run by hand, runs perfectly.
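One possibility worth ruling out (my assumption, not something confirmed in this thread): &> is a bash-only redirection, and when popen3 is given a single command string the command goes through /bin/sh, which on Ubuntu is dash. dash parses cmd &> file as "run cmd in the background, then truncate file", which would start every myFit.exe at once and match the symptoms described. A sketch that forces bash and uses portable redirections instead:
require 'open3'

cmd = "./myFit.exe h vlq.config > output_h.txt 2>&1 && " \
      "./myFit.exe d vlq.config > output_d.txt 2>&1 && " \
      "./myFit.exe p vlq.config > output_p.txt 2>&1"

# run the whole chain under an explicit bash so bash syntax is honoured;
# t.pid is then the pid of that bash process, which lives until the chain ends
i, o, e, t = Open3.popen3("bash", "-c", cmd)
puts t.pid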

Related

How to run shell script on VM indefinitely?

I have a VM that I want running indefinitely. The server is always running but I want the script to keep running after I log out. How would I go about doing so? Creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.
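Tying this back to the Ruby context at the top of the page, a rough analogue of those three steps using Process.spawn might look like the sketch below (Ruby 1.9+; :pgroup only gives the child its own process group rather than nohup's SIGHUP immunity, so treat it as an approximation, and command-name is this answer's placeholder):
# redirect stdin/stdout/stderr away from the terminal, start the child in its
# own process group, and stop caring about its exit status
pid = Process.spawn("command-name",
                    :in  => "/dev/null",
                    :out => "/dev/null",
                    :err => "/dev/null",
                    :pgroup => true)
Process.detach(pid)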

Start a process and keep it running after the ruby script exits

I'm trying to write a ruby script that:
Runs a command/script,
Stores the command's process pid in a file so I can check if it's still running later, and
Keeps the command running after the ruby code exits.
I'm successful in steps 1 and 2, but it looks like the started script (i.e., the child process) terminates once the ruby code is finished.
This is the last version of what I could think about (super simplified):
pid = fork do
exec "/my/fancy/daemon/style/script"
end
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }
Can you please tell me what I am doing wrong? The ultimate goal is to keep the other script (i.e., the child process) running after the ruby code exits.
You can "detach" your child process:
Process.detach(pid)
See Process#detach for more info.
If you're running your script in a shell, and if your script is the last interactive process, your virtual terminal may exit and cause your child process to receive a hangup as well. If you can do without sending output to the terminal, you can call Process.daemon before running exec.
See Process#daemon.
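Putting the question's snippet together with the Process.detach suggestion, one possible shape (the script path and pid file are the question's own; this is a sketch, not a drop-in fix):
pid = fork do
  exec "/my/fancy/daemon/style/script"
end

# detach so the parent neither waits on the child nor leaves a zombie behind,
# and record the pid for a later "is it still running?" check
Process.detach(pid)
File.open('tmp/process.pid', 'w') { |file| file.write(pid.to_s) }
One caveat if you also take the Process.daemon route: Process.daemon forks internally, so the pid returned by the outer fork would no longer be the pid of the process that survives.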

How to ssh into a shell and run a script and leave myself at the prompt

I am using elastic map reduce from Amazon. I am sshing into the hadoop master node and executing a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh
It sshes me into the master node and runs the hive script. The hivescript contains the following lines:
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but comes back to the prompt from where I ran the script. How do I leave it sshed into the hadoop master node at the hive prompt?
Consider using Expect, then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\n"
send "javastuff\n"
interact
EOF
These are the most common answers I've seen (with the drawbacks I ran into with them):
1. Use expect
This is probably the most well rounded solution for most people.
I cannot control whether expect is installed in my target environments.
Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being.
Eventually I came back to this as the final solution, after realizing I had violated one of the 3 virtues of a good programmer -- false impatience.
2. Use screen / tmux to start the shell, then inject commands from an external process.
This works ok, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to just re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
3. If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options).
I'll go into more detail on problems I had with this under #4.
4. Make a bash script with #!/bin/bash --init-file as the shebang; this will cause your script to execute, then leave an interactive shell running afterward.
This and #3 had issues with some programs that required user interaction before the shell is presented to them. It worked fine with some programs (like ssh); others (telnet, vxsim) presented a prompt, but no text was passed along to the program, only ctrl characters like ^C.
5. Do something like this: xterm -e 'commands; here; exec bash'. This will cause it to create an interactive shell after your commands execute.
This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project.
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I to the program executed (no parms means default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky ... but alas.

BASH: Shell Script as Init Script

I have a shell script that calls a java jar file and runs an application. There's no way around this, so I have to work with what I have.
When you execute this shell script, it outputs the application status and just sits there (pretty much a console); so when something happens to the program it updates the screen. This is like any normal non-daemonized/backgrounded process. The only way to get out of it is ctrl-c, which then ends the process altogether. I do know that I could get around this by doing path_to_shell_script/script.sh &, which would background it for my session (I could use nohup if I wanted to log out).
My issue is, I just don't know how to put this script into an init script. I have most of the init script written, but when I try to daemonize it, it doesn't work. I've almost got it working; however, when I run the init script, it actually spawns the same "console" as the script, and just sits there until I hit ctrl-c. Here's the line in question:
daemon ${basedir}/$prog && success || failure
The problem is that I can't background just the daemon ${basedir}/$prog part and I think that's where I'm running into the issue. Has anyone been successful at creating an init script FOR a shell script? Also this shell script is not daemonizable (you can background it, but the underlying program does not support a daemonize option, or else I would have just let the application do all the work).
You need to open a subshell to execute it. It also helps to redirect its output to a file, or at least to /dev/null.
Something like:
#!/bin/bash
(
{ daemon ${basedir}/$prog && success || failure ; } &>/dev/null
) &
exit 0
It works as follows: ( list ) & runs list in a background subshell. { list; } is a group command; it's used here to capture all the output of your commands and send it to /dev/null.
I have had success with initially detached screen sessions for running things like the Half-Life server and my custom "tail logfile" bash scripts.
To start something in the background:
screen -dmS arbitrarySessionName /path/to/script/launchService.sh
To look at the process:
screen -r arbitrarySessionName
Hope you find this useful, gl!

Spawn a background process in Ruby

I'm writing a ruby bootstrapping script for a school project, and part of this bootstrapping process is to start a couple of background processes (which are written and function properly). What I'd like to do is something along the lines of:
`/path/to/daemon1 &`
`/path/to/daemon2 &`
`/path/to/daemon3 &`
However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8.
I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit.
So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)?
As long as you are working on a POSIX OS you can use fork and exec.
fork = Create a subprocess
exec = Replace current process with another process
You then need to indicate, via Process.detach, that your main process is not interested in the created subprocesses.
job1 = fork do
exec "/path/to/daemon01"
end
Process.detach(job1)
...
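Extending that to the question's three daemons (the /path/to/daemonN paths are the question's own placeholders), a sketch that also works on Ruby 1.8:
%w[/path/to/daemon1 /path/to/daemon2 /path/to/daemon3].each do |daemon|
  pid = fork do
    exec daemon
  end
  # detach each child so the bootstrap script can exit cleanly without zombies
  Process.detach(pid)
end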
A better way to pseudo-daemonize:
`((/path/to/daemon1 &)&)`
will drop the process into its own shell.
The best way to actually daemonize:
`service daemon1 start`
and make sure the server/user has permission to start the actual daemon. Check out the 'daemonize' tool for Linux to set up your daemon.
