How do I call a Ruby script from a shell script? - ruby

I am trying to write a watchdog for a Ruby application. So far, I have a cron job which is successfully calling a shell script:
#!/bin/sh
# Exit if the process is already running; otherwise log the outage and restart it.
if ps -ef | grep -v grep | grep adpc.rb ; then
    exit 0
else
    NOW=$(date +"%m-%d-%Y %T")
    echo "$NOW - CRITICAL: ADPC service is down! Trying to initialize..." >> che.log
    cd lib
    nohup ruby adpc.rb &
    exit 0
fi
This code runs correctly from the command line, but the shell script does not execute the Ruby script when it is called from a cron job.
Any help would be appreciated.
The Ruby file has +x permissions.
The nohup.out file is empty.
Solution: replace the bare "ruby" command with its full path (the output of "which ruby").
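For example, if "which ruby" prints /usr/local/bin/ruby (the actual path will vary per system), the launch line becomes:
nohup /usr/local/bin/ruby adpc.rb &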
Thanks to all for the replies =)

This is usually caused by an incorrect environment. Check the Ruby output in the created nohup.out file and log the stderr of nohup itself to a file.
It's frequently solved by starting the script with:
#!/bin/bash
source ~/.bash_profile
source ~/.bashrc
This will ensure that you run with bash instead of sh, and that any settings like PATH you've configured in your init files will be set.
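A quick way to see what cron's environment actually looks like is to dump it from a temporary crontab entry and compare the result with your interactive shell (the output path here is just an example):
* * * * * env > /tmp/cron-env.txt 2>&1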

Related

Run bash script loop in background which will write result of jar command to file

I'm a novice at running bash scripts. (Feel free to suggest a better title if this one is incorrect.)
I want to run a jar file from a bash script in a loop, and it should write the output of the jar command to a file.
Bash file datagenerate.sh
#!/bin/bash
echo "Total iterations are 500"
for i in {1..500}
do
    the_output="$(java -jar data-generator.jar 10 1 mockData.csv data_200GB.csv)"
    echo "$the_output"
    echo "Iteration $i processed"
done
no_of_lines="$(wc -l data_200GB.csv)"
echo "${no_of_lines}"
I'm running the above script using the command nohup sh datagenerate.sh > datagenerate.log &. I want to run this script in the background, so that even if I log out from ssh it keeps running and the output goes into datagenerate.log.
But when I run the above command and hit enter or close the terminal, the process ends. Only "Total iterations are 500" gets logged to the output file.
Let me know what I'm missing. I followed these two links to create the above shell script: link-1 & link2.
nohup sh datagenerate.sh > datagenerate.log &
nohup should work this way without using the screen program, but depending on your distro, your sh shell might be linked to dash.
Just make your script executable:
chmod +x datagenerate.sh
and run your command like this:
nohup ./datagenerate.sh > datagenerate.log &
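If you also want any errors from the java command to land in the same log (instead of a separate nohup.out), redirect stderr as well:
nohup ./datagenerate.sh > datagenerate.log 2>&1 &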
You should check this out:
https://linux.die.net/man/1/screen
With this program you can close your shell while a command or script is still running. It will not be aborted, and you can pick the session up again later.
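A typical screen session looks like this (the session name datagen is arbitrary):
screen -S datagen
./datagenerate.sh > datagenerate.log 2>&1
# detach with Ctrl-a d; the script keeps running after you log out
screen -r datagen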

Shell script runs well in telnet, but not via cron

I just started to learn shell scripting.
I am trying to run a script from cron, without success. When I run "sh script.sh" over telnet, the output is as expected. What am I missing?
script.sh is in /usr/bin folder.
#!/bin/sh
var1 = $(opkg update)
echo ${var1}
Try simply this; in shell, an assignment must not have spaces around the "=" sign (var1 = ... tries to run a command named var1 instead of assigning):
#!/usr/bin/env bash
var1=$(opkg update)
echo "$var1"
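If it works in the shell but still fails under cron, it is usually a PATH problem: cron runs with a minimal environment, so commands like opkg may need their full path (the path below is typical for OpenWrt; verify it with "which opkg"):
#!/bin/sh
var1=$(/bin/opkg update)
echo "$var1"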

Shell script: write stderr & stdout to file

I know this has been asked many times, but I can't find a suitable answer for my case.
I have a cron job running a backup script that uses rsync, and I would like to see all output, errors or not, from all of the script's commands. I must write the redirection inside the script itself, and I do not want to see the output in my shell.
I have been trying with no success. Below is part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding all kinds of exec | tee $BKLOG at the beginning of the script, with &> or 2>&1 added at various parts of the command line, but all failed. I either get an empty log file or an incomplete one. I need to see in the log file what rsync has done, and the error if the script failed before syncing.
Thank you for your help. My shell is zsh, so any zsh solution is welcome.
To redirect all the stdout/stderr to a file place this line on top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"
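Note that &> is a bash/zsh shorthand; the equivalent form that also works in plain sh is:
exec > "$BKLOG" 2>&1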

How to source a csh script from inside a bash script

My default shell is bash. I have set some environment variables in my .bashrc file.
I installed a program which uses a .cshrc file. It contains the paths to several cshell scripts.
When I run the following commands in a shell window, it works perfectly:
exec csh
source .cshrc
exec bash
I have tried to put these commands in a bash script, but unfortunately it didn't work.
Is there another way to write a script that achieves the same result as running the commands from a shell window?
I hope my question is now clear.
Many thanks for any help.
WARNING: don't put the following script in your .bashrc; it will reload bash and so reload .bashrc again and again (stoppable with C-c anyway).
Preferably use this script in your kit/CDS startup script (Cadence, presumably).
WARNING 2: if anything in your file2source fails, the whole 'trick' stops.
Call this script : cshWrapper.csh
#! /bin/csh
# to launch using
# exec cshWrapper.csh file2source.sh
source $1        # source the file passed as the first argument
exec $SHELL -i   # replace csh with an interactive copy of the parent shell
and launch it using
exec ./cshWrapper.csh file2source.sh
it will: launch csh, source your file, and come back to the same parent bash shell.
Example :
$> ps
PID TTY TIME CMD
7065 pts/0 00:00:02 bash
$>exec ./cshWrapper.csh toggle.csh
file sourced
1
$> echo $$
7065
where in my case I use the file toggle.csh:
#! /bin/csh
# source ./toggle.csh
if (! $?TOGGLE) then
    setenv TOGGLE 0
endif
if ($?TOGGLE) then
    echo 'file sourced'
    if ($TOGGLE == 0) then
        setenv TOGGLE 1
    else
        setenv TOGGLE 0
    endif
endif
echo $TOGGLE
Hope it helps
New proposal, since I faced another problem with exec.
exec kills whatever remains in the script, unless you force a fork by putting a pipe after it (exec script | cat). In that case, if the script sets environment variables, they are not propagated back to the script itself, which is not what we want. The only solution I found is to use 3 files (for the example, let's call them main.bash, which calls first cshrc and then second.sh).
#! /bin/bash
#_main.bash_
exec /bin/csh -c "source /path_to_file/cshrc; exec /bin/bash -i -c /path_to_file/second.sh"
# after exec nothing remains (like Attila the Hun)
# the rest of the script is in 'second.sh'
This way, I can launch an old cshrc-based design kit in a single script call, still run some bash commands afterwards, and finally launch the main program in bash (say, virtuoso).
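For completeness, a minimal second.sh for this pattern could look like the following sketch (the echo line and virtuoso are just placeholders; the point is that this bash process inherits everything the sourced cshrc exported):
#! /bin/bash
#_second.sh_
# this bash was exec'ed from csh, so it sees the environment
# that the sourced cshrc set up
echo "PATH is now: $PATH"
# ... further bash commands, then launch the main program, e.g.:
# virtuoso &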

Running Linux executables from Linux shell scripts

I want to run an executable from my shell script. The executable is located at /usr/bin/to_run.
My shell script (which calls the above executable) is in the /usr/bin folder.
The shell script is :
#!/bin/bash
# kill all existing instances of synergy
killall synergys
sh "/usr/bin/synergys"
if [ $? -eq 1 ]; then
    echo "synergy server started"
else
    echo "error in starting"
fi
I am getting an error saying: "synergys: no process found".
When I run the same thing, /usr/bin/synergys, directly from the terminal, it runs fine, but from within the script there are problems. I don't understand why.
Thank you in advance.
That error is from the killall command; it's saying there are no candidate processes matching your argument.
If you don't want to be notified when no processes match, just use the quiet option:
killall -q synergys
From the killall man page:
-q, --quiet
Do not complain if no processes were killed.
If /usr/bin/synergys is an executable and not a shell script, you should run it directly, not via the shell:
/usr/bin/synergys
Or, since /usr/bin is on the $PATH of most people, you could simply write:
synergys
If /usr/bin/synergys is actually a shell script, it should be executable (for example, 555 or -r-xr-xr-x permissions), and you can still write just synergys to execute it. You only need to use an explicit sh if the file /usr/bin/synergys is not executable and is a shell script.
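Putting both answers together, a cleaned-up version of the original script might look like this; note also that the original tested $? -eq 1 for success, whereas an exit status of 0 is what conventionally signals success:
#!/bin/bash
# kill any existing instances; -q suppresses "no process found"
killall -q synergys
# run the executable directly instead of through sh
/usr/bin/synergys
if [ $? -eq 0 ]; then
    echo "synergy server started"
else
    echo "error in starting"
fi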
