Running Linux executables from Linux shell scripts

I want to run an executable from my shell script. The executable is located at /usr/bin/to_run.
My shell script (which calls the above executable) is also in the /usr/bin folder.
The shell script is:
#!/bin/bash
# kill all existing instances of synergy
killall synergys
sh "/usr/bin/synergys"
if [ $? -eq 1 ]; then
    echo "synergy server started"
else
    echo "error in starting"
fi
I am getting an error saying: "synergys: no process found".
When I run the same thing, /usr/bin/synergys, directly from the terminal it runs fine, but from within the script there are problems. I don't understand why.
Thank you in advance.

That error is from the killall command; it's saying there are no candidate processes matching your argument.
If you don't want to be notified when no processes match, just use the quiet option:
killall -q synergys
From the killall man page:
-q, --quiet
Do not complain if no processes were killed.

If /usr/bin/synergys is an executable and not a shell script, you should run it directly, not via sh:
/usr/bin/synergys
Or, since /usr/bin is on the $PATH of most people, you could simply write:
synergys
If /usr/bin/synergys is actually a shell script, it should be executable (for example, 555 or -r-xr-xr-x permissions), and you can still write just synergys to execute it. You only need to use an explicit sh if the file /usr/bin/synergys is not executable and is a shell script.
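Putting both points together, a minimal sketch of the corrected script could look like this (note that a successful start returns exit status 0, so the test is inverted relative to the original):
#!/bin/bash
# quietly kill any existing instances of synergy; -q suppresses the
# "no process found" complaint when none are running
killall -q synergys
# run the executable directly instead of passing it to sh
/usr/bin/synergys
if [ $? -eq 0 ]; then
    echo "synergy server started"
else
    echo "error in starting"
fi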

Related

$? from bash script command executed by Tcl (open pipe) on Windows returns wrong value

I've got a Tcl script with two ways of executing a bash script:
#exec bash ./run.sh
open "|bash ./run.sh r"
The bash script is shown below:
#!/bin/bash
ls
if [ "$?" != "0" ]; then
echo "ERROR: Status failed!" > status
else
echo "Everything is OK!" > status
fi
I'm using tclsh for Windows with bash from Git Bash. When I use:
exec bash ./run.sh
I get this in the status file:
Everything is OK!
whereas with:
open "|bash ./run.sh r"
I get:
ERROR: Status failed!
Is there any way to correctly detect the exit code when the script is run through a Tcl pipe?
You don't describe whether you get different results out of the ls part of the script. That matters; the ls command is most certainly capable of changing its behaviour according to the environment in which it is invoked. This matters because Tcl executes subprocesses (on Windows) directly using the CreateProcess() system call, rather than the various wrapped versions that Cygwin and git bash use. Other possibilities are that you're launching the script in a different directory and so on.
However, in general we'd expect a script to behave very similarly when launched via exec or via open |… r as they share a common core of functionality. The only differences are to do with how output and termination are waited for.
If you create a subprocess pipeline, by default you won't get to find out about errors from it until you close the pipeline. exec generates any errors “immediately” because it doesn't return control to you until the subprocess has terminated and all output has been read.
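For the pipeline case, a minimal Tcl sketch of that pattern might look like this (file names as in the question; the exit status is recovered from the CHILDSTATUS error code raised by close):
set pipe [open "|bash ./run.sh" r]
set output [read $pipe]
# close raises an error if the child exited non-zero (or wrote to stderr);
# the exit status is carried in the -errorcode of that error
if {[catch {close $pipe} msg opts]} {
    lassign [dict get $opts -errorcode] class pid status
    if {$class eq "CHILDSTATUS"} {
        puts "run.sh exited with status $status"
    }
} else {
    puts "run.sh exited with status 0"
}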

Run bash script loop in background which will write result of jar command to file

I'm a novice at running bash scripts (you can suggest a better title if the one I've given is incorrect).
I want to run a jar file using a bash script in a loop, and have the output of the java command written to a file.
Bash file datagenerate.sh
#!/bin/bash
echo Total iterations are 500
for i in {1..500}
do
    the_output="$(java -jar data-generator.jar 10 1 mockData.csv data_200GB.csv)"
    echo $the_output
    echo Iteration $i processed
done
no_of_lines="$(wc -l data_200GB.csv)"
echo "${no_of_lines}"
I'm running the above script with the command nohup sh datagenerate.sh > datagenerate.log &. I want to run this script in the background, so that even if I log out of ssh it keeps running and the output goes into datagenerate.log.
But when I run the above command and hit enter or close the terminal, the process ends. Only Total iterations are 500 gets logged to the output file.
Let me know what I'm missing. I followed the following two links to create the above shell script: link-1 & link2.
nohup sh datagenerate.sh > datagenerate.log &
nohup should work this way without using the screen program, but depending on your distro, your sh might be linked to dash.
Just make your script executable:
chmod +x datagenerate.sh
and run your command like this:
nohup ./datagenerate.sh > datagenerate.log &
You should also check this out:
https://linux.die.net/man/1/screen
With this program you can close your shell while a command or script is still running. It will not be aborted, and you can pick the session up again later.
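A minimal usage sketch (the session name datagen is arbitrary):
screen -S datagen                        # start a named session
./datagenerate.sh > datagenerate.log 2>&1
# detach with Ctrl-a d, log out, then later reattach with:
screen -r datagen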

Writing a bash script, how do I stop my session from exiting when my script exits?

bash scripting noob here. I've found this article: https://www.shellhacks.com/print-usage-exit-if-arguments-not-provided/ that suggests putting
[ $# -eq 0 ] && { echo "Usage: $0 argument"; exit 1; }
at the top of a script to ensure arguments are passed. Seems sensible.
However, when I do that and test that the line does indeed work (by running the script without supplying any arguments: . myscript.sh), the script does indeed exit, but so does the bash session I was calling the script from. This is very irritating.
Clearly I'm doing something wrong but I don't know what. Can anyone put me straight?
. myscript.sh is a synonym for source myscript.sh, which runs the script in the current shell (rather than as a separate process). So exit terminates your current shell. (return, on the other hand, wouldn't; it has special behaviour for sourced scripts.)
Use ./myscript.sh to run it "the normal way" instead. If that gives you a permission error, make it executable first, using chmod a+x myscript.sh. To inform the kernel that your script should be run with bash (rather than /bin/sh), add the following as the very first line in the script:
#!/usr/bin/env bash
You can also use bash myscript.sh if you can't make it executable, but this is slightly more error-prone (somebody might do sh myscript.sh instead).
The question is not entirely clear, but in short: if you source a script (source script_name or . script_name), it is interpreted in the current bash process, and the same holds when you run a function. When you call a script as a command instead, the calling bash forks a new bash process and waits until it terminates, so an exit in that script does not exit the caller. But when the exit builtin runs in the current bash process, it exits the current process.
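If you want the usage check itself to be safe whether the script is sourced or executed, one possible sketch is to pick return or exit based on how the file was invoked (this relies on BASH_SOURCE, so it is bash-specific):
#!/usr/bin/env bash
if [ $# -eq 0 ]; then
    echo "Usage: $0 argument"
    if [ "${BASH_SOURCE[0]}" != "$0" ]; then
        return 1    # sourced: hand control back to the calling shell
    else
        exit 1      # executed: terminate only the script's own process
    fi
fi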

How do I call Ruby script from a shell script?

I am trying to write a watchdog for a Ruby application. So far, I have a cron job which is successfully calling a shell script:
#!/bin/sh
if ps -ef | grep -v grep | grep adpc.rb ; then
    exit 0
else
    NOW=$(date +"%m-%d-%Y"+"%T" )
    echo "$NOW - CRITIC: ADPC service is down! Trying to initialize..." >> che.log
    cd lib
    nohup ruby adpc.rb &
    exit 0
fi
This code runs correctly from the command line, but I am not able to make the shell script execute the Ruby script when it is called from a cron job.
Any help would be appreciated.
The Ruby file has +x permissions.
The nohup.out file is empty.
Solution: replace the bare "ruby" command with the full path (the output of which ruby).
Thanks to all for the replies =)
This is usually caused by an incorrect environment. Check the Ruby output in the created nohup.out file and log the stderr of nohup itself to a file.
It's frequently solved by starting the script with:
#!/bin/bash
source ~/.bash_profile
source ~/.bashrc
This will ensure that you run with bash instead of sh, and that any settings like PATH you've configured in your init files will be set.
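Putting those pieces together, a hedged sketch of a cron-friendly version of the watchdog might look like this (the Ruby path, lib directory, and log names are placeholders; substitute the output of which ruby and your real paths):
#!/bin/bash
# load the user's environment so PATH and related settings exist under cron
source ~/.bash_profile
if ps -ef | grep -v grep | grep -q adpc.rb ; then
    exit 0
fi
NOW=$(date +"%m-%d-%Y"+"%T" )
echo "$NOW - CRITIC: ADPC service is down! Trying to initialize..." >> che.log
cd /path/to/lib || exit 1                               # placeholder: cron does not start in your project directory
nohup /usr/local/bin/ruby adpc.rb >> adpc.log 2>&1 &    # placeholder ruby path and log file
exit 0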

How to source a csh script from inside a bash script

My default shell is bash. I have set some environment variables in my .bashrc file.
I installed a program which uses a .cshrc file. It contains the paths to several C shell scripts.
When I run the following commands in the shell window, it works perfectly:
exec csh
source .cshrc
exec bash
I have tried to put these commands in a bash script, but unfortunately it didn't work.
Is there another way to write a script that gets the same result as running the commands from a shell window?
I hope my question is now clear.
Many thanks for any help.
WARNING: don't put the following script in your .bashrc; it will reload bash and so reload .bashrc again and again (stoppable with C-c anyway).
Preferably use this script in your kit/CDS startup script (Cadence, presumably).
WARNING 2: if anything in your file2source fails, the whole 'trick' stops.
Call this script cshWrapper.csh:
#! /bin/csh
# to launch using
# exec cshWrapper.csh file2source.sh
source $1
exec $SHELL -i
and launch it using
exec ./cshWrapper.csh file2source.sh
It will launch csh, source your file and come back to the same parent bash shell.
Example :
$> ps
PID TTY TIME CMD
7065 pts/0 00:00:02 bash
$>exec ./cshWrapper.csh toggle.csh
file sourced
1
$> echo $$
7065
where in my case I use the file toggle.csh:
#! /bin/csh
# source ./toggle.csh
if (! $?TOGGLE) then
    setenv TOGGLE 0
endif
if ($?TOGGLE) then
    echo 'file sourced'
    if ($TOGGLE == 0) then
        setenv TOGGLE 1
    else
        setenv TOGGLE 0
    endif
endif
echo $TOGGLE
Hope it helps
New proposal, since I faced another problem with exec.
exec kills whatever remains in the script, except if you force a fork by using a pipe after it (exec script | cat). In that case, if the sourced file sets environment variables, they are not propagated back to the calling script, which is not what we want. The only solution I found is to use 3 files (let's call them, for the example, main.bash, which calls first.cshrc and second.sh).
#! /bin/bash
#_main.bash_
exec /bin/csh -c "source /path_to_file/cshrc; exec /bin/bash -i -c /path_to_file/second.sh"
# after exec nothing remains (like Attila the Hun)
# the rest of the script is in 'second.sh'
In this manner, I can, with a single script call, source an old cshrc design kit, still run some bash commands afterwards, and finally launch the main program in bash (say, virtuoso).
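For completeness, a hypothetical second.sh could be as small as this; it runs in the bash process spawned by main.bash, so it sees any environment variables exported by the sourced cshrc:
#! /bin/bash
#_second.sh_ (hypothetical)
# any bash commands to run after the csh environment has been loaded
echo "TOGGLE is now: $TOGGLE"
# finally launch the main program, e.g. virtuoso
virtuoso &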
