Accessing ERRORLEVEL from a bash script

I have an application that only works properly when called from a Windows command prompt. Something to do with the input/output streams.
So I can call it from a bash script by passing it as an argument to cmd.
cmd /c "badapp"
This works fine, but occasionally badapp fails with network problems and I get no feedback. Is there any way to check the ERRORLEVEL from the bash script, or to see the output from badapp on the terminal running the bash script?

Yes, $? is the variable that contains the error level.
Try echo $? for example.
Here is an example from Cygwin bash (I'm guessing you are using Cygwin, since you invoke the Windows cmd in your example):
susam@nifty /cygdrive/c/Documents and Settings/susam/Desktop
$ cmd /c "badapp"
'badapp' is not recognized as an internal or external command,
operable program or batch file.
susam@nifty /cygdrive/c/Documents and Settings/susam/Desktop
$ if [ $? -eq 0 ]
> then
> echo "good"
> else
> echo "bad"
> fi
bad
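In a script you would capture the status (and, if needed, the output) right after the call. A minimal sketch, with badapp standing in for the real program; note that command substitution replaces the console with a pipe, which may matter for an app that is sensitive to its I/O streams:
output=$(cmd /c "badapp" 2>&1)   # capture stdout and stderr
status=$?                        # cmd's exit code, i.e. badapp's ERRORLEVEL
echo "$output"                   # show badapp's output on this terminal
if [ "$status" -ne 0 ]; then
    echo "badapp failed with exit code $status" >&2
fi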

Related

$? from bash script command executed by TCL (open pipe) on windows returns wrong value

I've got a Tcl script with two ways of executing a bash script:
#exec bash ./run.sh
open "|bash ./run.sh r"
The bash script is shown below:
#!/bin/bash
ls
if [ "$?" != "0" ]; then
echo "ERROR: Status failed!" > status
else
echo "Everything is OK!" > status
fi
I'm using tclsh for Windows with bash from Git Bash. When I use:
exec bash ./run.sh
I get this in the status file:
Everything is OK!
whereas with:
open "|bash ./run.sh" r
I get:
ERROR: Status failed!
Is there any way to correctly detect the exit code when the script is run through a Tcl pipe?
You don't describe whether you get different results out of the ls part of the script. That matters; the ls command is most certainly capable of changing its behaviour according to the environment in which it is invoked. This matters because Tcl executes subprocesses (on Windows) directly using the CreateProcess() system call, rather than the various wrapped versions that Cygwin and git bash use. Other possibilities are that you're launching the script in a different directory and so on.
However, in general we'd expect a script to behave very similarly when launched via exec or via open |… r as they share a common core of functionality. The only differences are to do with how output and termination are waited for.
If you create a subprocess pipeline, by default you won't get to find out about errors from it until you close the pipeline. exec generates any errors “immediately” because it doesn't return control to you until the subprocess has terminated and all output has been read.
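The practical consequence: with open you must check the result of close. A minimal Tcl sketch, using run.sh from the question; a nonzero exit status (or stderr output) surfaces as an error from close:
set chan [open "|bash ./run.sh" r]
set output [read $chan]
if {[catch {close $chan} err]} {
    puts "run.sh failed: $err"
} else {
    puts "run.sh succeeded"
}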

How can I call my script when a command is not found in bash (cmder)?

If a command is not found in bash (cmder), I need to call my batch or shell file and execute custom code to print a result instead of the default:
bash: foo: command not found
Is there any setting in cmder, or any other possible way, to achieve this? Is there any other console emulator that can achieve it?
You can check whether a command is callable with:
if ! type COMMAND &>/dev/null; then
# not callable - run your script here
fi
Or, after the call, check whether the command was not found (exit code 127):
COMMAND
if [[ "$?" == 127 ]]; then
# command unknown - run your script here
fi
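Bash itself also has a hook for exactly this case: if a function named command_not_found_handle is defined, bash calls it instead of printing the default error whenever a command lookup fails. A minimal sketch to put in your ~/.bashrc (assuming your cmder session runs bash); the script path is a placeholder for your own batch or shell file:
command_not_found_handle () {
    # $1 is the missing command; any remaining arguments follow it
    echo "custom message: '$1' was not found" >&2
    # bash /path/to/your/script.sh "$@"   # placeholder: call your own script here
    return 127
}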

Error codes seemingly ignored by '&&' on Windows command prompt

I have some Batch scripts I use for automating application build processes, most of which involve chaining commands together using the && operator. Admittedly, I'm more experienced with Linux, but based on that experience some_command && other_command should result in other_command being run iff some_command returns an exit code of 0. This answer and this answer seem to agree with that. However, this appears not to be the case in Windows cmd.exe: every command in the chain runs regardless of the error code of the previous one.
I decided to make a simple test for this to convince myself I wasn't going insane. Consider this test.bat, which returns an exit code of 1:
@echo off
EXIT /B 1
Running test.bat && echo This shouldn't print prints 'This shouldn't print'. But since the exit code is clearly 1, echo should not be called. I've checked that the error code is actually 1 using the %errorlevel% variable; the values come out as expected (0 before I run the script, 1 after).
On Linux I tried the same thing. Here's test.sh:
#!/bin/bash
exit 1
Running ./test.sh && echo "This shouldn't print" gives no output, exactly what I expected.
What's going on here?
(Note: OS is Windows 7 Enterprise)
You need to use call to run the batch script, like this:
call test.bat && echo This shouldn't print
Without call, the && operator does not receive the ErrorLevel returned by the batch script.
When you run a batch file from within another one, you need to use call in order to return to the calling batch file; without call, execution terminates as soon as the called batch file has finished...:
call test.bat
echo This is going to be displayed.
...but:
test.bat
echo You will never see this!
When running test.bat is involved in a command line where multiple commands are combined (using the concatenation operator &, the conditional ones && and ||, or even a block of code within parentheses ()), all the commands following test.bat are executed even if call was not used. This is because the entire command line/block has already been parsed by the command interpreter.
However, when call is used, the ErrorLevel value returned by the batch file is received (which is 1 in our situation) and the following commands behave accordingly:
call test.bat & echo This is always printed.
echo And this is also always printed.
call test.bat && echo This is not printed.
call test.bat || echo But this is printed.
(
call test.bat
echo This is printed too.
echo And again this also.
)
call test.bat & if ErrorLevel 1 echo This is printed.
But without call you will get this...:
test.bat & echo This is printed.
echo But this is not!
...and...:
test.bat && echo Even this is printed!
echo But this is not!
...and...:
test.bat || echo But this is not printed!
echo And this is not either!
...and:
(
test.bat
echo This is printed.
echo And this as well.
)
It seems that the && and || operators receive an ErrorLevel of 0, even if ErrorLevel had already been set before test.bat is executed, strangely enough. The behaviour is similar when if ErrorLevel is used:
test.bat & if ErrorLevel 1 echo This is not printed!
...and...:
set = & rem This constitutes a syntax error.
test.bat & if ErrorLevel 1 echo This is still not printed!
Note that the commands behind test.bat execute after the batch script, even without call.
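Applied to the original build scripts, the fix is therefore to prefix every batch script in the chain with call (the script names here are placeholders):
call build.bat && call test.bat && call package.bat
Plain executables do not need call; only batch scripts invoked from another script or a chained command line do.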

Passing variables to sftp batch

I'm writing a script that needs to pass a variable to the sftp batch. I have been able to get some commands working based on other documentation I've searched out, but can't quite get to what I need.
The end goal is to work like a file-test operator on a remote server:
if [ -f "$a" ]; then :; else exit 0; fi
Ultimately, I want the script to continue running if the file exists (the no-op :), or exit 0 if it does NOT exist (not exit 1). The remote machine is a Windows server, not Linux.
Here's what I have:
NOTE: the variable I'm trying to pass, $source_dir, changes based on the input parameter of the script that calls this function. This and the ls wildcard are the tricky parts. I have been able to make it work when looking for a specific file, but not just "any" file.
source_dir=/this/directory/changes
RemoteCheck () {
    /bin/echo "cd $source_dir" > someBatch.txt
    /bin/echo "ls *" >> someBatch.txt
    /usr/bin/sftp -b someBatch.txt -oPort=${sftp_port} ${sftp_ip}
    exit_code=$?
    if [ $exit_code -eq 0 ]; then
        :
    else
        exit 0
    fi
}
There may be a better way to do this, but I have searched multiple forums and have not yet found a way to manipulate this.
Any help is appreciated, you gurus have always been very helpful!
You cannot test for the existence of an arbitrary file using just the exit code of OpenSSH sftp.
You can, however, redirect the sftp output to a file and parse it to see whether any files were listed.
Use the shell echo command to delimit the listing from the rest of the output, like:
!echo listing-start
ls
!echo listing-end
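Putting it together, here is a sketch of the rewritten function (file names follow the question; the leading - on -ls tells sftp not to abort the batch when the wildcard matches nothing, and stderr is kept out of the capture so error text cannot pollute the listing):
RemoteCheck () {
    {
        echo "cd $source_dir"
        echo "!echo listing-start"
        echo "-ls *"
        echo "!echo listing-end"
    } > someBatch.txt
    /usr/bin/sftp -b someBatch.txt -oPort="${sftp_port}" "${sftp_ip}" > sftp_out.txt
    # keep only the lines between the two markers; the echoed "sftp> ..." command
    # lines do not match because of the ^...$ anchors
    listing=$(sed -n '/^listing-start$/,/^listing-end$/p' sftp_out.txt | sed '1d;$d')
    if [ -z "$listing" ]; then
        exit 0    # no files in $source_dir: stop quietly, as required
    fi
}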

Cygwin .sh file run from Windows Task Scheduler

Having issues getting this shell script to run via Windows Task Scheduler.
#!/bin/bash
# Script to ping the VPN server for testing
RESULT=$(ping 192.168.1.252 | grep "Lost" | awk '{print $10}')
LOG=/home/admin/results.txt
if [ "$RESULT" -gt 0 ];then
echo "VPN 192.168.1.252 NOT pinging" >> $LOG
else
echo "VPN Online"
fi
When I run it in cygwin, it runs with no issue, but when I attempt to run it from command prompt, I get the following:
C:\cygwin64\bin>bash test.sh
test.sh: line 4: grep: command not found
test.sh: line 4: awk: command not found
test.sh: line 7: [: : integer expression expected
My question is, how do I get it to run with bash instead so that it actually knows the grep and awk commands?
In Windows Scheduler, I have Action: Start A Program
Details: C:\cygwin64\bin\bash.exe
Argument: test.sh
Start in: C:\cygwin64\bin
Am I missing something?
I figured it out.
In the Windows Task Scheduler, I had to pass:
Program/script: C:\cygwin64\bin\bash.exe
Add arguments: -c -l test.sh
Start in: C:\cygwin64\bin
A correction to what Jimmy found:
Add arguments: -c -l "c:/FileFolder/test.sh"
You don't need the Start in argument anymore.
For the longest time I was experiencing the same issue as the OP: command not found errors when trying to run a shell script from Task Scheduler or the Command Prompt, despite the fact that running the same script from a Cygwin terminal worked fine.
After some more research I eventually realised that the reason was that my usual Bash PATH from ~/.bash_profile wasn't being loaded, and that I needed to use Windows' Environment Variables window to add C:\cygwin64\bin to my PATH environment variable (system or user, it doesn't really matter). This directory contains common system executables like grep and awk, which is why Bash is unable to locate them until it is added to Windows' PATH.
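An alternative that avoids touching the Windows environment is to set PATH inside the script itself, so it works no matter how bash is launched. A minimal sketch, assuming a standard Cygwin install where /usr/bin maps to C:\cygwin64\bin:
#!/bin/bash
# make Cygwin's own tools (grep, awk, ...) findable even under Task Scheduler
export PATH=/usr/local/bin:/usr/bin:/bin:$PATH
The rest of the script can then call grep and awk as usual.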
