Piped wsadmin can't run scripts with flow control, because in that mode a newline separates commands.
A simple example from http://www-01.ibm.com/support/knowledgecenter/SSEQTP_7.0.0/com.ibm.websphere.base.iseries.doc/info/iseries/ae/cxml_jacl.html?lang=en:
set numbers {1 3 5 7 11 13}
foreach num $numbers {
puts $num
}
output:
[wasuser@oktest-prod-app-2 ~]$ ${WC_WSADMIN:?} -f test.jacl
WASX7209I: Connected to process "dmgr" on node EmProdDmgrNode using SOAP connector; The type of process is: DeploymentManager
1
3
5
7
11
13
[wasuser@oktest-prod-app-2 ~]$ ${WC_WSADMIN:?} <test.jacl
WASX7209I: Connected to process "dmgr" on node EmProdDmgrNode using SOAP connector; The type of process is: DeploymentManager
WASX7029I: For help, enter: "$Help help"
wsadmin>1 3 5 7 11 13
wsadmin>WASX7015E: Exception running command: "foreach num $numbers {"; exception information:
com.ibm.bsf.BSFException: error while eval'ing Jacl expression:
wsadmin>WASX7015E: Exception running command: "puts $num"; exception information:
com.ibm.bsf.BSFException: error while eval'ing Jacl expression:
can't read "num": no such variable
while executing
"puts $num"
wsadmin>WASX7015E: Exception running command: "}"; exception information:
com.ibm.bsf.BSFException: error while eval'ing Jacl expression:
invalid command name "}"
while executing
"}"
My script is generated. Besides storing it in a temporary file, is there a workaround? I know it's possible to do this:
foreach num $numbers { puts $num }
but what if there must be more than one command in the block?
Use a semicolon:
foreach num $numbers { puts $num; puts $num }
But you're probably better off writing the script to a temporary file.
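If the script is generated anyway, writing it out first costs only a couple of extra lines. A sketch, assuming the generator writes the Jacl to stdout (generate_jacl is a stand-in for whatever actually produces your script, and ${WC_WSADMIN:?} is the wrapper used above):
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
generate_jacl > "$tmp"          # stand-in for however the Jacl script is produced
${WC_WSADMIN:?} -f "$tmp"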
I have the following command
ads2 cls create
This command can return one of two outputs: a reasonable one that looks like this:
kernel with pid 7148 (port 9011) killed
kernel with pid 9360 (port 9011) killed
probing service daemon @ http://fdt-c-vm-0093.fdtech.intern:9010
starting kernel FDT-C-VM-0093 @ http://fdt-c-yy-0093.ssbt.intern:9011 name=FDT-C-VM-0093 max_consec_timeouts=10 clustermode=Standard hostname=FDT-C-VM-0093 framerate=20000 schedmode=Standard rtaddr=fdt-c-vm-0093.fdtech.ssbt tickrole=Local tickmaster=local max_total_timeouts=1000
kernel FDT-C-VM-0093 running
probing service daemon @ http://172.16.xx.xx:9010
starting kernel FDT-C-AGX-0004 @ http://172.16.xx.xx:9011 name=FDT-C-AGX-0004 max_consec_timeouts=10 clustermode=Standard hostname=FDT-C-AGX-0004 framerate=20000 schedmode=Standard rtaddr=172.16.xx.xx tickrole=Local tickmaster=local max_total_timeouts=1000
kernel Fxx-x-xxx-xxx4 running
>>> start cluster establish ...
>>> cluster established ...
nodes {
node {
name = "FDT-C-VM-xxxx";
address = "http://fxx-x-xx-0093.xxx.intern:xxxx/";
state = "3";
}
node {
name = "xxx-x-xxx-xxx";
address = "http://1xx.16.xx.xx:9011/";
state = "3";
}
}
and an unreasonable one that would be:
kernel with pid 8588 (port 9011) killed
failed to probe service daemon @ http://xxx-c-agx-0002.xxxx.intern:90xx
In both cases, I'm passing this output to awk in order to check the state of the nodes when a reasonable output is returned; otherwise it should exit the whole script (line 28).
ads2 cls create | awk -F [\"] ' BEGIN{code=1} # Set the field delimiter to a double quote
/^>>> cluster established .../ {
strt=1 # If the line starts with ">>> cluster established ...", set a variable strt to 1
}
strt!=1 {
next # If strt is not equal to 1, skip to the next line
}
$1 ~ "name" {
cnt++; # If the first field contains name, increment a cnt variable
nam[cnt]=$2 # Use the cnt variable as the index of an array called nam with the second field the value
}
$1 ~ "state" {
stat[cnt]=$2; # When the first field contains "state", set up another array called stat
print "Node "nam[cnt]" has state "$2 # Print the node name as well as the state
}
END {
if (stat[1]=="3" && stat[2]=="3") {
print "\033[32m" "Success" "\033[37m" # At the end of processing, the array is used to determine whether there is a success of failure.
}
28 else {
29 print "\033[31m" "Failed. Check Nodes in devices.dev file" "\033[37m"
30 exit code
}
}'
some other commands...
Note that this code block is a part of a bash script.
All I'm trying to do is stop the whole script (the commands that follow) from continuing to execute once it reaches line 29, where the exit at line 30 should actually do the job. However, it's not working: it does print the statement Failed. Check Nodes in devices.dev file, but it then continues executing the next commands, while I expect the script to stop because the exit command at line 30 should also have been executed.
I suspect your subject, Stop a bash script from inside an awk command, is what's getting you downvotes: trying to control what the shell that called awk does from inside the awk script is something you can't and shouldn't try to do. It would be a bad case of Inversion of Control, like calling a function in C to do something and having that function decide to exit the whole program instead of just returning a failure status, so the calling code can decide what to do upon that failure (e.g. perform recovery actions and then call that function again).
You seem to be confusing exiting your awk script with exiting your shell script. If you want to exit your shell script when the awk script exits with a failure status then you need to write the shell code to tell the shell to do so, e.g.:
whatever | awk 'script' || exit 1
or to get fancy about it:
whatever | awk 'script' || { ret="$?"; printf 'awk exited with status %d\n' "$ret" >&2; exit "$ret"; }
For example:
$ cat tst.sh
#!/usr/bin/env bash
date | awk '{exit 1}' || { ret="$?"; printf 'awk exited with status %d\n' "$ret" >&2; exit 1; }
echo "we should not get here"
$ ./tst.sh
awk exited with status 1
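Applied to the pipeline in the question, that might look something like this (a sketch; check_nodes.awk is a hypothetical file holding the awk program shown in the question, which already does exit code in its failure branch, so awk's exit status is 1 exactly when the nodes are not both in state 3):
ads2 cls create | awk -F '"' -f check_nodes.awk || exit 1
# Only reached when awk exited 0, i.e. the cluster came up and both nodes report state 3.
some other commands...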
This is my script:
#!/bin/bash
JOB_NUM=4
function checkJobNumber() {
if (( $JOB_NUM < 1 || $JOB_NUM > 16 )); then
echo "pass"
fi
}
...
checkJobNumber
...
When I try to launch the script I get the message:
./script.sh line 49: ((: < 1 || > 16 : syntax error: operand expected (error token is "< 1 || > 16 ")
(Please notice the spaces in the error message)
I really don't understand what the problem is. If I do the evaluation manually at the command line, it works.
Also, I tried different evaluations like if [[ "$JOB_NUM" -lt 1 -o "$JOB_NUM" gt 16 ]];... still no success.
UPDATE: As suggested I made few more attempts in the rest of the code outside the function call, and I found the problem.
My variables declaration was actually indented this way:
JOB_NUM= 4
THREAD_NUM= 8
.....
VERY_LONG_VAR_NAME= 3
and apparently this hoses the evaluation. BAH! It always worked for me before, so why doesn't it now?
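(To see what bash actually does with that spacing: JOB_NUM= 4 assigns an empty string to JOB_NUM and then tries to run 4 as a command, so the later arithmetic sees an empty value. A quick demonstration in an interactive shell, assuming no command named 4 exists on the PATH:)
$ JOB_NUM= 4
bash: 4: command not found
$ echo ">$JOB_NUM<"
><
$ (( $JOB_NUM < 1 || $JOB_NUM > 16 ))
bash: ((: < 1 || > 16 : syntax error: operand expected (error token is "< 1 || > 16 ")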
If I delete the white spaces, the evaluation works:
JOB_NUM=4
THREAD_NUM=8
....
VERY_LONG_VAR_NAME=3
OK I officially hate bash . . . sigh :(
This will work fine:
#!/bin/bash
JOB_NUM=7
function checkJobNumber() {
if [ $JOB_NUM -lt 0 ] || [ $JOB_NUM -gt 16 ] ; then
echo "true"
else
echo "false"
fi
}
checkJobNumber
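For what it's worth, once the assignment spacing is fixed, both test styles work; the [[ ... ]] attempt in the question failed because -o isn't valid inside [[ ]] (use ||) and -gt was missing its dash. A quick sketch:
JOB_NUM=4
if (( JOB_NUM < 1 || JOB_NUM > 16 )); then echo "out of range"; fi
if [[ "$JOB_NUM" -lt 1 || "$JOB_NUM" -gt 16 ]]; then echo "out of range"; fi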
And if you want to catch unset variables during testing, you can add this to your bash script:
set -u
This will generate a message like "JOB_NUM: unbound variable" whenever a variable is used before it has been set.
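A minimal illustration (assuming JOB_NUM is never assigned before the echo):
#!/bin/bash
set -u
echo "$JOB_NUM"    # bash aborts here with: JOB_NUM: unbound variable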
In the process of coming up with a way to catch errors in my Bash scripts, I've been experimenting with "set -e", "set -E", and the "trap" command. Along the way, I've discovered some strange behavior in how $LINENO is evaluated in the context of functions. First, here's a stripped-down version of how I'm trying to log errors:
#!/bin/bash
set -E
trap 'echo Failed on line: $LINENO at command: $BASH_COMMAND && exit $?' ERR
Now, the behavior is different based on where the failure occurs. For example, if I follow the above with:
echo "Should fail at: $((LINENO + 1))"
false
I get the following output:
Should fail at: 6
Failed on line: 6 at command: false
Everything is as expected. Line 6 is the line containing the single command "false". But if I wrap up my failing command in a function and call it like this:
function failure {
echo "Should fail at $((LINENO + 1))"
false
}
failure
Then I get the following output:
Should fail at 7
Failed on line: 5 at command: false
As you can see, $BASH_COMMAND contains the correct failing command: "false", but $LINENO is reporting the first line of the "failure" function definition as the current command. That makes no sense to me. Is there a way to get the line number of the line referenced in $BASH_COMMAND?
It's possible this behavior is specific to older versions of Bash. I'm stuck on 3.2.51 for the time being. If the behavior has changed in later releases, it would still be nice to know if there's a workaround to get the value I want on 3.2.51.
EDIT: I'm afraid some people are confused because I broke up my example into chunks. Let me try to clarify what I have, what I'm getting, and what I want.
This is my script:
#!/bin/bash
set -E
function handle_error {
local retval=$?
local line=$1
echo "Failed at $line: $BASH_COMMAND"
exit $retval
}
trap 'handle_error $LINENO' ERR
function fail {
echo "I expect the next line to be the failing line: $((LINENO + 1))"
command_that_fails
}
fail
Now, what I expect is the following output:
I expect the next line to be the failing line: 14
Failed at 14: command_that_fails
Now, what I get is the following output:
I expect the next line to be the failing line: 14
Failed at 12: command_that_fails
BUT line 12 is not command_that_fails. Line 12 is function fail {, which is somewhat less helpful. I have also examined the ${BASH_LINENO[@]} array, and it does not have an entry for line 14.
For bash releases prior to 4.1, a special level of awful, hacky, performance-killing hell is needed to work around an issue wherein, on errors, the system jumps back to the function definition point before invoking an error handler.
#!/bin/bash
set -E
set -o functrace
function handle_error {
local retval=$?
local line=${last_lineno:-$1}
echo "Failed at $line: $BASH_COMMAND"
echo "Trace: " "$#"
exit $retval
}
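# Bash < 4.1 hands the ERR trap a $LINENO that points back at the function
# definition, so on those versions use a DEBUG trap to remember the line
# number of the previously executed command; handle_error then prefers that
# remembered value (last_lineno) over the $LINENO it is passed.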
if (( ${BASH_VERSION%%.*} <= 3 )) || [[ ${BASH_VERSION%.*} = 4.0 ]]; then
trap '[[ $FUNCNAME = handle_error ]] || { last_lineno=$real_lineno; real_lineno=$LINENO; }' DEBUG
fi
trap 'handle_error $LINENO ${BASH_LINENO[@]}' ERR
fail() {
echo "I expect the next line to be the failing line: $((LINENO + 1))"
command_that_fails
}
fail
BASH_LINENO is an array. You can refer to different values in it: ${BASH_LINENO[1]}, ${BASH_LINENO[2]}, etc. to back up the stack. (Positions in this array line up with those in the BASH_SOURCE array, if you want to get fancy and actually print a stack trace).
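For instance, a rough sketch of such a trace (just an illustration of how the three arrays line up; on bash releases before 4.1 the innermost line number is still subject to the quirk discussed above):
print_stack() {
    local i
    # FUNCNAME[i] was called from file BASH_SOURCE[i+1] at line BASH_LINENO[i].
    for (( i = 0; i < ${#FUNCNAME[@]} - 1; i++ )); do
        echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i+1]}:${BASH_LINENO[$i]})" >&2
    done
}
trap 'echo "Error near line $LINENO" >&2; print_stack' ERR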
Even better, though, you can just inject the correct line number in your trap:
failure() {
local lineno=$1
echo "Failed at $lineno"
}
trap 'failure ${LINENO}' ERR
You might also find my prior answer at https://stackoverflow.com/a/185900/14122 (with a more complete error-handling example) interesting.
That behaviour is very reasonable.
The whole picture of the call stack provides comprehensive information whenever an error occurs. Your example demonstrates a good error message; you can see where the error actually occurred and which line triggered the function, etc.
If the interpreter/compiler can't precisely indicate where the error actually occurs, you could be more easily confused.
I recently discovered sys.process package in Scala, and was amused by its power.
But when I try to combine it with bash pipes and backticks, I get stuck.
This obviously doesn't work:
scala> "echo `date`" !!
res0: String = "
"`date`
"
I tried to use the bash executable to get the desired behavior:
scala> "bash -e echo `date`" !!
/bin/echo: /bin/echo: cannot execute binary file
java.lang.RuntimeException: Nonzero exit value: 126
What am I doing wrong?
Edit:
scala> "bash -ic 'echo `date`'" !!
`date`': unexpected EOF while looking for matching `''
`date`': syntax error: unexpected end of file
java.lang.RuntimeException: Nonzero exit value: 1
You're doing multiple things wrong, actually. You should be using the -c option of bash, and you should be using a Seq[String] with each parameter to bash in its own String, or the Scala library will just split the String at every space character. (This is why Rex Kerr's solution doesn't work.)
scala> import sys.process.stringSeqToProcess
import sys.process.stringSeqToProcess
scala> Seq("bash", "-c", "echo `date`")!!
res20: String =
"Sun Dec 4 16:40:04 CET 2011
"
This program I use has its own variables that are set when you run it, so I want to set those variables, then grep the output and store it in a variable. However, I don't know the correct way to go about this; the idea I have doesn't work. The focus is on lines 7 through 14.
1 #!/usr/local/bin/bash
2 source /home/gempak/NAWIPS/Gemenviron.profile
3 FILENAME="$(date -u '+%Y%m%d')_sao.gem"
4 SFFILE="$GEMDATA/surface/$FILENAME"
5 echo -n "Enter the station ID: "
6 read -e STATION
7 OUTPUT=$(sflist << EOF
8 SFFILE = $SFFILE
9 AREA = @$STATION
10 DATTIM = all
11 SFPARM = TMPF;DWPF
12 run
13 exit
14 EOF)
15 echo $OUTPUT
But I get this:
./listweather: line 7: unexpected EOF while looking for matching `)'
./listweather: line 16: syntax error: unexpected end of file
Putting together everyone's answers, I came across a working solution myself: the here-document delimiter has to be on a line by itself, with the closing parenthesis on the following line. This code works for me:
#!/usr/local/bin/bash
source /home/gempak/NAWIPS/Gemenviron.profile
FILENAME="$(date -u '+%Y%m%d')_sao.gem"
SFFILE="$GEMDATA/surface/$FILENAME"
echo -n "Enter the station ID: "
read -e STATION
OUTPUT=$(sflist << EOF
SFFILE = $SFFILE
AREA = @$STATION
DATTIM = ALL
SFPARM = TMPF;DWPF
run
exit
EOF
)
echo $OUTPUT | grep $STATION
Thanks everyone!
I'd put the program you want to run in a separate .sh script file, and then run that script from your first file, passing the values you need as command line arguments. That way you can test them separately.
You could also do it in a function, but I like the modularity of the second script. I don't understand exactly what you are trying to do above, but something like:
runsflist.sh:
#!/bin/bash
source /home/gempak/NAWIPS/Gemenviron.profile
STATION=$1
FILENAME="$(date -u '+%Y%m%d')_sao.gem"
SFFILE="$GEMDATA/surface/$FILENAME"
sflist << EOF
SFFILE = $SFFILE
AREA = @$STATION
DATTIM = all
SFPARM = TMPF;DWPF
run
exit
EOF
main.sh:
#!/bin/bash
echo -n "Enter the station ID: "
read -e STATION
OUTPUT=`./runsflist.sh "$STATION"`
echo $OUTPUT
If sflist needs interaction, I'd try something like this:
OUTPUT=$(
( echo "SFFILE = $SFFILE"
echo "AREA = @$STATION"
echo "DATTIM = all"
echo "SFPARM = TMPF;DWPF"    # quoted so the ; is not treated as a command separator
echo run
cat
) | sflist)
Unfortunately, you have to type exit as part of the interaction.
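If typing exit at the end is a nuisance, one possible tweak (a sketch, not tested against sflist itself) is to send exit after the interactive part, so pressing Ctrl-D is enough:
OUTPUT=$(
( echo "SFFILE = $SFFILE"
echo "AREA = @$STATION"
echo "DATTIM = all"
echo "SFPARM = TMPF;DWPF"
echo run
cat            # forwards whatever you type until you press Ctrl-D
echo exit      # sent automatically once your input ends
) | sflist)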