So I am writing a script to call a process 365 times, running in 10 batches. This is what I wrote, but there are multiple issues:
1. The log message is not getting written to the log file; I do see the error message in the err file.
2. I keep getting a "Command not found" error from the script for the line that invokes the process.
3. Even if the command doesn't succeed, it still doesn't print "fail" but prints "success".
#!/bin/bash
set -m
FAIL=0
for i in {1..10}
do
    waitPIDS=()
    j=$i
    while [ $j -lt 366 ]; do
        exec 1>logfile
        exec 2>errorfile
        `process $j &`
        waitPIDS[${#waitPIDS[@]}]=$!
        j=$[$j+1]
    done
    for jpid in "${waitPIDS[@]}"
    do
        echo $jpid
        wait $jpid
        if [[ $? != 0 ]] ; then
            echo "fail"
        else
            echo "success"
        fi
    done
done
What is wrong with it?
Thanks!
At the very least, this line:
`process $j &`
shouldn't have any backticks in it. You probably just want:
process $j &
Besides that, you're overwriting your log files instead of appending to them; is that intended?
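For what it's worth, here is a minimal sketch with those fixes applied: no backticks, the redirections opened once in append mode before the loop, and wait's exit status checked directly. process stands in for your real command, and the stride of 10 assumes you meant ten interleaved batches that together cover 1..365:
#!/bin/bash
# open the log files once, in append mode, instead of
# truncating them on every iteration of the inner loop
exec 1>>logfile
exec 2>>errorfile
for i in {1..10}; do
    waitPIDS=()
    j=$i
    while [ "$j" -le 365 ]; do
        process "$j" &          # no backticks: just launch in the background
        waitPIDS+=( "$!" )
        j=$(( j + 10 ))         # stride 10 so the batches don't overlap
    done
    for jpid in "${waitPIDS[@]}"; do
        if wait "$jpid"; then   # wait returns the child's exit status
            echo "success"
        else
            echo "fail"
        fi
    done
done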
There's an issue I am facing with the following script.
Output does not seem to be suppressed, and I keep getting hundreds of lines printed to the terminal.
#!/bin/sh
n=0
until [[ $n -ge 1000 ]]
do
    /usr/ud/runupdate $1 &>/dev/null && break
    n=$[$n+1]
    sleep 2
done
Are there any other things I can do to suppress output?
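For what it's worth, one likely culprit is the shebang: &>/dev/null is a bash-ism, and under a POSIX /bin/sh it parses as "run the command in the background, then an empty redirection to /dev/null", so nothing is suppressed (the same goes for [[ ]] and $[ ]; the portable spellings are [ ] and $(( ))). A sketch of the portable form:
#!/bin/sh
n=0
until [ "$n" -ge 1000 ]
do
    # POSIX-portable: send stdout to /dev/null, then point stderr at stdout
    /usr/ud/runupdate "$1" >/dev/null 2>&1 && break
    n=$((n+1))
    sleep 2
done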
I have a bash script that loops through a folder and processes all *.hql files. Sometimes one of the Hive scripts fails (syntax, resource constraints, etc.), but instead of the bash script failing, it continues on to the next .hql file.
Is there any way I can stop the bash script from processing the remaining files? Below is my sample bash:
for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} &
    if [ $j -le 5 ]; then
        j=$(( j+1 ))
    else
        wait
        j=0
    fi
done
I would check the exit status of the previous command and invoke the exit command to break out of the loop:
(( $? != 0 )) && exit 1
Introduce the above line after the hive command and it should do the trick.
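Note that this check only works if hive runs in the foreground: with the trailing & in your loop, $? only tells you whether the background job was launched. In context, that would look something like:
for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    # run hive in the foreground so $? is hive's own exit status
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i}
    (( $? != 0 )) && exit 1
done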
Add
set -e
to the top of your script.
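Note that set -e only helps here if hive runs in the foreground: a command launched with & returns immediately with status 0, and any failure only surfaces later, when you wait for the job.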
Use this template for running parallel processes and waiting for their completion. Add your own date, layer, hiveconf_all, and other variables:
#!/bin/bash
set -e
#Run parallel processes and write their logs
log_dir=/tmp/my_script_logs
mkdir -p "${log_dir}"
for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    #Run hive in parallel and redirect to the log file
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} 2>&1 | tee "${log_dir}/$(basename "${i}").log" &
done
#Now wait for all processes to complete
FAILED=0
for job in `jobs -p`
do
    echo "job=$job"
    wait $job || let "FAILED+=1"
done
if [ "$FAILED" != "0" ]; then
    echo "Execution FAILED! ($FAILED)"
    #Do something here, log or send message, etc
    exit 1
fi
#All processes are completed successfully!
#Do something here
echo "Done successfully"
Then you will be able to inspect each process's log individually.
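As a side note, if your bash is 4.3 or newer, you can collect results in completion order instead of launch order with wait -n; a sketch of just the collection loop, assuming that bash version:
#Requires bash 4.3+ for wait -n
FAILED=0
while [ -n "$(jobs -p)" ]
do
    #wait -n reaps whichever background job finishes next
    wait -n || FAILED=$((FAILED+1))
done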
I wrote the following to be used with Jenkins to kick off my Selenium tests. Since they are executed as background processes, Jenkins believes they haven't failed even when they have. As you can see, my attempt at making this work was futile. Any ideas? I thought about piping the output into a file and grepping for keywords, but then I realized I don't know how to surface an error if a grep for a string returns true.
#!/bin/bash
FAIL=0
echo "starting"
ruby /root/selenium-tests/test/test1.rb &
ruby /root/selenium-tests/test/test2.rb &
ruby /root/selenium-tests/test/test3.rb &
#wait
for job in `jobs -p`
do
    echo $job
    wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ]; then
    echo "PASS"
else
    echo "FAIL! ($FAIL)"
fi
Jenkins most likely looks at the process exit code to determine whether the tests failed. This is what all Unix tools do.
There are multiple ways of doing this. If your test files output something like "FAIL" instead of properly returning an exit code, you can do:
#!/bin/bash
(
    ruby /root/selenium-tests/test/test1.rb &
    ruby /root/selenium-tests/test/test2.rb &
    ruby /root/selenium-tests/test/test3.rb &
    wait
) > log
! grep "FAIL" log
exit $? # <- happens implicitly at the end of the script, and can be left out
In this case, grep finding "FAIL" will cause the script to fail, and Jenkins to detect the failure.
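If you don't want the matching lines echoed into the Jenkins console, grep -q "FAIL" log behaves the same but stays quiet; only the exit status matters here.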
The more correct way, if your scripts return proper exit codes, is your method, but without relying on job control (which is turned off by default in non-interactive shells) and with correct exit codes returned:
for test in /root/selenium-tests/test/test*.rb
do
    ruby "$test" &
    pids+=( $! )
done
for pid in "${pids[@]}"
do
    if wait $pid
    then
        echo "$pid succeeded"
    else
        echo "$pid failed"
        (( failures++ ))
    fi
done
if [[ $failures -gt 0 ]]
then
    echo "FAIL: $failures failed tests"
    exit 1 # return failure
else
    echo "PASS!"
    exit 0 # return success
fi
Good day. I have a series of commands that I want to execute via a function so that I can get the exit code and print console output accordingly. With that being said, I have two issues here:
1) I can't seem to redirect stderr to /dev/null.
2) The first echo line is not displayed until $1 has executed. It's not really noticeable unless I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
    echo -ne "Attempting... $2"
    exec $1
    if [ "$?" -ne 0 ]; then
        echo -ne " ...Failure\n\r"
    else
        echo -ne " ...Success\n\r"
    fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
This is how you redirect stderr to /dev/null:
command 2> /dev/null
e.g.
ls -l 2> /dev/null
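To discard standard output as well, redirect both streams:
command > /dev/null 2>&1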
Your second part (the ordering of the echo) is due to what you have at the call site: $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist).
The first echo line is displayed later because it actually executes second: the command substitution $(...) runs the command before saveChange is even called. Try the following:
#!/bin/bash
function saveChange {
    echo -ne "Attempting... $2"
    err=$($1 2>&1)
    if [ -z "$err" ]; then
        echo -ne " ...Success\n\r"
    else
        echo -ne " ...Failure\n\r"
        exit 1
    fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: Noticed that launchctl does not actually set $? on failure, so I capture STDERR to detect the error instead.
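One caveat with the stderr-based check: a command that prints warnings to stderr but still succeeds will be reported as a failure. For commands that do set a proper exit status, testing it directly is more robust, for example:
if $1 2>/dev/null; then
    echo -ne " ...Success\n\r"
else
    echo -ne " ...Failure\n\r"
    exit 1
fi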
I have a shell script which executes a PHP script (a worker for beanstalkd).
Here is the script:
#!/bin/bash
if [ $# -eq 0 ]
then
    echo "You need to specify an argument"
    exit 0;
fi
CMD="/var/webserver/user/bin/console $@";
echo "$CMD";
nice $CMD;
ERR=$?
## Possibilities
# 97 - planned pause/restart
# 98 - planned restart
# 99 - planned stop, exit.
# 0 - unplanned restart (as returned by "exit;")
# - Anything else is also unplanned paused/restart
if [ $ERR -eq 97 ]
then
    # a planned pause, then restart
    echo "97: PLANNED_PAUSE - wait 1";
    sleep 1;
    exec $0 $@;
fi
if [ $ERR -eq 98 ]
then
    # a planned restart - instantly
    echo "98: PLANNED_RESTART";
    exec $0 $@;
fi
if [ $ERR -eq 99 ]
then
    # planned complete exit
    echo "99: PLANNED_SHUTDOWN";
    exit 0;
fi
If I execute the script manually, like this:
[user@host]$ ./workers.sh
It's working perfectly, I can see the output of my PHP script.
But if I detach the process from the console, like this:
[user@host]$ ./workers.sh &
It's not working anymore. However, I can see the process in the background.
[user@host]$ jobs
[1]+ Stopped ./workers.sh email
The queue server is filling up with jobs, and none of them are processed until I bring the detached script back into the foreground, like this:
[user@host]$ fg
At that moment I see all the jobs being processed by my PHP script. I have no idea why this is happening. Could you help, please?
Thanks, Maxime
EDIT:
I've created a shell script to run x workers, and I'm sharing it here. I'm not sure it's the best way to do it, but it's working well at the moment:
#!/bin/bash
WORKER_PATH="/var/webserver/user/workers.sh"
declare -A Queue
Queue[email]=2
Queue[process-images]=5
for key in "${!Queue[#]}"
do
echo "Launching ${Queue[$key]} instance(s) of $key Worker..."
CMD="$WORKER_PATH $key"
for (( l=1; l<=${Queue[$key]}; l++ ))
do
INSTANCE="$CMD $l"
echo "lnch instance $INSTANCE"
nice $INSTANCE > /dev/null 2> /dev/null &
done
done
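As a side note, if these workers need to survive the shell that launched them (for example, after you log out), it may be worth starting each instance with nohup, e.g. nohup nice $INSTANCE > /dev/null 2>&1 & — whether you need this depends on how the launcher itself is invoked.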
Background processes are stopped if they try to read from the terminal, and, when stty tostop is in effect, also when they try to write to it, which your script does with its echo statements. You just need to redirect standard output to a file when you put it in the background.
[user@host]$ ./workers.sh > workers.output 2> workers.error &
(I've redirected standard error as well, just to be safe.)