I am trying to run a long refresh script using shell nohup.
Script
#!/bin/bash
impala-shell -f Refresh.sql -i "landingarea"
But every time it hits an error it stops. I have to go into the script, fix the error, and re-run from the beginning. I'd like it to just run to the end so I can pick up the errors afterwards; is this possible?
Shell
nohup sh Refresh.sh &
cat nohup.out
Please use -c, which continues on query failure.
impala-shell -c -f Refresh.sql -i "landingarea"
If you do not want to capture the verbose/error messages, you can probably include the --quiet option:
impala-shell -c --quiet -f Refresh.sql -i "landingarea"
Please go through the documentation link below for more information.
http://www.cloudera.com/documentation/cdh/5-1-x/Impala/Installing-and-Using-Impala/ciiu_shell_options.html
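Putting it together with nohup, a minimal sketch (the refresh.out and refresh.err file names are just examples) that lets the whole file run to the end while keeping the error output for review afterwards:
nohup impala-shell -c -f Refresh.sql -i "landingarea" > refresh.out 2> refresh.err &
# once it has finished, review whatever failed
cat refresh.err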
Related
I have a batch file that contains this:
bash -c "shell/rsync_A.sh"
bash -c "shell/rsync_B.sh"
Each of the shell scripts look like this:
rsync_A.sh:
rsync --info=progress2 -rptz --delete -e "ssh -i /root/.ssh/[MY_CERT].pem" [MY_REMOTE_UBUNTU_ON_AWS]:[MY_REMOTE_FOLDER1] [MY_LOCAL_DESTINATION_FOLDER1]
rsync --info=progress2 -rptz --delete -e "ssh -i /root/.ssh/[MY_CERT].pem" [MY_REMOTE_UBUNTU_ON_AWS]:[MY_REMOTE_FOLDER2] [MY_LOCAL_DESTINATION_FOLDER2]
rsync_B.sh:
rsync --info=progress2 -rptz --delete -e "ssh -i /root/.ssh/[MY_CERT].pem" [MY_REMOTE_UBUNTU_ON_AWS]:[MY_REMOTE_FOLDER3] [MY_LOCAL_DESTINATION_FOLDER3]
The problem is that bash always, without fail, hangs when I run the batch file. The first rsync command always seems to run fine, the second always fails (whether inside the same sh file or a different one).
By "hangs" I mean that I see a blinking cursor but no bash prompt and there is no way to get out of it without restarting the entire system (lxssmanager hangs when attempting to restart).
Everything always runs 100% fine when I enter bash and run the shell scripts, but as soon as I get batch involved it breaks.
I have no idea why or how... but the solution was to uninstall BitDefender.
I am very new to shell scripting, and I am trying to write a shell pipeline that submits multiple qsub jobs, but has several commands to run in between these qsubs, which are contingent on the most recent job completing. I have been researching multiple ways to try and hold the shell script from proceeding after submission of a qsub job, but none have been successful.
The simplest chunk of code I can provide to illustrate the issue is as follows:
THREADS=`wc -l < list1.txt`
qsub -V -t 1-$THREADS firstjob.sh
echo "firstjob.sh completed"
There are obviously other lines of code after this that are actually contingent on firstjob.sh finishing, but I have omitted them here for clarity. I have tried the following methods of pausing/holding the script:
1) Only using wait, which is supposed to stop the script until all background programs are completed. This pushed right past the wait and printed the echo statement to the terminal while the array job was still running. My guess is this is occurring because once the qsub job is submitted, qsub itself exits and wait thinks it has completed?
qsub -V -t 1-$THREADS firstjob.sh
wait
echo "firstjob.sh completed"
2) Setting the job to a variable, echoing that variable to submit the job, and using the entire job ID along with wait to pause. The echo command should wait until all elements of the array job have completed. The error message is shown following the code, within the code block.
job1=$(qsub -V -t 1-$THREADS firstjob.sh)
echo "$job1"
wait $job1
echo "firstjob.sh completed"
####ERROR RECEIVED####
-bash: wait: `4585057[].cluster-name.local': not a pid or valid job spec
3) Using -sync y with qsub. This should prevent qsub from exiting until the job is complete, acting as an effective pause... I had hoped. The error I received is shown after the commands. For some reason it is not reading the -sync option correctly?
qsub -V -sync y -t 1-$THREADS firstjob.sh
echo "firstjob.sh completed"
####ERROR RECEIVED####
qsub: script file 'y' cannot be loaded - No such file or directory
4) Using a dummy shell script (the dummy just makes an empty file) so that I could use the -W depend=afterok: option of qsub to pause the script. This again pushes right past to the echo statement without any pause for submitting the dummy script. Both jobs get submitted, one right after the other, no pause.
job1=$(qsub -V -t 1-$THREADS demux.sh)
echo "$job1"
check=$(qsub -V -W depend=afterok:$job1 dummy.sh)
echo "$check"
echo "firstjob.sh completed"
Some further details regarding the script:
Each job submission is an array job.
The pipeline is being run in the terminal using a command resembling the following, so that I may provide it with 3 inputs: source Pipeline.sh -r list1.txt -d /workingDir/ -s list2.txt
I am certain that firstjob.sh has not actually completed running, because I can still see the jobs in the queue when I use showq.
Perhaps there is an easy fix in most of these scenarios, but being new to all this, I am really struggling. I have to use this method in 8-10 places throughout the script, so it is really hindering progress. Would appreciate any assistance. Thanks.
POST EDIT 1
Here is the code contained in firstjob.sh, though I doubt it will help. Everything in here functions as expected and always produces the correct results.
#!/bin/bash
#PBS -S /bin/bash
#PBS -N demux
#PBS -l walltime=72:00:00
#PBS -j oe
#PBS -l nodes=1:ppn=4
#PBS -l mem=15gb
module load biotools
cd ${WORKDIR}/rawFQs/
INFILE=`head -$PBS_ARRAYID ${WORKDIR}${RAWFQ} | tail -1`
BASE=`basename "$INFILE" .fq.gz`
zcat $INFILE | fastx_barcode_splitter.pl --bcfile ${WORKDIR}/rawFQs/DemuxLists/${BASE}_sheet4splitter.txt --prefix ${WORKDIR}/fastqs/ --bol --suffix ".fq"
I just tried using -sync y, and that worked for me, so good idea there... Not sure what's different about your setup.
But a couple other things you could try involve your main script knowing the status of the qsub jobs you're running. One idea is that you could have your main script check the status of your job using qstat and wait until it finishes before proceeding.
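A rough sketch of the qstat idea, assuming a Torque/PBS-style qstat that exits non-zero once the job id is no longer known to the scheduler (if your site keeps completed jobs visible in the queue, you would have to check the job state instead of the exit status); the 30-second interval is arbitrary:
job1=$(qsub -V -t 1-$THREADS firstjob.sh)
# poll until the array job is no longer in the queue
while qstat "$job1" > /dev/null 2>&1; do
    sleep 30
done
echo "firstjob.sh completed"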
Alternatively, you could have the first job write to a file as its last step (or, as you suggested, set up a dummy job that waits for the first job to finish). Then in your main script, you can test to see whether that file has been written before going on.
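For the file-based variant you can reuse the dummy job you already have: make dummy.sh touch a marker file as its only step, then poll for that file in the main script. A sketch, assuming dummy.sh does touch ${WORKDIR}/firstjob.done (the file name is hypothetical) and that the afterok dependency behaves as expected for your array job:
job1=$(qsub -V -t 1-$THREADS firstjob.sh)
# dummy.sh runs only after firstjob.sh succeeds; its sole task is
# to create ${WORKDIR}/firstjob.done
qsub -V -W depend=afterok:$job1 dummy.sh
until [ -f "${WORKDIR}/firstjob.done" ]; do
    sleep 30
done
echo "firstjob.sh completed"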
Say I want to run a C program 1000 times, and this program is basically a test script that tests the functionality of a simple kernel I have written. It outputs a "SUCCESS" every time it fails. Because of various race conditions that are hard to track down, we often have to run the test manually literally a few hundred times before it fails. I have tried searching the net in vain for perl scripts or bash scripts that can help us run this command:
pintos -v -k -T 60 --qemu -j 2 --filesys-size=2 -p tests/vm/page-parallel -a page-parallel -p tests/vm/child-linear -a child-linear --swap-size=4 -- -q -f run page-parallel < /dev/null
and pipe the command's output to something that checks for a keyword, so it can halt or continue depending on whether that keyword appears.
Anyone can point me in the right direction?
In bash you can just run it in a while loop:
while true; do
if "pintos -v -k -T 60 --qemu -j 2 --filesys-size=2 -p tests/vm/page-parallel -a page-parallel -p tests/vm/child-linear -a child-linear --swap-size=4 -- -q -f run page-parallel < /dev/null" | grep -c KEYWORD; then
break
fi
done
I'm not 100% sure about any extra quoting you'd need inside the command, since obviously I can't run your specific command; as written it shouldn't need quotes around the whole thing.
grep -c counts the matches: with 0 matches grep exits non-zero, so the loop runs again; with one or more matches grep exits 0, the if succeeds, and the loop breaks out.
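A slightly expanded variant of the same loop (the run.log file name and the run counter are my additions, not part of the original command) that keeps the output of the run that finally matched, so you can inspect the failure:
i=0
while true; do
    i=$((i + 1))
    pintos -v -k -T 60 --qemu -j 2 --filesys-size=2 \
        -p tests/vm/page-parallel -a page-parallel \
        -p tests/vm/child-linear -a child-linear \
        --swap-size=4 -- -q -f run page-parallel < /dev/null > run.log 2>&1
    if grep -q KEYWORD run.log; then
        echo "KEYWORD appeared on run $i; output saved in run.log"
        break
    fi
done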
I have a basic inotifywait script called watch.sh and a few files ending in .styl in the same directory. The script catches the changes, but doesn't execute the code within the do/done block.
I start it with sh watch.sh, and here's the script:
#!/bin/sh
while inotifywait -m -o ./log.txt -e modify ./*.styl; do
stylus -c %f
done
I tried putting echo "hi" inside the loop body, but nothing executes.
The problem you are having is with the -m option for inotifywait. This causes the command to never exit. Since while checks the exit status of a command, the command must exit in order to continue execution of the loop.
Here is the description of -m from the manpage:
Instead of exiting after receiving a single event, execute
indefinitely. The default behaviour is to exit after the first
event occurs.
Removing the -m option should resolve your issues:
while inotifywait -o ./log.txt -e modify ./*.styl; do
stylus -c %f
done
Try this:
while K=`inotifywait -o ./log.txt --format %f -e modify ./*.styl`;
do
stylus -c $K;
done
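If you would rather keep -m so that a single inotifywait keeps watching, another common pattern (sketched here with the same stylus command; adjust as needed) is to pipe its output into a read loop instead of testing its exit status:
#!/bin/sh
# one long-running watcher; each modified file name arrives on its own line
inotifywait -m -e modify --format '%f' ./*.styl |
while read -r f; do
    stylus -c "$f"
done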
I have a start server script
startserver.sh
It will run as a background task: startserver.sh &
The script needs to run for some time before the server is really in a running state.
When the server is ready, it writes the running state into the log file server.log.
So I need a bash command that tells me when the server is really running; if it is not yet, I need to wait until the Running state is shown in server.log.
Can I achieve this in bash?
Try something like this:
# FIFO connecting the log tail to the pattern matcher
mkfifo my_fifo
tail -f server.log >my_fifo &
tail_pid=$!
# read the log through the FIFO and exit as soon as the pattern appears
perl -ne '/pattern/&&exit' <my_fifo
# the pattern was seen: stop the tail and clean up
kill $tail_pid
rm my_fifo
EDIT: the perl command can be replaced with
grep -l 'pattern' <my_fifo >/dev/null
or, if your grep supports this option,
grep -q 'pattern' <my_fifo
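A simpler alternative is to poll the log directly, assuming the ready line contains the word Running (adjust the pattern to whatever your server actually writes) and that server.log may not exist right away:
startserver.sh &
# wait until the ready marker appears in the log
until grep -q "Running" server.log 2>/dev/null; do
    sleep 1
done
echo "server is running"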