Ignore errors while executing a shell script from another script - shell

I have a shell script,
sample.sh
cd /home/user/loc1
rm -rf `ct ls -l | grep 'view private object' | awk '{print $4}'`
cd /home/user/anotherloc
rm -rf `ct ls -l | grep 'view private object' | awk '{print $4}'`
cd /home/user/location3
rm -rf `ct ls -l | grep 'view private object' | awk '{print $4}'`
I'm executing the script from another script file.
build.sh
#!/bin/csh
source /home/user/scripts/sample.sh || true
#Some other commands
Now I'm executing build.sh. The problem is that sometimes the directories don't exist (e.g. /home/user/anotherloc), so the script stops executing with "No such file or directory".
I tried || true to skip the error and continue executing, but it's not working. Is there any way to skip those errors?
(I don't want to change the first script.)

Redirect both stdout and stderr to /dev/null so the error messages are discarded:
source /home/user/scripts/sample.sh > /dev/null 2>&1
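Note that the redirection only hides the messages. Since sample.sh only runs cd and rm commands and doesn't need to modify build.sh's environment, another option (a sketch, not from the answer above) is to run it with sh instead of source, so a failed cd inside it can't stop the outer script; the toy script below stands in for sample.sh:

```shell
# Toy stand-in for sample.sh (assumption: the real script only does cd + rm cleanup)
cat > /tmp/sample.sh <<'EOF'
cd /nonexistent-dir-1
echo "still running after a failed cd"
EOF

# Run in a child shell; a non-interactive sh keeps going after a failed cd,
# and || true keeps the outer script going even if sample.sh exits nonzero.
sh /tmp/sample.sh || true
echo "build continues"
```

The failed cd still prints its error to stderr, but both scripts keep running.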

Related

Dockerfile RUN echo command runs with /bin/sh instead of /bin/sh -c

This Dockerfile takes a Spark download URL, extracts the file name, and installs it. However, the script fails at the first echo command. It looks like the echo is being run as /bin/sh instead of /bin/sh -c.
How can I execute this echo command using /bin/sh -c? And is this the correct way to implement it? I'm planning on using the same logic for other installations such as Mongo, Node, etc.
FROM ubuntu:18.04
ARG SPARK_FILE_LOCATION="http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz"
CHAR_COUNT=`echo "${SPARK_FILE_LOCATION}" | awk -F"${DELIMITER}" '{print NF-1}'`
RUN echo $CHAR_COUNT
RUN CHAR_COUNT=`expr $CHAR_COUNT + 1`
RUN SPARK_FILE_NAME=`echo ${SPARK_FILE_LOCATION} | cut -f${CHAR_COUNT} -d"/"`
RUN Dir_name=`tar -tzf $SPARK_FILE_NAME | head -1 | cut -f1 -d"/"`
RUN echo Dir_name
/bin/sh: 1: 'echo http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz | awk -F/ "{print NF-1}"': not found
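The error message suggests the whole quoted pipeline is being handed to /bin/sh as a single command name, and note that the CHAR_COUNT= line isn't a valid Dockerfile instruction at all; each RUN also starts a fresh shell, so a variable set on one line is gone by the next. As a sketch of the same extraction in plain shell (assuming only the archive file name is needed), basename avoids the field counting entirely and fits in a single RUN:

```shell
# Everything must happen in one shell (one RUN), since RUN lines don't share variables.
SPARK_FILE_LOCATION="http://www.us.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz"

# basename strips everything up to the last "/" - no awk/cut field counting needed
SPARK_FILE_NAME=$(basename "$SPARK_FILE_LOCATION")
echo "$SPARK_FILE_NAME"    # → spark-2.4.4-bin-hadoop2.7.tgz
```

In the Dockerfile this would be a single `RUN SPARK_FILE_NAME=$(basename "$SPARK_FILE_LOCATION") && ...` chain.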

Bash- Running a command on each grep correspondence without stopping tail -n0 -f

I'm currently monitoring a log file, and my ultimate goal is to write a script that uses tail -n0 -f and executes a certain command whenever grep finds a match. My current code:
tail -n 0 -f $logfile | grep -q $pattern && echo $warning > $anotherlogfile
This works, but only once, since grep -q stops when it finds a match. The script must keep searching and running the command, so I can update a status log and run another script to automatically fix the problem. Can you give me a hint?
Thanks
Use a while loop:
tail -n 0 -f "$logfile" | while read LINE; do
echo "$LINE" | grep -q "$pattern" && echo "$warning" > "$anotherlogfile"
done
awk will let us continue to process lines and take actions when a pattern is found. Something like:
tail -n0 -f "$logfile" | awk -v pattern="$pattern" '$0 ~ pattern {print "WARN" >> "anotherLogFile"}'
If you need to pass in the warning message and the path to anotherLogFile, you can use more -v flags to awk. Alternatively, awk can take the action itself: it can run shell commands via the system() function, where you pass it the shell command to run.
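As a further sketch (not from the answers above), GNU grep's --line-buffered flag keeps the pipeline streaming, so matches are acted on as they arrive rather than after the pipe buffer fills; here a printf stands in for `tail -n0 -f "$logfile"`, and the pattern and warning text are placeholders:

```shell
# Toy input in place of: tail -n 0 -f "$logfile"
printf 'ok\nERROR disk full\nok\n' |
  grep --line-buffered 'ERROR' |       # emit each match immediately
  while IFS= read -r line; do
    echo "WARN: $line"                 # → WARN: ERROR disk full
  done
```

With a real tail -f feed, the loop body is where the status-log update or fix script would go.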

Add a flag to bash script

I have the following bash script:
if
ps aux | grep -E "[i]tunes_exporter.py" > /dev/null
then
echo "Script is already running. Skipping"
else
"$DIR/itunes_exporter.py"
fi
I want to add an -f flag to the itunes_exporter.py command. For example:
"$DIR/itunes_exporter.py -f"
But then I get the following error:
-f: No such file or directory
How would I properly add the -f flag?
You should write it as "$DIR/itunes_exporter.py" -f — the flag has to be outside the quotes so it is passed as a separate argument instead of becoming part of the file name.
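Quoting "$DIR/itunes_exporter.py -f" makes the shell look for a single file whose name literally ends in " -f", which is why it reports "No such file or directory". A toy script (hypothetical path, stands in for itunes_exporter.py) shows the difference:

```shell
DIR=/tmp/flag-demo                 # hypothetical location for the demo
mkdir -p "$DIR"

# A stand-in script that just prints the arguments it received
printf '#!/bin/sh\necho "args: $*"\n' > "$DIR/itunes_exporter.py"
chmod +x "$DIR/itunes_exporter.py"

"$DIR/itunes_exporter.py" -f       # → args: -f   (-f arrives as an argument)
# "$DIR/itunes_exporter.py -f"     # would fail: no file named "itunes_exporter.py -f"
```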

run hadoop command in bash script

I need to run a hadoop command in a bash script which goes through a bunch of folders on Amazon S3, writes those folder names into a txt file, and then does further processing. The problem is that when I run the script, it seems no folder names are written to the txt file. I wonder if the hadoop command takes too long to run and the bash script doesn't wait until it has finished before going ahead with the further processing. If so, how can I make bash wait until the hadoop command has finished?
Here is my code. I tried both ways; neither works:
1.
listCmd="hadoop fs -ls s3n://$AWS_ACCESS_KEY:$AWS_SECRET_KEY#$S3_BUCKET/*/*/$mydate | grep s3n | awk -F' ' '{print $6}' | cut -f 4- -d / > $FILE_NAME"
echo -e "listing... $listCmd\n"
eval $listCmd
...other process ...
2.
echo -e "list the folders we want to copy into a file"
hadoop fs -ls s3n://$AWS_ACCESS_KEY:$AWS_SECRET_KEY#$S3_BUCKET/*/*/$mydate | grep s3n | awk -F' ' '{print $6}' | cut -f 4- -d / > $FILE_NAME
... other process ....
Does anyone know what might be wrong? And is it better to use eval or just run the hadoop command directly, the second way?
Thanks.
I would prefer eval in this case; it's prettier to append the next command to this one. I would also break listCmd down into parts, so that you know there is nothing wrong at the grep, awk, or cut level.
listCmd="hadoop fs -ls s3n://$AWS_ACCESS_KEY:$AWS_SECRET_KEY#$S3_BUCKET/*/*/$mydate > $raw_File"
gcmd="cat $raw_File | grep s3n | awk -F' ' '{print $6}' | cut -f 4- -d / > $FILE_NAME"
echo "Running $listCmd and other commands after that"
otherCmd="cat $FILE_NAME"
eval "$listCmd";
echo $? # This will print the exit status of the $listCmd
eval "$gcmd" && echo "Finished Listing" && eval "$otherCmd"
otherCmd will only be executed if $gcmd succeeds. If you have too many commands that you need to execute, then this becomes a bit ugly. If you roughly know how long it will take, you can insert a sleep command.
eval "$listCmd"
sleep 1800 # This will sleep 1800 seconds
eval "$otherCmd"
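To check that the grep/awk/cut stage itself is right, it can be run against a captured listing; the sketch below uses a fake line in place of the real `hadoop fs -ls` output (the field positions are an assumption about the listing format):

```shell
RAW_FILE=$(mktemp)
FILE_NAME=$(mktemp)

# Fake listing line standing in for `hadoop fs -ls s3n://...` output:
# permissions, replication, owner, size, date, then the s3n path as field 6
printf 'drwx - user 0 2014-01-01 s3n://bucket/logs/app/20140101\n' > "$RAW_FILE"

# Same pipeline as in the question, run against the captured file
grep s3n "$RAW_FILE" | awk '{print $6}' | cut -f 4- -d/ > "$FILE_NAME"

cat "$FILE_NAME"    # → logs/app/20140101  (the bucket-relative folder path)
```

If the expected paths land in $FILE_NAME here but not in the real run, the problem is in the hadoop listing itself (or in how the credentials and wildcards expand), not in the text processing.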

Updating crontab from a makefile

I'm trying to update the crontab from a GNU Make file.
The idea is like this: I look through the existing cron table and filter out all entries marked as mine (via the comment) and save that to a temporary file. Then I add my jobs to that temporary file and make it a new cron table. That way the make file can be run several times without harming other people's jobs.
This is the relevant part of the make file:
crontab.tmp: $(CRON_FILES)
	@echo -n "Generating new cron table combining existing one with a new one ..."
	if $$(crontab -l); then crontab -l | grep -v "MAX-CRON-JOB"; fi > $@
	@cat $(CRON_FILES) | awk '{print $$0, " ## MAX-CRON-JOB"}' >> $@
	@echo "OK"
.PHONY: cronjobs
cronjobs: crontab.tmp
	@echo -n "Installing cron commands... "
	@crontab $<
	@echo "OK"
The troubling part is this line:
if $$(crontab -l); then crontab -l | grep -v "MAX-CRON-JOB"; fi > $@
When the cron table is empty it somehow breaks the make, while the respective generated bash command:
if $(crontab -l); then crontab -l | grep -v "MAX-CRON-JOB"; fi > crontab.tmp
Works OK from the command line.
Here is an error from the make (nothing particularly informative, if you ask me...):
Generating new cron table combining existing one with a new one ...if $(crontab -l); then crontab -l | grep -v "MAX-CRON-JOB"; fi > crontab.tmp
make: *** [crontab.tmp] Error 1
What am I missing here?
Why are you 'escaping' crontab? As far as I can tell,
if crontab -l; then crontab -l | grep -v "MAX-CRON-JOB"; fi > $@
should work fine.
Why use a conditional at all? If the cron table is empty, so be it:
crontab.tmp: $(CRON_FILES)
	@echo -n "Generating new cron table..."
	@crontab -l | grep -v "MAX-CRON-JOB" > $@
	@cat $(CRON_FILES) | awk '{print $$0, " ## MAX-CRON-JOB"}' >> $@
	@echo "OK"
Try changing this line to include a test (square brackets):
if [ $$(crontab -l) ]; then crontab -l | grep -v "MAX-CRON-JOB"; fi > $@
because, at least for me, this doesn't work at a bash prompt without it:
if $(crontab -l); then crontab -l | grep -v "MAX-CRON-JOB"; fi > $@
yields:
-bash: 0: command not found
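The "0: command not found" happens because `$(crontab -l)` substitutes the table's text back onto the command line, so the shell tries to execute the first token of the first cron entry (a schedule number) as a command. A small sketch with a stand-in function for `crontab -l` (the one-entry table is an assumption):

```shell
# Stand-in for `crontab -l` with a single marked entry
list_jobs() { printf '0 * * * * /usr/bin/backup ## MAX-CRON-JOB\n'; }

# `if $(list_jobs); then ...` would try to run "0" as a command -
# exactly the "-bash: 0: command not found" error quoted above.
# Testing for non-empty output instead, and tolerating grep's exit
# status of 1 when every line gets filtered out:
if [ -n "$(list_jobs)" ]; then
  list_jobs | grep -v "MAX-CRON-JOB" || true
fi
```

The `|| true` matters in make as well: grep -v exits nonzero when it outputs nothing, which by itself is enough to fail the recipe.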
