Shell Script - How to compare the content of a directory from time to time - shell

I have a script that shows me the content of a directory every 2 seconds.
Now I need to change it so that I can do something (e.g. echo that it has changed) if the content of the directory has changed.
My script is as follows:
#!/bin/bash
MON_DIR="/home/lab"
if [ -d "$MON_DIR" ] ; then
echo "Directory exists."
while true
do
echo "Content of directory:"
ls "$MON_DIR"
sleep 2
done
else
echo "Directory does not exists." > /dev/stderr
exit $? > /dev/stderr
fi

Your task sounds like a job for watch: it can run a command periodically and show its output. Using its -g (--chgexit) option (exit when the output changes), you could achieve what you want. I am thinking along the lines of (untested):
#!/bin/bash
MON_DIR="/home/lab"
if [ -d "$MON_DIR" ] ; then
echo "Directory exists."
while true
do
watch -n 2 -g "ls ${MON_DIR}" > /dev/null
echo "Content has changed."
done
else
echo "Directory does not exists." > /dev/stderr
exit $? > /dev/stderr
fi
I am suppressing the output of watch here to ensure you will be able to see the message. You might also want to replace the infinite loop (while true) with something that can be aborted more cleanly: Ctrl+C will abort watch and the loop will simply restart it, so you would have to hit Ctrl+C twice in a short interval.
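For example, a hypothetical variant (untested sketch) that stops on the first Ctrl+C by trapping SIGINT:
#!/bin/bash
MON_DIR="/home/lab"
# the first Ctrl+C runs this trap and ends the whole script instead of just restarting watch
trap 'echo "Monitoring stopped."; exit 0' INT
while true
do
watch -n 2 -g "ls ${MON_DIR}" > /dev/null
echo "Content has changed."
done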

Related

How to display long script logs as a one-liner?

Let's say I have multiple scripts which need to be invoked sequentially for a job.
These scripts produce long and lengthy output in my bash script.
How can I avoid that output but still be able to tell that the process is running?
Here is an example:
#!/bin/bash
echo "Script to prepare Final BUILD"
rm -vf module1.out
module1_build_script.sh #FIXME: This script outputs 10000 lines
#module1_build_script.sh &> /dev/null #Not interested, as this makes it hard to tell whether the process hangs or is still running.
if [ ! -f ./out/module1.out ];then
echo "Module 1 build failed"
exit 1
fi
.
.
.
rm -vf module1.out
module4_build_script.sh # This script outputs 5000 lines
if [ ! -f ./out/module4.out ];then
echo "Module 4 build failed"
exit 4
fi
Now I am expecting some code that gives me an effect like the output below, as one line without scrolling.
example: module1_build_script.sh | "magical code here" #FIXME:
Like the output of the script below:
user#bash#./myscript
#-------content of myscript ---------------
#!/bin/bash
i=0
while (( i < 10 ))
do
echo -en "\r Process is running...$i"
sleep 0.5
((i++))
done
#------------------------------------------
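One way to get that effect could be a small read loop that keeps rewriting a single status line (a hypothetical, untested sketch; module1_build_script.sh is just the script name from the example above):
module1_build_script.sh 2>&1 | while IFS= read -r line
do
printf '\r%-79.79s' "Running: $line" # overwrite the same terminal line, truncated to 79 characters
done
printf '\r%-79.79s\n' "Module 1 build finished."
The success check would still rely on the ./out/module1.out file, since this pipeline hides the build script's exit status.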

How to output error in custom color in Vagrant provision script?

I want my Vagrant provision script to run some checks that will require user action if they're not satisfied. As easy as:
if [ ! -f /some/required/file ]; then
echo "[Error] Please do required stuff before provisioning"
exit
fi
But since this is not a real error, the echo gets printed in green. I'd like my output to be red (or at least a different color) to alert the user.
I tried:
echo "\033[31m[Error] Blah blah blah"
which works locally, but in the Vagrant output the color code gets escaped and the message is echoed in green instead.
Is that possible?
This is happening because Vagrant colors provisioner output by stream: whatever the script writes to stdout is printed in green, and whatever it writes to stderr is treated as an error and printed in red.
Not all terminals support ANSI colour codes and Vagrant doesn't take care of that. That said, I wouldn't suggest colorizing a message by sending it to stderr unless it really is an error.
To achieve that you can simply:
echo "Your error message" > /dev/stderr
You need to use keep_color true; then it works as intended:
$echoes = <<-ECHOES
echo "\e[32mPROVISIONING DONE\e[0m"
ECHOES
config.vm.provision "shell", keep_color: true, inline: $echoes
From https://www.vagrantup.com/docs/provisioning/shell.html
keep_color (boolean) - Vagrant automatically colors output in green and red depending on whether the output is from stdout or stderr. If this is true, Vagrant will not do this, allowing the native colors from the script to be outputted.
Vagrant commands run with the --no-color option by default. You could try turning color on with --color. The environment variables for Vagrant are documented here.
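For example (assuming your Vagrant version accepts the global --color flag):
vagrant provision --color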
Here is a bash script, test.sh, which demonstrates how to write to stderr or stdout conditionally. This form is good for a command like [ / test or touch that normally produces no stdout or stderr itself; it checks the exit status code of the command, which is stored in $?.
test -f "$1"
if [ $? -eq 0 ]; then
echo "File exists: $1"
else
echo "File not found: $1"
fi
You can alternatively hard-code the file path, as your question shows:
file="/some/required/file"
test -f "$file"
if [ $? -eq 0 ]; then
echo "File exists: $file"
else
echo "File not found: $file"
fi
If the command does produce output, but it is being sent to stderr rather than stdout and so ends up red in the Vagrant output, you can use the following forms to redirect the output to where you would expect it to be. This is good for commands like update-grub or wget.
wget
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
update-grub
out=$(update-grub 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
One Liners
wget
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1) && echo "$out" || echo "$out" > /dev/stderr
update-grub
out=$(update-grub 2>&1) && echo "$out" || echo "$out" > /dev/stderr

How to proceed in the script once a file exists?

How can I proceed in the script once a file exists?
#!/bin/bash
echo "Start"
# wait here until the file exists
echo "file already exists, continuing"
Do a while loop with a sleep X, so that it checks for the existence of the file every X seconds.
When the file exists, the while loop will finish and you will continue on to the echo "file already exists, continuing".
#!/bin/bash
echo "Start"
### wait until the file exists
while [ ! -f "/your/file" ]; # true if /your/file does not exist
do
sleep 1
done
echo "file already exists, continuing"
And what if, instead of checking for the file's existence, I check whether the script
running in the background has already completed?
Based on the code you posted, I did some changes to make it work completely:
#!/bin/bash
(
sleep 5
) &
PID=$!
echo "the pid is $PID"
while [ ! -z "$(ps -ef | awk -v p=$PID '$2==p')" ]
do
echo "still running"
sleep 1
done
echo "done"
There are OS-specific ways to perform blocking waits on the file system. Linux uses inotify (I forget the BSD equivalent). After installing inotify-tools, you can write code similar to
#!/bin/bash
echo "Start"
inotifywait -e create $FILE & wait_pid=$!
if [[ -f $FILE ]]; then
kill $wait_pid
else
wait $wait_pid
fi
echo "file exists, continuing"
The call to inotifywait does not exit until it receives notification from the operating system that $FILE has been created.
The reason for not simply calling inotifywait and letting it block is that there is a race condition: the file might not exist when you test for it, but it could be created before you can start watching for the creation event. To fix that, we start a background process that waits for the file to be created, then check if it exists. If it does, we can kill inotifywait and proceed. If it does not, inotifywait is already watching for it, so we are guaranteed to see it be created, so we simply wait on the process to complete.
To fedorqui: Is this approach good? Is there any problem with it?
#!/bin/bash
(
..
my code
..
) &
PID=$BASHPID or PID=$$
while [ ! ps -ef | grep $PID ]
do
sleep 0
done
Thank you

Bash command substitution stdout+stderr redirect

Good day. I have a series of commands that I wanted to execute via a function so that I could get the exit code and perform console output accordingly. With that being said, I have two issues here:
1) I can't seem to direct stderr to /dev/null.
2) The first echo line is not displayed until the $1 is executed. It's not really noticeable until I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
exec $1
if [ "$?" -ne 0 ]; then
echo -ne " ...Failure\n\r"
else
echo -ne " ...Success\n\r"
fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
This is how you redirect stderr to /dev/null:
command 2> /dev/null
e.g.
ls -l 2> /dev/null
Your second issue (the ordering of the echo) may be caused by the command substitution you use when invoking the script: $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)
The first echo line is displayed later because it actually runs second: the command substitution $(...) executes launchctl before the function is even called. Try the following:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
err=$($1 2>&1)
if [ -z "$err" ]; then
echo -ne " ...Success\n\r"
else
echo -ne " ...Failured\n\r"
exit 1
fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: I noticed that launchctl does not actually set $? on failure, so the function captures STDERR to detect an error instead.

How to check in a bash script if something is running and exit if it is

I have a script that runs every 15 minutes, but sometimes, if the box is busy, it hangs, and the next run starts before the first one has finished, creating a snowball effect. How can I add a couple of lines to the bash script to check whether something is already running before starting?
You can use pidof -x if you know the process name, or kill -0 if you know the PID.
Example:
if pidof -x vim > /dev/null
then
echo "Vim already running"
exit 1
fi
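For the kill -0 variant, a minimal sketch (assuming the PID you want to test is stored in a hypothetical $pid variable):
if kill -0 "$pid" 2> /dev/null
then
echo "Process $pid is still running"
exit 1
fi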
Why not set a lock file?
Something like
yourapp.lock
Just remove it when your process is finished, and check for it before launching it.
It could be done using
if [ -f yourapp.lock ]; then
echo "The process is already launched, please wait..."
exit 1
fi
In lieu of pidfiles, as long as your script has a uniquely identifiable name you can do something like this:
#!/bin/bash
COMMAND=$(basename "$0")
# exit if I am already running
RUNNING=$(ps --no-headers -C "${COMMAND}" | wc -l)
if [ "${RUNNING}" -gt 1 ]; then
echo "Previous ${COMMAND} is still running."
exit 1
fi
... rest of script ...
pgrep -f yourscript >/dev/null && exit
This is how I do it in one of my cron jobs
lockfile=~/myproc.lock
minutes=60
if [ -f "$lockfile" ]
then
filestr=`find $lockfile -mmin +$minutes -print`
if [ "$filestr" = "" ]; then
echo "Lockfile is not older than $minutes minutes! Another $0 running. Exiting ..."
exit 1
else
echo "Lockfile is older than $minutes minutes, ignoring it!"
rm $lockfile
fi
fi
echo "Creating lockfile $lockfile"
touch $lockfile
and delete the lock file at the end of the script
echo "Removing lock $lockfile ..."
rm $lockfile
For a method that does not suffer from parsing bugs and race conditions, check out:
BashFAQ/045 - How can I ensure that only one instance of a script is running at a time (mutual exclusion)?
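One of the approaches discussed there is flock(1); a minimal sketch, assuming Linux and a hypothetical lock-file path:
#!/bin/bash
# open the lock file on file descriptor 9 and try to take an exclusive, non-blocking lock
exec 9>/tmp/myscript.lock
if ! flock -n 9; then
echo "Another instance is already running, exiting." >&2
exit 1
fi
# ... rest of script; the lock is released automatically when the script exits ...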
I recently had the same question and found from the answers above that kill -0 is best for my case:
echo "Starting process..."
run-process > $OUTPUT &
pid=$!
echo "Process started pid=$pid"
while true; do
kill -0 $pid 2> /dev/null || { echo "Process exit detected"; break; }
sleep 1
done
echo "Done."
To expand on what @bgy says, the safe, atomic way to create a lock file if it doesn't exist yet, and fail if it does, is to create a temp file and then hard-link it to the standard lock file. This protects against another process creating the file between you testing for it and you creating it.
Here is the lock file code from my hourly backup script:
echo $$ > /tmp/lock.$$
if ! ln /tmp/lock.$$ /tmp/lock ; then
echo "previous backup in process"
rm /tmp/lock.$$
exit
fi
Don't forget to delete both the lock file and the temp file when you're done, even if you exit early through an error.
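For example, once the ln above has succeeded, a trap (a sketch reusing the /tmp paths from this snippet) takes care of that cleanup even on an early exit:
# set only after the lock has been acquired, so we never delete another process's lock
trap 'rm -f /tmp/lock /tmp/lock.$$' EXIT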
Use this script:
FILE="/tmp/my_file"
if [ -f "$FILE" ]; then
echo "Still running"
exit
fi
trap "rm -f $FILE" EXIT
touch $FILE
...script here...
This script will create a file and remove it on exit.
