How to read a line from continuously written log file - shell

I am a newbie at shell scripting, so if you find this post is redundant please redirect me to the existing one.
I have a jar command which runs in the background and keeps updating a log file.
I am automating this process in a shell script using the code below. Please help me figure out what I am missing.
java -jar fileName.jar &
pid=$!
echo $!
cd /usr/ebp/logs/
logs=$(ls -t | head -n 1) # trying to read latest log file from directory
tail -f ${logs} | while read LOGLINE
do
    if [ "${LOGLINE}" == *"Batch process completed successfully"* ]
    then
        echo ${LOGLINE}
        echo "csv files created successfully"
        break
    fi
done
kill -9 ${pid}
Once the batch process has completed I want to kill the jar process. But I am not able to read the log file and the script gets stuck in the while loop.
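For reference, a minimal sketch of one way this is often handled (assuming bash, and assuming the newest file in the directory really is the jar's log): the *...* pattern test only works inside [[ ]], not [ ], and the tail process has to be killed explicitly or the pipeline never finishes.
java -jar fileName.jar &
pid=$!
cd /usr/ebp/logs/ || exit 1
logfile=$(ls -t | head -n 1)    # newest file in the directory
tail -f "$logfile" | while read -r LOGLINE
do
    if [[ "$LOGLINE" == *"Batch process completed successfully"* ]]; then
        echo "$LOGLINE"
        echo "csv files created successfully"
        pkill -P $$ tail    # stop tail; the loop then ends on end-of-input
    fi
done
kill "$pid"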

Related

How to grep the output of a command inside a shell script when scheduling using cron

I have a simple shell script where I need to check whether my EMR job is running, and I just print a log line. It does not work properly when the script is scheduled with cron: it always prints the if-block statement because the value of the "status_live" variable is always empty. When I run the script manually it works properly, so can anyone suggest what is wrong here?
#!/bin/sh
status_live=$(yarn application -list | grep -i "Streaming App")
if [ -z $status_live ]
then
    echo "Running spark streaming job again at: "$(date) &
else
    echo "Spark Streaming job is running, at: "$(date)
fi
Your script cannot run from cron because a cron job has no environment context at all.
For example, try to run your script as another user, nobody, that has no login shell:
sudo -u nobody <script-full-path>
It will fail because it has no environment context.
The solution is to add your user's environment context to your script. Just source your .bash_profile:
sed -i "2a source $HOME/.bash_profile" <script-full-path>
Your script should look like:
#!/bin/sh
source /home/<your user name>/.bash_profile
status_live=$(yarn application -list | grep -i "Streaming App")
if [ -z $status_live ]
then
    echo "Running spark streaming job again at: "$(date) &
else
    echo "Spark Streaming job is running, at: "$(date)
fi
Now try to run it again as user nobody; if it works then cron will work as well.
sudo -u nobody <script-full-path>
Note that cron has no standard output, so you will need to redirect the standard output of your script to a log file:
<script-full-path> >> <logfile-full-path>
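For example, a crontab entry along these lines (the paths are only placeholders) redirects both stdout and stderr to a log file:
*/5 * * * * /home/youruser/check_streaming.sh >> /home/youruser/check_streaming.log 2>&1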
# $? holds the exit status of the last command in bash shell scripting.
# Put your complete command below; status_live will be 0 if grep finds a match
# (i.e. true in a shell if condition).
yarn application -list | grep -i "Streaming App"
status_live=$?
echo status_live: ${status_live}
if [ "$status_live" -eq 0 ]; then
    echo "success"
else
    echo "fail"
fi
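A small variation on the same idea, if you don't need the matching line printed: grep -q sets only the exit status without producing output, so the check can go straight into the if. This is just a sketch of the approach above, not a different method:
if yarn application -list 2>/dev/null | grep -qi "Streaming App"; then
    echo "Spark Streaming job is running, at: $(date)"
else
    echo "Running spark streaming job again at: $(date)"
fi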

Condition to check if a backup ran successfully and start another one

I'm trying to add a condition to a crontab backup script so that it doesn't start another backup if the last one has not yet completed or if the script is still running, in case a backup runs slower than usual or something like that. For this I created something like the following, but Linux first creates the process and then executes the script's commands, so it always exits with "process is running":
ps auxw | grep backup.sh | grep -v grep > /dev/null
if [ $? = 0 ]; then
    echo "process is running"
    exit 1
else
    ./backup.sh
fi
If the code snippet comes from the backup.sh file itself, you can put the verification above into a separate file; then grep will not match "itself".
Another way is to use an additional in-use file: create the in-use file and, in case the file already exists, exit 1. Just make sure the in-use file is removed after the script finishes.
#!/usr/bin/env bash
set -o errexit
trap cleanup ERR INT QUIT

cleanup()
{
    rm -f "$INUSE"
}

INUSE=/home/abc/inuse/backup.inuse
if [ -f "$INUSE" ]; then
    echo "process is running"
    exit 1
else
    touch "$INUSE"
fi
# backup starts in here
# end of backup
cleanup
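As an aside, if flock(1) from util-linux is available, the kernel releases the lock automatically when the process dies, so a leftover lock file after a power loss is harmless. A minimal sketch (the lock path is just an example):
#!/usr/bin/env bash
exec 200>/tmp/backup.lock          # open (or create) the lock file on descriptor 200
if ! flock -n 200; then            # try to take an exclusive lock without waiting
    echo "process is running"
    exit 1
fi
# backup starts in here; the lock is dropped automatically when the script exits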

bash script to accept log on stdin and email log if inputting process fails

I'm a sysadmin and I frequently have a situation where I have a script or command that generates a lot of output which I would only like to have emailed to me if the command fails. It's pretty easy to write a script that runs the command, collects the output and emails it if the command fails, but I was thinking I should be able to write a command that
1) accepts log info on stdin
2) waits for the inputting process to exit and sees what its exit status was
3a) if the inputting process exited cleanly, append the logging input to a normal log file
3b) if the inputting process failed, append the logging input to the normal log and also send me an email.
It would look something like this on the command line:
something_important | mailonfail.sh me@example.com /var/log/normal_log
That would make it really easy to use in crontabs.
I'm having trouble figuring out how to make my script wait for the writing process and evaluate how that process exits.
Just to be extra clear, here's how I can do it with a wrapper:
#! /bin/bash
something_important > output
ERR=$?
if [ "$ERR" -ne "0" ] ; then
    cat output | mail -s "something_important failed" me@example.com
fi
cat output >> /var/log/normal_log
Again, that's not what I want; I want to write a script and pipe commands into it.
Does that make sense? How would I do that? Am I missing something?
Thanks Everyone!
-Dylan
Yes, it does make sense, and you are close.
Here is some advice:
#!/bin/sh
TEMPFILE=$(mktemp)
trap "rm -f $TEMPFILE" EXIT
if ! something_important > $TEMPFILE; then
    mail -s 'something goes oops' -a $TEMPFILE you@example.net
fi
cat $TEMPFILE >> /var/log/normal.log
I won't use bashisms, so /bin/sh is fine.
Create a temporary file to avoid conflicts, using mktemp(1).
Use trap to remove the file when the script exits, normally or not.
If the command fails, then attach the file, which may or may not be preferable to embedding it.
If it's a big file you could even gzip it, but the attachment method will change:
# using mailx
gzip -c9 $TEMPFILE | uuencode fail.log.gz | mailx -s subject ...
# using mutt
gzip $TEMPFILE
mutt -a $TEMPFILE.gz -s ...
gzip -d $TEMPFILE.gz
etc.
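If the goal really is to pipe into a script, note that a downstream stage of a pipeline cannot see the exit status of the upstream command; the calling shell can, though, through bash's PIPESTATUS array (not plain sh). A rough caller-side sketch:
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
something_important 2>&1 | tee "$tmp" >> /var/log/normal.log
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
    mail -s "something_important failed" me@example.com < "$tmp"
fi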

Creating lock files shell

I'm currently creating a lock folder which is created when my script runs; I also move files into sub-folders there for processing. When the script ends, a trap is called which removes the lock folder and its contents, and all of that works fine. We had an issue the other day when someone pulled the power from one of the servers, so my trap was never called, and after the reboot the lock folder was still there, which meant my scripts couldn't restart until it was removed manually. What's the best way of checking if the script is already running? I currently have this approach using process ids:
if ! mkdir $LOCK_DIR 2>/dev/null; then # Try to create the lock dir. This should pass successfully first run.
    # If the lock dir exists
    pid=$(cat $LOCK_DIR/pid.txt)
    if [[ $(ps -ef | awk '{print $2}' | grep $pid | grep -v grep | wc -l) == 1 ]]; then
        echo "Script is already running"
        exit 1
    else
        echo "It looks like the previous script was killed. Restarting process."
        # Do some cleanup here before removing dir and re-starting process.
    fi
fi
# Create a file in the lock dir containing the pid. Echo the current process id into the file.
touch $LOCK_DIR/pid.txt
echo $$ > $LOCK_DIR/pid.txt
# Rest of script below
Checking /proc/ and cmdline is a good call - especially as, at the moment, you are simply checking whether a process with that process id exists, not whether the process is actually your script.
You could still do this with your ps command - which would offer some form of platform agnosticism.
COMMAND=$(ps -o comm= -p $pid)
if [[ $COMMAND == my_process ]]
then
    .....
Note that the command line arguments to ps limit the output to the command name only, with no header.
Many systems nowadays use tmpfs for directories like /tmp. These directories will therefore always be cleared after a reboot.
If you keep your pid file, note that you can easily see the command running under that pid in /proc/$pid/cmdline and /proc/$pid/exe.
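Putting the comm check together with the lock directory from the question might look roughly like this (my_script.sh and the lock path are placeholders; depending on how the script is started, comm may report the interpreter, e.g. bash, rather than the script name):
LOCK_DIR=/tmp/my_script.lock
if ! mkdir "$LOCK_DIR" 2>/dev/null; then
    pid=$(cat "$LOCK_DIR/pid.txt" 2>/dev/null)
    # ps -o comm= prints only the command name for that pid, with no header
    if [ -n "$pid" ] && [ "$(ps -o comm= -p "$pid")" = "my_script.sh" ]; then
        echo "Script is already running"
        exit 1
    fi
    echo "Stale lock found. Cleaning up and restarting."
    rm -rf "$LOCK_DIR"
    mkdir "$LOCK_DIR"
fi
echo $$ > "$LOCK_DIR/pid.txt"
# rest of script below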

background shell script terminating automatically

I've created a background shell script to watch a folder (with inotifywait) and execute a process (a php script that sends information to several other servers and updates a database, but I don't think that's relevant) when a new file is created in it.
My problem is that after some time the script is actually terminated, and I don't understand why (I redirected the output to a file so as not to fill up the buffer, even for the php execution).
I'm using Ubuntu 12.04 server and latest version of php.
Here is my script:
#!/bin/sh
#get the script directory
SCRIPT=$(readlink -f "$0")
script_path=$(dirname "$SCRIPT")
for f in `ls "$script_path"/data/`
do
    php myscript.php "$script_path"/data/$f &
done
#watch the directory for file creation
inotifywait -q -m --format %w%f -e create "$script_path"/data/ | while read -r line; do
    php myscript.php "$line" &
done
You should take a look at nohup and screen; this is exactly what you are looking for.
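For instance, nohup detaches the watcher from the terminal so it keeps running after you log out (the paths here are only placeholders):
nohup /path/to/watch_folder.sh >> /var/log/watch_folder.log 2>&1 &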
Ok, after hours and hours I finally found a solution. It might (must) be a bit dirty but it works! As I said in a previous comment, I used the trap command; here is my final script:
#!/bin/sh
#get the script directory
SCRIPT=$(readlink -f "$0")
script_path=$(dirname "$SCRIPT")
#trap SIGHUP SIGINT SIGTERM and relaunch the script
trap "pkill -9 inotifywait;($SCRIPT &);exit" 1 2 15
for f in `ls "$script_path"/data/`
do
    php myscript.php "$script_path"/data/$f &
done
#watch the directory for file creation
inotifywait -q -m --format %w%f -e create "$script_path"/data/ | while read -r line; do
    php myscript.php "$line" &
done
Hope it will help shell beginners like me :)
Edit: added "pkill -9 inotifywait" to make sure inotify processes won't stack up, the parentheses to make sure the new process is not a child of the current one, and exit to make sure the current process stops running.
