Deleting Inactive Files More Than 5 Days Old - bash

My aim is to delete files more than 5 days old which are no longer used by any process.
As a starter I have written the following script, but it does not work; it says "line 10: command not found".
HOME=~/var
cd $HOME
for f in `find . -type f`; do
    if [`lsof -n $f`]; then
        echo $f
    fi
done

Hmm, yes, it's not being done properly. Try this:
#!/bin/bash
DIR=$HOME/var
##########################################################
## files older than 5 days and recursion depth set to 1
# for f in $(find "$DIR" -maxdepth 1 -mtime +5 -type f); do
##########################################################
for f in $(find "$DIR" -type f); do
    # run lsof and look for pattern a-z, send it to /dev/null
    lsof -n "$f" | grep '[a-z]' > /dev/null
    # if the pattern was found the exit status will be 0 (success)
    if [ $? -eq 0 ]; then
        echo "$f in use -->"
    else
        echo "File $f not in use"
    fi
done
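To take the final step the question asks for - actually deleting the old files that are not in use - a minimal sketch along the same lines (DIR is an assumption; point it at the real directory, and dry-run with the rm commented out first):

```shell
# Delete files older than 5 days that no process currently has open.
# lsof exits 0 when the file is open, non-zero otherwise.
DIR="${DIR:-$HOME/var}"    # assumption: directory to clean
find "$DIR" -type f -mtime +5 2>/dev/null | while IFS= read -r f; do
    if lsof -n "$f" >/dev/null 2>&1; then
        echo "in use, keeping: $f"
    else
        echo "deleting: $f"
        rm -f "$f"
    fi
done
```

As written this assumes file names without newlines; `find -print0` with a `read -d ''` loop is more robust.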
In your script you had defined HOME as ~/var - I would stay away from using the tilde (~) in scripts. Secondly, you were changing an environment variable's value within your script. Try this from your command line:
env|grep HOME
This new method is a lot cleaner.
Now here is another pointer that may mean you need to make further changes...
Will your script be running as a cron job? Will the cron entry run as this existing user? If it is set to run as root then the above will fail. I will show you why:
echo $HOME
/home/myuser
sudo -i
echo $HOME
/root
Notice how the ~ or $HOME value has changed. So if you do decide to run it as a cron entry as another user, then try this:
scriptuser="your_user"
getent passwd $scriptuser|awk -F":" '{print $6}'
If it is the current user doing sudo su - or sudo -i and then executing the script, then try:
getent passwd $(logname)|awk -F":" '{print $6}'
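Putting the lookup to work in a script, a small sketch (scriptuser here is an example value):

```shell
# Resolve the home directory from the passwd database instead of
# trusting $HOME or ~, which change under sudo -i and cron.
scriptuser="root"    # example: the account the cron entry runs as
userhome=$(getent passwd "$scriptuser" | awk -F: '{print $6}')
echo "would operate on: $userhome/var"
```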

Related

bash script: Trying to redirect output to a new file created in the last day & 0 bytes in size

I've been working on a script which takes the output of the yum-cron command and redirects it to a new file. This part is working. However, if there are no new updates and no errors, an empty file is created. I decided to try to get the script to write a simple "no new updates" message to the most recently created file.
The script looks like this:
#!/bin/bash
# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable the yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
exit 0
fi
# Action!
/usr/sbin/yum-cron 2>&1 >> /var/log/yum-cron/yumCronUpdate-"$(date +"%F %T")"
# sleep in case yum-cron actually finds something to do?? Unsure if necessary??
sleep 30s
value=$(find /var/log/yum-cron/ -mtime -1 -size 0 -type f -iname "yumCronUpdate-2022*") | echo "Updates checked, no updates found on $(date + "%F +%T")" >> "$value"
I know that the script works up through line 12. In this form I get the error "line 13: No such file or directory." If I run line 13 from the terminal, the message is appended to the most recent "yumCronUpdate" file - so it seems to actually work. My guess is that the $value variable isn't getting set, as echo "$value" doesn't produce any output after the script has run.
If I modify the code to this:
#!/bin/bash
# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable the yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
exit 0
fi
# Action!
exec /usr/sbin/yum-cron 2>&1 >> /var/log/yum-cron/yumCronUpdate-"$(date +"%F %T")"
# sleep in case yum-cron actually finds something to do?? Unsure if necessary??
sleep 30s
echo "Updates checked, no updates found on $(date + "%F +%T")" >> $(find /var/log/yum-cron/ -mtime -1 -size 0 -type f -iname "yumCronUpdate-2022*")
I get the error "line 13: $(find /var/log/yum-cron/ -mtime -1 -size 0 -type f -iname "yumCronUpdate-2022*"): ambiguous redirect".
I'm not skilled at bash scripting, this is my first time (I'm used to PowerShell on Windows). I looked around at these topics as well:
Redirect bash output to file within the bash script
Getting an "ambiguous redirect" error
How do I redirect output to a variable in shell?
Any help would be appreciated!
This code is creating a pipeline.
value=$(find /var/log/yum-cron/ -mtime -1 -size 0 -type f -iname "yumCronUpdate-2022*") | \
echo "Updates checked, no updates found on $(date + "%F +%T")" >> "$value"
Basically:
The commands on the left-hand and right-hand sides get spawned in their own subshells. This is why you get no output from echo "$value" afterwards: variables set in a subshell are not visible from the parent process.
The output of the left-hand side is connected to the input of the right-hand side... except that value=$( command ) doesn't output anything, and even if it did, echo doesn't read from its standard input, so that output wouldn't be used at all.
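The subshell behaviour is easy to demonstrate in isolation (a minimal sketch, assuming bash):

```shell
# The assignment runs in a subshell because it is part of a pipeline,
# so the parent shell never sees the value.
value=$(echo hello) | cat
echo "after pipeline: '$value'"   # → after pipeline: ''
```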
Rather than trying to capture the file in a variable and reuse it, find -exec can detect the files and print things into them for you at the same time.
find /var/log/yum-cron/ -mtime -1 -size 0 -type f -iname "yumCronUpdate-2022*" \
    -exec /bin/bash -c 'echo "Updates checked, no updates found on $1" >> "$2"' \
    _ "$(date +"%F %T")" {} \;
For each file matching the criteria, this command spawns a small bash script that prints your message and the current date to the matched file.
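If you would rather keep a variable, a sketch without any pipeline also works (logdir and the head -n 1 tie-break are assumptions):

```shell
# Capture the first matching empty log file, then append only if one
# was actually found.
logdir="${logdir:-/var/log/yum-cron}"    # assumption: same path as above
value=$(find "$logdir" -mtime -1 -size 0 -type f -iname 'yumCronUpdate-*' 2>/dev/null | head -n 1)
if [ -n "$value" ]; then
    echo "Updates checked, no updates found on $(date +'%F %T')" >> "$value"
fi
```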

why is the 'ls' command printing the directory content multiple times

I have the following shell script, in which I want to check specific directory contents on remote machines and print them to a file.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line
        if echo $f1 | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 ls path/*war_* > path/$f1.txt < /dev/null; ls path/*zip_* >> path/$f1.txt < /dev/null;
            ssh user@$f1 ls -d /apps/jetty*_* >> path/$f1.txt < /dev/null;
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
I am running this script from a cron job every 2 minutes, as follows:
*/2 * * * * /path/myscript.sh
but somehow I am ending up with the following output file:
/apps/jetty/webapps_wars/test_new.war
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/jetty_xx.xx_2020-08-05
path/jetty_new
path/jetty_xx.xx_2020-08-05
path/jetty_new
I am not sure why I am getting the files in the list twice, sometimes 3 times. But when I execute the script directly from PuTTY, it works fine. What do I need to change to correct this script?
Example:
~$ cd tmp
~/tmp$ mkdir test
~/tmp$ cd !$
cd test
~/tmp/test$ mkdir -p apps/jetty/webapp_wars/ && touch apps/jetty/webapp_wars/test_new.war
~/tmp/test$ mkdir path
~/tmp/test$ touch path/{ReleaseTest.static.zip_2020-08-05,jetty_xx.xx_2020-08-05,jetty_new}
~/tmp/test$ cd ..
~/tmp$ listpath=$(find test/path \( -name "*2020-08-05" -o -name "*new" \) )
~/tmp$ listapps=$(find test/apps/ -name "*war" )
~/tmp$ echo ${listpath[@]}" "${listapps[@]} | tr " " "\n" | sort > resultfile
~/tmp$
~/tmp$ cat resultfile
test/apps/jetty/webapp_wars/test_new.war
test/path/jetty_new
test/path/jetty_xx.xx_2020-08-05
test/path/ReleaseTest.static.zip_2020-08-05
~/tmp$ rm -rf test/ && unset listapps && unset listpath && rm resultfile
~/tmp$
This way you get only one result for each pattern you are looking for in your if...then...else block of code.
Just adapt the ssh and find commands and take care of quotes and parentheses. This is the easiest solution, as you do not have to rewrite the script from scratch. And be careful with local vs. remote variables if you use them.
You really should not use ls, but the fundamental problem is probably that three separate commands with three separate wildcards can match the same file multiple times.
Also, one of your commands is executed locally (you forgot to put ssh etc. in front of the second one), so if the wildcard matches on your local computer, that produces a result which doesn't reflect the situation on the remote server.
Try this refactoring.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line <- XXX this is not true
        if echo "$f1" | grep -q "xx.xx.xxx";
        then
            ssh user@"$f1" "printf '%s\n' path/*war_* path/*zip_* /apps/jetty*_*" | sort -u >path/"$f1".txt < /dev/null
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
The sort gets rid of any duplicates. This assumes none of your file names contain newlines; if they do, you'd need to use something which robustly handles them (try printf '%s\0' and sort -z but these are not portable).
ls would definitely also accept three different wildcards, but as the link above explains, you really never want to use ls in scripts.
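The deduplication can be seen locally without ssh (the file names here are made up for the demonstration):

```shell
# Two overlapping wildcards match the same file twice; sort -u collapses
# the duplicates, which is exactly what happens to the remote listing.
tmp=$(mktemp -d)
touch "$tmp/app_war_1" "$tmp/app_zip_1"
printf '%s\n' "$tmp"/*war_* "$tmp"/*_1           # app_war_1 appears twice
printf '%s\n' "$tmp"/*war_* "$tmp"/*_1 | sort -u # each file once
rm -rf "$tmp"
```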

How to properly use command substitution in a select loop

The purpose of the script is to act as a directory navigator. Currently, I'm trying to print all the directories within the current directory using the fourth argument in the select loop. I understand that I need to use command substitution, but I do not understand how to properly implement the backticks.
#! /bin/bash
echo"###################################################################"
pwd | ls -l
#problem with bad substitution below inbetween backticks
select choice in quit back jump ${`ls -l | egrep '^d' | awk $9`};
do
case $choice in
"quit")
echo "Quitting program"
exit 0
break
;;
"back")
cd ..
echo "Your have gone back to the previous directory: " `pwd`
pwd
ls -l
;;
"jump")
echo "Enter the directory you want to move into"
read inputDir
if [[ -d $inputdir ]]; then
cd $inputDir
pwd
ls -l
else
echo "Your input is not a directory, Please enter correct Di$
fi
;;
${ls -l | egrep '^d' | awk $9})
esac
done
You should really look at using shellcheck to lint your shell scripts.
I use mapfile to create an array based on the output. I also use find instead of ls because it handles non-alphanumeric filenames better.
I then create an array with that output appended. There are different ways to do it, but this is the most straightforward.
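A tiny standalone illustration of the mapfile step (the directory names are made up): each line of output becomes one array element, even when it contains spaces, which is what makes this safer than word-splitting the output of ls.

```shell
#!/bin/bash
# mapfile -t reads lines into an array, stripping the trailing newlines.
mapfile -t dirs < <(printf '%s\n' "dir one" "dir two")
echo "${#dirs[@]}"   # → 2
echo "${dirs[1]}"    # → dir two
```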
#! /bin/bash
echo "###############################################################"
pwd # Your script had a |, it doesn't do anything since ls -l doesn't take
    # input from stdin. I separated them, because that's probably what you want
ls -l
mapfile -t output < <(find . -maxdepth 1 -type d -not -name '.*' | sed -e 's/^\.\///')
choices=(quit back jump "${output[@]}")
select choice in "${choices[@]}"; do
    case $choice in
        "quit")
            echo "Quitting program"
            exit 0
            ;;
        "back")
            cd ..
            echo "You have gone back to the previous directory: $(pwd)"
            pwd
            ls -l
            ;;
        "jump")
            echo "Enter the directory you want to move into"
            read -r inputDir
            if [[ -d $inputDir ]]; then
                cd "$inputDir" || exit
                pwd
                ls -l
            else
                echo "Your input is not a directory, Please enter correct Di$"
            fi
            ;;
    esac
done

cron not able to run the commands in shell

I am trying to run the following cron job from bash (RHEL 7.4); it is an entry-level postgres DB backup script I wrote:
#!/bin/bash
# find latest file
echo $PATH
cd /home/postgres/log/
echo "------------ backup starts-----------"
latest_file=$( ls -t | head -n 1 | grep '\.log$' )
echo "latest file"
echo $latest_file
# find files older than the above
echo "old file"
old_file=$( find . -maxdepth 1 -name "postgresql*" ! -newer $latest_file -mmin +1 )
if [ -f "$old_file" ]
then
    echo $old_file
    file_name=${old_file##*/}
    echo "file name"
    echo $file_name
    # zip older file
    tar czvf /home/postgres/log/archived_logs/$old_file.gz /home/postgres/log/$file_name
    rm -rf /home/postgres/log/$file_name
else
    echo "no old file found"
fi
The above runs correctly from a shell and performs the intended tasks. It also echoes the needed info.
I have installed it for the postgres user (not root) with crontab -e:
*/2 * * * * /home/postgres/log/rollup.sh >> /home/postgres/log/logfile.csv 2>&1
It correctly echoes the text I embedded for testing, but the commands' output does not go to the .csv. That is not my main concern, though; my concern is that it is not running those few commands at all.
I gave it another try by changing the log file (.csv) path to /dev/null, and then the commands in the shell script do execute. I am not getting what I am missing here.
The .csv file has 777 permissions, just for testing.
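A hedged debugging sketch (an assumption, not a confirmed diagnosis): a frequent cause of scripts that work interactively but fail under cron is cron's minimal environment - PATH there is typically just /usr/bin:/bin, so anything living elsewhere is "not found". Logging what the cron run actually sees makes the difference visible:

```shell
# Append the environment cron actually provides to a log file; compare it
# with the same commands run from an interactive shell.
debuglog=/tmp/cron_env_debug.log    # hypothetical log location
{
    echo "=== run at $(date) ==="
    echo "PATH=$PATH"
    echo "HOME=$HOME"
} >> "$debuglog"
```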

bash check for subdirectories under directory

This is my first day scripting. I use Linux but needed a script, and I have been racking my brain until I finally asked for help. I need to check a directory that already has directories present, to see if any new directories are added that are not expected.
OK, I think I have got this as simple as possible. The below works but displays all files in the directory as well. I will keep working at it unless someone can tell me how not to list the files too. I tried ls -d but then it just does the echo "nothing new". I feel like an idiot and should have got this sooner.
#!/bin/bash
workingdirs=`ls ~/ | grep -viE "temp1|temp2|temp3"`
if [ -d "$workingdirs" ]
then
    echo "nothing new"
else
    echo "The following directories are now present"
    echo ""
    echo "$workingdirs"
fi
If you want to take some action when a new directory is created, use inotifywait. If you just want to check that the directories that exist are the ones you expect, you could do something like:
trap 'rm -f $TMPDIR/manifest' 0
# Create the expected values. Really, you should hand edit
# the manifest, but this is just for demonstration.
find "$Workingdir" -maxdepth 1 -type d > $TMPDIR/manifest
while true; do
    sleep 60 # Check every 60 seconds. Modify period as needed, or
             # (recommended) use inotifywait
    if ! find "$Workingdir" -maxdepth 1 -type d | cmp - $TMPDIR/manifest; then
        : Unexpected directories exist or have been removed
    fi
done
The shell script below will show whether each directory is present or not.
#!/bin/bash
Workingdir=/root/working/
knowndir1=/root/working/temp1
knowndir2=/root/working/temp2
knowndir3=/root/working/temp3
my=/home/learning/perl
arr=($Workingdir $knowndir1 $knowndir2 $knowndir3 $my) # creating an array
for i in ${arr[@]} # checking for each element in array
do
    if [ -d $i ]
    then
        echo "directory $i present"
    else
        echo "directory $i not present"
    fi
done
output:
directory /root/working/ not present
directory /root/working/temp1 not present
directory /root/working/temp2 not present
directory /root/working/temp3 not present
directory /home/learning/perl present
This will save the available directories in a list to a file. When you run the script a second time, it will report directories that have been deleted or added.
#!/bin/sh
dirlist="$HOME/dirlist" # dir list file for saving state between runs
topdir='/some/path'     # the directory you want to keep track of
tmpfile=$(mktemp)
find "$topdir" -type d -print | sort -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
    echo 'Directories added:'
    comm -1 -3 "$dirlist" "$tmpfile"
    echo 'Directories removed:'
    comm -2 -3 "$dirlist" "$tmpfile"
else
    echo 'No changes'
fi
mv "$tmpfile" "$dirlist"
The script will have problems with directories that have very exotic names (containing newlines).
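The comm logic can be checked in isolation (both inputs must be sorted, as they are here):

```shell
# -1 -3 keeps lines unique to the second (new) file: additions.
# -2 -3 keeps lines unique to the first (old) file: removals.
old=$(mktemp); new=$(mktemp)
printf '%s\n' a b c > "$old"
printf '%s\n' b c d > "$new"
comm -1 -3 "$old" "$new"   # → d
comm -2 -3 "$old" "$new"   # → a
rm -f "$old" "$new"
```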
