Bash: check for subdirectories under a directory

This is my first day scripting. I use Linux, but I needed a script and have been racking my brain until I finally asked for help. I need to check a directory that already contains some known directories and see whether any new, unexpected directories have been added.
OK, I think I have got this as simple as possible. The script below works, but it lists all the files in the directory as well. I will keep working at it unless someone can tell me how not to list the files too. I tried ls -d, but then it just echoes "nothing new". I feel like an idiot and should have got this sooner.
#!/bin/bash
workingdirs=`ls ~/ | grep -viE "temp1|temp2|temp3"`
if [ -d "$workingdirs" ]
then
    echo "nothing new"
else
    echo "The following directories are now present"
    echo ""
    echo "$workingdirs"
fi

If you want to take some action when a new directory is created, use inotifywait. If you just want to check that the directories that exist are the ones you expect, you could do something like:
trap 'rm -f "$TMPDIR/manifest"' 0
# Create the expected values. Really, you should hand edit
# the manifest, but this is just for demonstration.
find "$Workingdir" -maxdepth 1 -type d > "$TMPDIR/manifest"
while true; do
    sleep 60    # Check every 60 seconds. Modify the period as needed, or
                # (recommended) use inotifywait
    if ! find "$Workingdir" -maxdepth 1 -type d | cmp -s - "$TMPDIR/manifest"; then
        : Unexpected directories exist or have been removed
    fi
done
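For the event-driven route, here is a minimal sketch with inotifywait (this assumes the inotify-tools package is installed; the watched path is a placeholder):
#!/bin/bash
Workingdir=~/watched    # placeholder path; point this at the directory to monitor
# -m keeps monitoring; only creations and deletions are reported, and the
# case statement keeps only events on directories (ISDIR)
inotifywait -m -e create -e delete --format '%e %w%f' "$Workingdir" |
while read -r event path; do
    case $event in
        *ISDIR*) echo "directory change: $event $path" ;;
    esac
done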

The shell script below will show whether each directory is present or not.
#!/bin/bash
Workingdir=/root/working/
knowndir1=/root/working/temp1
knowndir2=/root/working/temp2
knowndir3=/root/working/temp3
my=/home/learning/perl
arr=($Workingdir $knowndir1 $knowndir2 $knowndir3 $my)  # create an array of the expected paths
for i in "${arr[@]}"   # check each element of the array
do
    if [ -d "$i" ]
    then
        echo "directory $i present"
    else
        echo "directory $i not present"
    fi
done
output:
directory /root/working/ not present
directory /root/working/temp1 not present
directory /root/working/temp2 not present
directory /root/working/temp3 not present
directory /home/learning/perl present

This saves the existing directories as a list in a file. When you run the script a second time, it will report directories that have been added or removed.
#!/bin/sh
dirlist="$HOME/dirlist" # dir list file for saving state between runs
topdir='/some/path' # the directory you want to keep track of
tmpfile=$(mktemp)
find "$topdir" -type d -print | sort -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
echo 'Directories added:'
comm -1 -3 "$dirlist" "$tmpfile"
echo 'Directories removed:'
comm -2 -3 "$dirlist" "$tmpfile"
else
echo 'No changes'
fi
mv "$tmpfile" "$dirlist"
The script will have problems with directories that have very exotic names (containing newlines).
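If you do need to cope with such names, here is a sketch using NUL-terminated records instead of newlines (assumes GNU find and sort, and a comm that supports -z / --zero-terminated, coreutils 8.25 or later):
#!/bin/sh
dirlist="$HOME/dirlist"
topdir='/some/path'
tmpfile=$(mktemp)
# -print0 and -z keep each directory name as one NUL-terminated record
find "$topdir" -type d -print0 | sort -z -o "$tmpfile"
if [ -f "$dirlist" ] && ! cmp -s "$dirlist" "$tmpfile"; then
    echo 'Directories added:'
    comm -z -1 -3 "$dirlist" "$tmpfile" | tr '\0' '\n'
    echo 'Directories removed:'
    comm -z -2 -3 "$dirlist" "$tmpfile" | tr '\0' '\n'
else
    echo 'No changes'
fi
mv "$tmpfile" "$dirlist"
Names that contain newlines will still print across several lines, but the comparison itself is exact.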

Related

Bash script backup, check if directory contains the files from another directory

I am writing a bash backup script and I want to add a check for whether the files from one directory are already contained in another directory; if they are not, I want to output the names of those files.
#!/bin/bash
TARGET_DIR=$1
INITIAL_DIR=$2
TARG_ls=$(ls -A $1)
INIT_ls=$(ls -A $2)
if [[ "$(ls -A $2)" ]]; then
    if [[ ! -n "$(${TARG_ls} | grep ${INIT_ls})" ]]; then
        echo All files in ${INITIAL_DIR} have backups for today in ${TARGET_DIR}
        exit 0
    else
        : # code for listing the missing files
    fi
else
    echo Error!! ${INITIAL_DIR} has no files
    exit 1
fi
I have thought about storing the ls output of both directories in strings and comparing them, as shown in the code, but when it comes to listing the files from INITIAL_DIR that are missing in TARGET_DIR, I just don't know how to proceed.
I tried using the diff command to compare the two directories, but that takes into account the pre-existing files of TARGET_DIR.
In the code above, if [[ "$(ls -A $2)" ]]; checks whether INITIAL_DIR contains any files, and if [[ ! -n "$(${TARG_ls} | grep ${INIT_ls})" ]]; is meant to check whether the target directory contains all of the initial directory's files.
Anyone have a suggestion, hint?
You can use the comm command:
$ comm <(ls -A a) <(ls -A b)
This prints three columns: files only in a, files only in b, and files common to both. Note that comm expects sorted input. To get the list of files only in a, for example:
$ comm -23 <(ls -A a) <(ls -A b)
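Tying that back to the script in the question, a sketch of the missing-file check might look like this (process substitution requires bash; the variable names are the ones from the question):
#!/bin/bash
TARGET_DIR=$1
INITIAL_DIR=$2
# Names present in INITIAL_DIR but absent from TARGET_DIR
missing=$(comm -23 <(ls -A "$INITIAL_DIR" | sort) <(ls -A "$TARGET_DIR" | sort))
if [ -z "$missing" ]; then
    echo "All files in $INITIAL_DIR have backups for today in $TARGET_DIR"
else
    echo "Missing from $TARGET_DIR:"
    printf '%s\n' "$missing"
fi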
rsync has a --dry-run switch that will show you which files differ between two directories. Before doing rsync copies of my home directory, I preview the changes this way to check for evidence of mass encryption or corruption before proceeding.
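For example, a preview of such a sync might look like this (the paths are placeholders; the trailing slashes matter to rsync):
# -a archive mode, -v verbose, -n (--dry-run) reports changes without copying anything
rsync -avn --delete /home/user/ /backup/home/user/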

Change date modified of multiple folders to match that of their most recently modified file

I've been using the following bash script as an app that I can drop a folder on, and it updates the date modified of the folder to match the most recently modified file in that folder.
for f in each "$#"
do
echo "$f"
done
$HOME/setMod "$#"
This gets the folder name, and then passes it to this setMod script in my home folder.
#!/bin/bash
# Check that exactly one parameter has been specified - the directory
if [ $# -eq 1 ]; then
    # Go to that directory or give up and die
    cd "$1" || exit 1
    # Get name of newest file
    newest=$(stat -f "%m:%N" * | sort -rn | head -1 | cut -f2 -d:)
    # Set modification date of folder to match
    touch -r "$newest" .
fi
However, if I drop more than one folder on it at a time, it won't work, and I can't figure out how to make it work with multiple folders at once.
Also, I learned from Apple Support that the reason so many of my folders keep getting their modification dates updated is some Time Machine-related process, even though I haven't touched some of them in years. If anyone knows of a way to prevent this, or to automatically and periodically update the date modified of folders to match the date/time of their most recently modified file, that would save me from having to run this step manually on a regular basis.
The setMod script currently accepts only one parameter.
You could either make it accept many parameters and loop over them,
or you could make the calling script use a loop.
I take the second option, because the caller script has some mistakes and weak points. Here it is corrected and extended for your purpose:
for dir; do
    echo "$dir"
    "$HOME"/setMod "$dir"
done
Or to make setMod accept multiple parameters:
#!/bin/bash
setMod() {
    cd "$1" || return 1
    # Get name of newest file
    newest=$(stat -f "%m:%N" * | sort -rn | head -1 | cut -f2 -d:)
    # Set modification date of folder to match
    touch -r "$newest" .
}

for dir; do
    if [ ! -d "$dir" ]; then
        echo "not a directory, skipping: $dir"
        continue
    fi
    (setMod "$dir")
done
Notes:
for dir; do is equivalent to for dir in "$@"; do
The parentheses around (setMod "$dir") make it run in a subshell, so the script itself doesn't change its working directory; the effect of the cd is limited to the subshell within (...).
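One caveat worth noting: stat -f "%m:%N" is the BSD/macOS form of stat, which fits the Time Machine context here. On GNU/Linux the equivalent line would be roughly:
newest=$(stat -c '%Y:%n' -- * | sort -rn | head -1 | cut -f2 -d:)   # GNU stat: %Y is the mtime, %n the file name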

shell backup script renaming

I was able to script the backup process, but now I want to make another script for my storage server that does a basic file rotation.
What I want to make:
I want to store my files in my /home/user/backup folder, keep only the 10 most recent backup files, and name them like this:
site_foo_date_1.tar site_foo_date_2.tar ... site_foo_date_10.tar
with site_foo_date_1.tar being the most recent backup file.
Anything past number 10 should be deleted.
My incoming files from the other server are simply named like this: site_foo_date.tar
How can I do this?
I tried:
DATE=`date "+%Y%m%d"`
cd /home/user/backup/com
if [ -f site_com_*_10.tar ]
then
rm site_com_*_10.tar
fi
FILES=$(ls)
for file in $FILES
do
echo "$file"
if [ "$file" != "site_com_${DATE}.tar" ]
then
str_new=${file:18:1}
new_str=$((str_new + 1))
to_rename=${file::18}
mv "${file}" "$to_rename$new_str.tar"
fi
done
file=$(ls | grep site_com_${DATE}.tar)
filename=`echo "$file" | cut -d'.' -f1`
mv "${file}" "${filename}_1.tar"
The main issue with your code is that looping through all the files in the directory with an unfiltered ls is a dangerous thing to do.
Instead, I've used for i in $(seq 9 -1 1) to loop through files from *_9 to *_1 to move them. This ensures we only move backup files, and nothing else that may have accidentally got into the backup directory.
Additionally, relying on the sequence number to be the 18th character in the filename is also destined to break. What happens if you want more than 10 backups in the future? With this design, you can change 9 to be any number you like, even if it's more than 2 digits.
Finally, I added a check before moving site_com_${DATE}.tar in case it doesn't exist.
#!/bin/bash
DATE=`date "+%Y%m%d"`
cd "/home/user/backup/com"
if [ -f "site_com_*_10.tar" ]
then
rm "site_com_*_10.tar"
fi
# Instead of wildcarding all files in the directory
# this method picks out only the expected files so non-backup
# files are not changed. The renumbering is also made easier
# this way.
# Loop through from 9 to 1 in descending order otherwise
# the same file will be moved on each iteration
for i in $(seq 9 -1 1)
do
    # Find and expand the requested file
    file=$(find . -maxdepth 1 -name "site_com_*_${i}.tar")
    if [ -f "$file" ]
    then
        echo "$file"
        # Create new file name
        new_str=$((i + 1))
        to_rename=${file%_${i}.tar}
        mv "${file}" "${to_rename}_${new_str}.tar"
    fi
done
# Check for latest backup file
# and only move it if it exists.
file=site_com_${DATE}.tar
if [ -f "$file" ]
then
    filename=${file%.tar}
    mv "${file}" "${filename}_1.tar"
fi

Passing a path as an argument to a shell script

I've written a bash script that opens a file passed as an argument and writes it into another file, but my script works properly only if the file is in the current directory. Now I need it to handle files that are not in the current directory as well.
If compile is the name of my script, then ./compile next/123/file.txt should open the file.txt at the given path. How can I do this?
#!/bin/sh
#FIRST SCRIPT
clear
echo "-----STARTING COMPILATION-----"
#echo $1
name=$1 # Copy the filename to name
find . -iname $name -maxdepth 1 -exec cp {} $name \;
new_file="tempwithfile.adb"
cp $name $new_file #copy the file to new_file
echo "compiling"
dir >filelist.txt
gcc writefile.c
run_file="run_file.txt"
echo $name > $run_file
./a.out
echo ""
echo "cleaning"
echo ""
make clean
make -f makefile
./semantizer -da <withfile.adb
Your code and your question are a bit messy and unclear.
It seems that you intended to find the file given as a parameter to your script, but this fails because of the -maxdepth option.
If you are given next/123/file.txt as an argument, your find gives you a warning:
find: warning: you have specified the -maxdepth option after a
non-option argument -iname, but options are not positional (-maxdepth
affects tests specified before it as well as those specified after
it). Please specify options before other arguments.
Also, -maxdepth limits how many directory levels deep find will descend before it gives up; next/123/file.txt is two directory levels deep, so -maxdepth 1 can never reach it.
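For reference, a sketch of a find call with the options placed before the tests and a depth large enough to reach the file (the depth of 3 is just for illustration, and keep in mind that -iname matches only the file's basename, not a path containing slashes):
name=$(basename "$1")                      # e.g. file.txt
find . -maxdepth 3 -iname "$name" -print   # no warning: -maxdepth comes before the tests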
You are also trying to copy the given file from within find, and then copy it again with cp afterwards.
As said, your code is really messy and I don't know what you are trying to do. I will gladly help, if you could elaborate :).
There are some questions that are open:
Why do you have to find the file, if you already know its path? Do you always have the whole path given as an argument? Or only part of the path? Only the basename ?
Do you simply want to copy a file to another location?
What does your writefile.c do? Does it write the content of your file to another? cp does that already.
I also recommend using variables with CAPITALIZED letters and checking the exit status of used commands like cp and find, to check if these failed.
Anyway, here is my script that might help you:
#!/bin/sh
#FIRST SCRIPT
clear
echo "-----STARTING COMPILATION-----"
echo "FILE: $1"
[ $# -ne 1 ] && echo "Usage: $0 <file>" 1>&2 && exit 1
FILE="$1" # Copy the filename to name
FILE_NEW="tempwithfile.adb"
cp "$FILE" "$FILE_NEW" # Copy the file to new_file
[ $? -ne 0 ] && exit 2
echo
echo "----[ COMPILING ]----"
echo
dir &> filelist.txt # list directory contents and write to filelist.txt
gcc writefile.c # ???
FILE_RUN="run_file.txt"
echo "$FILE" > "$FILE_RUN"
./a.out
echo
echo "----[ CLEANING ]----"
echo
make clean
make -f makefile
./semantizer -da < withfile.adb

Remove the bottom level from a path

I am writing a script where I want to check if a folder relative to the script's working directory is in the shell's path.
For example, if the project structure is:
top/
  bin/
  tests/
    test1/
      foo.sh
    test2/
      foo.sh
I want to check if top/bin is in PATH from either foo.sh. I know the following will work:
cd ../../bin
if echo $PATH|grep `pwd`; then
    echo "Success"
else
    echo "Failure"
fi
But then I have to keep track of what directory I started in so I can cd back there. Can I do something like chopping off the last two directories after pwd and then appending bin to that? Is there some other intelligent way to handle this?
As a bonus, I'd like to make this script robust against additional directory levels inside tests, but that's not strictly necessary if it complicates things.
# Exits with code 0 if the first argument is on the user's path.
is_on_path() {
    CANONICAL_NAME=$(readlink -f "$1")
    # The trailing colon ensures the last PATH entry is read as well
    while read -r -d: COMPONENT; do
        if [[ "$CANONICAL_NAME" = "$COMPONENT" ]]; then
            return 0
        fi
    done <<< "$PATH:"
    return 1
}
readlink -f will evaluate path components like ../.. as needed.
read -r -d: COMPONENT reads the input one colon-delimited entry at a time into the COMPONENT variable.
<<< "$PATH:" feeds the path (with a trailing colon appended so the final entry is also read) into the loop's stdin.
Now you can call that like this:
if is_on_path "../../bin"; then
echo "Success!"
else
echo "Failure!"
fi
Bonus: From here it's pretty easy to recursively apply this to all parent directories:
find_matching_ancestor() {
    current_dir=$(pwd)
    while [[ "$current_dir" != "/" ]]; do
        if is_on_path "$current_dir/bin"; then
            echo "$current_dir"
            return 0
        fi
        current_dir=$(dirname "$current_dir")
    done
    return 1
}
This seems to work
echo "$PATH" | xargs -d: -n1 | grep -Fx "$(readlink -f ../../bin)" | wc -l
Perhaps the most reliable solution would be to put a file with its executable bit set into that directory and use "which".
For example, is /bin on my PATH?
which ls
/bin/ls
