Find files and tar in a for if statement - bash

I'm trying to create an if statement that searches a location for a username's tar file, to see if one has already been created; if not, it should create that tar in the location. For some reason, my find runs the then echo branch regardless of whether a file exists in that location or not.
USER_LIST="$(awk '{print $3}' usernamefile.txt)"
for USER_NAME in $USER_LIST; do
    echo $USER_NAME
    if find /location/to/store/tarfile -type f -iname $USER_NAME;
    then
        echo "tar file has been found for" $USER_NAME "/location/to/store/tarfile" `date` >> /logfile/log.txt
    else
        FILE_LOC="$(awk -v $USER_NAME=$3 '{print $5;}' usernamefile.txt)"
        tar -czvf ${USER_NAME}.tar.gz /location/to/put/tar/file $FILE_LOC
        echo "tar exit code:" $? $USER_NAME "has been archived" `date` >> /logfile/log.txt
    fi
done
I'm not sure why, but even when find doesn't find anything, the else branch never runs. Surely it should move on to the else part of the script? The plan is to create tar files named <username>.tar.gz.

It appears that using the test command would work best for my needs, with either the -f or the -e flag.

The point is that you don't know how to capture the result of find: it prints the list of files obeying the conditions you add to the find statement, but its exit status is 0 whether or not anything matched. So the best thing to do is to count the matches: if there are none, then there is no such file, and you get something like:
if [ "$(find /location/to/store/tarfile -type f -iname "$USER_NAME" | wc -l)" -gt 0 ]
then
...
The | wc -l counts the files found. If at least one file was found, the count is greater than 0, and the test is true.
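A minimal runnable sketch of that idea (a temporary directory stands in for the question's /location/to/store/tarfile path): rather than relying on find's exit status, test whether it printed anything at all.

```shell
#!/bin/bash
# Demo: a temp dir stands in for /location/to/store/tarfile.
tarlocation=$(mktemp -d)
USER_NAME="alice"

check() {
    # [ -n ... ] is true when find printed at least one matching path;
    # find's own exit status is 0 either way, which is why "if find ..." misbehaves.
    if [ -n "$(find "$tarlocation" -type f -iname "${USER_NAME}.tar.gz")" ]; then
        echo "found"
    else
        echo "missing"
    fi
}

check                               # prints: missing
touch "$tarlocation/alice.tar.gz"
check                               # prints: found
rm -r "$tarlocation"
```

Testing output emptiness with [ -n ] avoids spawning wc at all, and behaves the same way.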

Issue with shell script running on crontab: \n doesn't work

I am working on a cron job to check and recover .ark files if required. I need to get the biggest .ark file, and if TheIsland.ark is smaller, automatically back it up and copy the biggest one over it. While I have this working outside of crontab, one part of the script fails.
Which is:
actualmap=$(find $PWD -type f -printf '%p\n' -name "*.ark"| sort -nr | head -1)
If I remove the \n it actually works, but then sort cannot distinguish the results because they are not on separate lines.
The output I get on cron job with \n is:
/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/recovery.sh (which is the recovery script)
The same line of code ran in terminal produces the correct output of:
/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/TheIsland_NewLaunchBackup.bak
Without \n using crontab I get:
/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/TheIsland_27.06.2019_21.46.20.ark/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/TheIsland_28.06.2019_15.15.34.ark
I have attached the full code, which works when run manually.
#!/bin/bash
export DISPLAY=:0.0
##ARK Map Recovery Script
cd /srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks
#Check file size of current ark map
file=TheIsland.ark
echo $file
currentsize=$(wc -c <"$file")
echo $currentsize
#Find biggest map file.
actualmap=$(find $PWD -type f -printf '%p\n' -name "*.ark"| sort -nr | head -1)>/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/log.txt
echo $PWD
echo $actualmap
biggestsize=$(wc -c < "$actualmap")
echo $biggestsize
if [ $currentsize -ge $biggestsize ]; then
    echo No map recovery required as over $biggestsize bytes
else
    echo Uh Oh! size is under $biggestsize bytes Attempting map recovery
    echo Checking for Backup dir and creating if necessary
    mkdir -p BackupFiles
    #Move old map into backup dir in the saved location
    echo Moving old Map File to backup dir
    mv $file BackupFiles
    #Stop server using docker commands
    echo Stopping servers
    docker kill da4aaa1b-0ce9-46d2-bd60-5f599cc089ae
    #Copy biggest map file with correct name
    echo Copying backup file
    cp $actualmap $file
fi
Using the -printf option to the find command is not required here; -print will do just fine. Note also that -printf and -print are actions: placed before -name, the action fires for every file before the name test is even applied, which is why your filter seems to be ignored.
I obtain the result you want (find returning the found filenames, one per line) with this:
find $PWD -type f -name "*.ark" -print
With -printf, %p gives you the filename anyway.
From man find: -print  True; print the full file name on the standard output, followed by a newline.
The -print option already does what you want to do.
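To make the ordering point concrete, a small demonstration in a throwaway temp directory (the real script would use the SavedArks path): with the action before the -name test, every file is printed; with the test first, only the .ark files are.

```shell
#!/bin/bash
# Demo of why action/test order matters in find (GNU find for -printf).
d=$(mktemp -d)
touch "$d/TheIsland_1.ark" "$d/TheIsland_2.ark" "$d/notes.txt"

# Wrong order: -printf is an always-true action, so it fires for all files
# of type f before -name ever filters anything.
wrong=$(find "$d" -type f -printf '%p\n' -name "*.ark" | wc -l)

# Right order: filter first, then print (plain -print adds the newline itself).
right=$(find "$d" -type f -name "*.ark" -print | wc -l)

echo "wrong order matched: $wrong"   # 3 (notes.txt included)
echo "right order matched: $right"   # 2
rm -r "$d"
```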

c shell script: find directory and rename the output of find

I am still new to shell scripting, and I have been given a task that I am finding difficult to execute.
I have a lot of directories named by date as ddMonyyyy:
03Mar2014 08Aug2013 11Jan2015 16Jan2014 22Feb2014 26Mar2014
03Nov2013 08Jan2014 11Jul2013 16Jul2013 22Jul2013 26Oct2014
03Oct2013 08Jan2015 11Nov2014 16May2014 22Mar2014 26Sep2013
The task is to rename each directory to just the Monyyyy part.
So far, my code is
foreach file (`find . -type d | awk -F/ 'NF == 3'`)
echo $file
set newmove = `echo $file | cut -c 1-2,5-`
echo $newmove
mv $file $newmove
end
output:
for find . -type d | awk -F/ 'NF == 3':
./24Jan2015/W51A
for echo $file | cut -c 1-2,5-:
./Jan2015/W51A
for mv $file $newmove:
mv: cannot rename ./24Jan2015/W51A to ./Jan2015/W51A: No such file or directory
So the script didn't work.
Do you guys have any idea how to do this?
First of all, the issue is that you're trying to move the subdirectory (./24Jan2015/W51A) to a destination (./Jan2015/W51A) whose parent directory doesn't exist yet, hence the error.
As I understand it, the idea is to rename the folders in the current working directory to the desired format, and by doing so merge the contents of folders sharing the same Monyyyy part (since 08Jan2014 and 16Jan2014 will both be renamed to Jan2014).
you can try this:
foreach dir ( `ls` )
set newdir = `echo $dir| cut -c 3-`
mkdir -p $newdir
mv -f $dir/* $newdir
rmdir $dir
end
-p will create the folder and will do nothing if the folder already exists.
Some assumptions:
1. The folders are all in the same place, which is pwd
2. There are always two leading digits in the date
3. You want to merge the contents of folders with the same Monyyyy format
4. Files with the same name in different folders will be overwritten
5. There are only folders in pwd (add a check if that's not the case)
In case these folders are in different places (which is not the case according to your output: ./24Jan2015) and collision is not an issue, the code should be changed to:
- Use find
- Create the new folder with the correct path
No merge or overwrite will occur, so assumptions 1, 3, 4 and 5 are not relevant.
UPDATE:
After additional input: if I understand correctly, your find is looking only for folders two levels deep (awk 'NF == 3' keeps paths with three /-separated fields, such as ./24Jan2015/W51A). I can't say why, but you can achieve the same much faster with
find . -mindepth 2 -maxdepth 2 -type d
The output is the list of directories sitting exactly one level inside another directory.
Then you need to get the second folder's name (assuming it will always be of the expected format).
set olddir = `echo $file| cut -f 1-3 -d '/'`
set newdir = `echo $olddir | cut -c 1-2,5-`
and finally
foreach file(`find . -mindepth 2 -maxdepth 2 -type d`)
set olddir = `echo $file| cut -f 1-3 -d '/'`
set newdir = `echo $olddir | cut -c 1-2,5-`
mkdir -p $newdir
mv -f $file $newdir
end
This will also handle the case if two folders were found under the same path.
UPDATE 2:
Since the script will run on Unix, the following updates should be made:
- The original find was brought back, since that Unix find lacks the -mindepth/-maxdepth options
- We should try to rmdir olddir to clean up the empty folders; it will fail if the folder is not empty, but the script will continue to run
foreach file(`find . -type d | awk -F/ 'NF == 3'`)
set olddir = `echo $file| cut -f 1-2 -d '/'`
set newdir = `echo $olddir | cut -c 1-2,5-`
set dir_name=`basename "$file"`
if ( -d "$newdir/$dir_name" ) then
mv -f $file/* $newdir/$dir_name/
else
mkdir -p $newdir
mv -f $file $newdir
endif
rmdir $olddir
end
I really think c shell is the wrong tool for just about anything that involves programming. That said, this looks like it would do what you want with only a little help from an external tool:
#!/bin/csh
foreach file ([0-9][0-9][A-Z][a-z][a-z][0-9][0-9][0-9][0-9])
set new = `echo $file:q | cut -c 3-`
if ( -d "$new" ) then
echo "skipping $file because $new already exists"
continue
endif
mv -v "$file" "$new"
end
Note the glob that matches your list of directories to rename. This script isn't bothering to confirm whether the matched files ARE in fact directories, so if there's the possibility they might not be, you should account for that somehow.
Note that we are using a back-quoted expression to use an external tool, cut to grab a substring from each directory name. We use this (as you did in your question) because CSH IS NOT A PROGRAMMING LANGUAGE, and has no string processing capabilities of its own.
The if statement within the loop will skip any directory whose target already exists. So for example, if you were to go through the top row of your input in your question, 26Mar2014 would be converted to Mar2014, which already exists due to 03Mar2014. Since you haven't specified how this should be handled, this script skips that condition.
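For comparison, and since csh really is a poor fit here, a hedged bash sketch of the same rename-with-merge (demonstrated in a temporary directory so it is safe to run; it assumes the ddMonyyyy folders sit in the current directory and that merging is wanted):

```shell
#!/bin/bash
# Demo setup in a temp dir; the loop itself works in any directory
# containing ddMonyyyy folders.
work=$(mktemp -d) && cd "$work"
mkdir -p 03Mar2014 26Mar2014 11Jan2015
touch 03Mar2014/a.txt 26Mar2014/b.txt

for dir in [0-9][0-9][A-Z][a-z][a-z][0-9][0-9][0-9][0-9]; do
    [ -d "$dir" ] || continue          # glob safety: skip non-directories
    new=${dir:2}                       # strip the two leading day digits
    mkdir -p "$new"                    # create target (no-op if it exists)
    mv -f "$dir"/* "$new"/ 2>/dev/null # merge contents; empty dirs have nothing to move
    rmdir "$dir"
done

ls    # now only Jan2015 and Mar2014 remain; Mar2014 holds a.txt and b.txt
```

Unlike the csh version, bash's ${dir:2} does the substring work without spawning cut for every directory.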

Please help. I need to add a log file to this multi-threaded rsync script

# Define source, target, maxdepth and cd to source
source="/media"
target="/tmp"
depth=20
cd "${source}"
# Set the maximum number of concurrent rsync threads
maxthreads=5
# How long to wait before checking the number of rsync threads again
sleeptime=5
# Find all folders in the source directory within the maxdepth level
find . -maxdepth ${depth} -type d | while read dir
do
    # Make sure to ignore the parent folder
    if [ `echo "${dir}" | awk -F'/' '{print NF}'` -gt ${depth} ]
    then
        # Strip leading dot slash
        subfolder=$(echo "${dir}" | sed 's#^\./##g')
        if [ ! -d "${target}/${subfolder}" ]
        then
            # Create destination folder and set ownership and permissions to match source
            mkdir -p "${target}/${subfolder}"
            chown --reference="${source}/${subfolder}" "${target}/${subfolder}"
            chmod --reference="${source}/${subfolder}" "${target}/${subfolder}"
        fi
        # Make sure the number of rsync threads running is below the threshold
        while [ `ps -ef | grep -c [r]sync` -gt ${maxthreads} ]
        do
            echo "Sleeping ${sleeptime} seconds"
            sleep ${sleeptime}
        done
        # Run rsync in background for the current subfolder and move on to the next one
        nohup rsync -au "${source}/${subfolder}/" "${target}/${subfolder}/" \
            </dev/null >/dev/null 2>&1 &
    fi
done
# Find all files above the maxdepth level and rsync them as well
find . -maxdepth ${depth} -type f -print0 | rsync -au --files-from=- --from0 ./ "${target}/"
Thank you for all your help. By adding the -v switch to rsync, I solved the problem.
Not sure if this is what you are after (I don't know what rsync is), but can you not just run the script as,
./myscript > logfile.log
or
./myscript | tee logfile.log
(ie: pipe to tee if you want to see the output as it goes along)?
Alternatively (not sure this is what real coders do), you could append the output of each command in the script to a logfile, e.g.:
#at the beginning define the logfile name:
logfile="logfile"
#remove the file if it exists
if [ -e ${logfile}.log ]; then rm -i ${logfile}.log; fi
#for each command whose output you want to capture, use >> ${logfile}.log
#eg:
mkdir -p "${target}/${subfolder}" >> ${logfile}.log
If rsync runs several named threads, I imagine you could write to separate logfiles as >> ${logfile}${thread}.log and concatenate the files into one logfile at the end.
Hope that is helpful? (I am new to answering things, so I apologise if this is basic, or if you already considered these ideas!)
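One more pattern worth knowing, if appending >> to every line gets tedious: redirect the whole script's output once at the top with exec. A minimal sketch (the logfile name is just an example):

```shell
#!/bin/bash
logfile="logfile.log"    # example name

# From here on, stdout and stderr of every command go to the log file
# as well as the terminal (use plain 'exec >> "$logfile" 2>&1' for file-only).
exec > >(tee -a "$logfile") 2>&1

echo "starting sync at $(date)"
# ... the rest of the script (mkdir, chown, rsync, ...) runs unchanged ...
```

This way the existing commands need no edits at all; everything they print ends up in the log.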

How to copy and rename files in shell script

I have a folder "test"; in it there are 20 other folders with different names like A, B, ... (actually they are people's names, not A, B, ...). I want to write a shell script that goes into each folder such as test/A, renames all the .c files to A1.c, A2.c, ..., and copies them to the "test" folder. I started like this, but I have no idea how to complete it!
#!/bin/sh
for file in `find test/* -name '*.c'`; do mv $file $*; done
Can you help me please?
This code should get you close. I tried to document exactly what I was doing.
It relies on bash and the GNU version of find to handle spaces in file names. I tested it on a directory full of .doc files, so you'll want to change the extension as well.
#!/bin/bash
V=1
SRC="."
DEST="/tmp"
#The last path we saw -- make it garbage, but not blank (or it will break the '[' test command)
LPATH="/////"
#Let us find the files we want
find $SRC -iname "*.doc" -print0 | while read -d $'\0' i
do
    echo "We found the file name... $i";
    #Now we strip off just the file name.
    FNAME=$(basename "$i" .doc)
    echo "And the basename is $FNAME";
    #Now we get the last chunk of the directory
    ZPATH=$(dirname "$i" | awk -F'/' '{ print $NF}' )
    echo "And the last chunk of the path is... $ZPATH"
    # If we have moved into a new path, reset our counter; otherwise count up.
    if [ "$LPATH" != "$ZPATH" ]; then
        V=1
    else
        V=$((V + 1))
    fi
    LPATH=$ZPATH
    # Eat the error message
    mkdir $DEST/$ZPATH 2> /dev/null
    echo cp \"$i\" \"$DEST/${ZPATH}/${FNAME}${V}\"
    cp "$i" "$DEST/${ZPATH}/${FNAME}${V}"
done
#!/bin/bash
## Find folders under test. This assumes you are already where test exists, OR give the PATH before "test"
folders="$(find test -maxdepth 1 -type d)"
## Look into each folder in $folders, find <folder-name>[0-9]*.c files and move them to the test folder, right?
for folder in $folders;
do
    ##Find folder-named .c files.
    leaf_folder="${folder##*/}"
    folder_named_c_files="$(find $folder -type f -name "*.c" | grep "${leaf_folder}[0-9]")"
    ## Move these folder_named_c_files to the test folder. basename holds just the file name.
    ## You didn't mention what name to rename the files to, so tweak the mv command accordingly.
    for file in $folder_named_c_files; do basename=$(basename "$file"); mv $file test/$basename; done
done
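Since the question specifically asks for A1.c, A2.c, ... copied into test, here is a hedged bash sketch of that exact renaming (demonstrated on a temporary directory layout, so the folder names are just examples):

```shell
#!/bin/bash
# Demo layout: test/A/{x.c,y.c} and test/B/z.c stand in for the real folders.
top=$(mktemp -d) && cd "$top"
mkdir -p test/A test/B
touch test/A/x.c test/A/y.c test/B/z.c

for dir in test/*/; do
    name=$(basename "$dir")          # A, B, ... (the person's name)
    i=1
    for f in "$dir"*.c; do
        [ -e "$f" ] || continue      # skip folders with no .c files
        cp "$f" "test/${name}${i}.c" # copy as <name><counter>.c into test/
        i=$((i + 1))
    done
done

ls test   # test now also contains A1.c A2.c B1.c
```

Using cp keeps the originals in place; switch to mv if they should really be moved out of the per-person folders.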

How can I manipulate file names using bash and sed?

I am trying to loop through all the files in a directory.
I want to do some stuff on each file (convert it to xml, not included in example), then write the file to a new directory structure.
for file in `find /home/devel/stuff/static/ -iname "*.pdf"`;
do
echo $file;
sed -e 's/static/changethis/' $file > newfile +".xml";
echo $newfile;
done
I want the results to be:
$file => /home/devel/stuff/static/2002/hello.txt
$newfile => /home/devel/stuff/changethis/2002/hello.txt.xml
How do I have to change my sed line?
If you need to rename multiple files, I would suggest using the rename command:
# remove "-n" after you verify it is what you need
rename -n 's/hello/hi/g' $(find /home/devel/stuff/static/ -type f)
or, if you don't have rename try this:
find /home/devel/stuff/static/ -type f | while read FILE
do
# modify line below to do what you need, then remove leading "echo"
echo mv $FILE $(echo $FILE | sed 's/hello/hi/g')
done
Are you trying to change the filename? Then
for file in /home/devel/stuff/static/*/*.txt
do
echo "Moving $file"
mv "$file" "${file/static/changethis}.xml"
done
Please make sure /home/devel/stuff/static/*/*.txt is what you want before using the script.
First, you have to create the name of the new file based on the name of the initial file. The obvious solution is:
newfile=${file/static/changethis}.xml
Second you have to make sure that the new directory exists or create it if not:
mkdir -p $(dirname $newfile)
Then you can do something with your file:
doSomething < $file > $newfile
I wouldn't use the for loop because of the possibility of overloading your command line. Command lines have a limited length, and if you exceed it, the excess is simply dropped without any warning. It might work when your find returns 100 files; it might work when it returns 1000; but at some point it will fail, and you'll never know when.
The best way to handle this is to pipe the find into a while read statement, as glenn jackman does.
The sed command only works on STDIN and on files, but not on file names, so if you want to munge your file name, you'll have to do something like this:
newname="$(echo $oldname | sed 's/old/new/')"
to get the new name of the file. The $() construct executes the command and puts the results of the command on STDOUT.
So, your script will look something like this:
find /home/devel/stuff/static/ -name "*.pdf" | while read file
do
echo $file;
newfile="$(echo $file | sed -e 's/static/changethis/')"
newfile="$newfile.xml"
echo $newfile;
done
Now, since you're renaming the file directory, you'll have to make sure the directory exists before you do your move or copy:
find /home/devel/stuff/static/ -name "*.pdf" | while read file
do
echo $file;
newfile="$(echo $file | sed -e 's/static/changethis/')"
newfile="$newfile.xml"
echo $newfile;
#Check for directory and create it if it doesn't exist
dirname=$(dirname "$newfile")
if [ ! -d "$dirname" ]
then
mkdir -p "$dirname"
fi
#Directory now exists, so you can do the move
mv "$file" "$newfile"
done
Note the quotation marks to handle the case there's a space in the file name.
By the way, instead of doing this:
if [ ! -d "$dirname" ]
then
mkdir -p "$dirname"
fi
You can do this:
[ -d "$dirname" ] || mkdir -p "$dirname"
The || means to execute the following command only if the test isn't true. Thus, if [ -d "$dirname" ] is a false statement (the directory doesn't exist), you run mkdir.
It's a fairly common shortcut when you see shell scripts.
find ... | while read file; do
newfile=$(basename "$file").xml;
do something to "$file" > "$somedir/$newfile"
done
OUTPUT="$(pwd)";
for file in `find . -iname "*.pdf"`;
do
echo $file;
cp $file $file.xml
echo "file created in directory = {$OUTPUT}"
done
This will create a new file named <yourfilename>.xml: for hello.pdf, the new file created would be hello.pdf.xml. Basically it creates a new file with .xml appended at the end.
Remember that the above script finds files under the current directory whose names match the find pattern (in this case *.pdf), and creates each copy alongside the original.
The find command in this particular script only finds files whose names end with .pdf. If you wanted to run it for files ending with .txt, you would change the find command to find . -iname "*.txt".
Once I wanted to remove a trailing -min from my files, i.e. turn alg-min.jpg into alg.jpg. After some struggle I managed to figure out something like this:
for f in *; do echo $f; mv $f $(echo $f | sed 's/-min//g'); done
Hope this helps someone who wants to REMOVE or SUBSTITUTE part of their file names.
