I am working on a cron job to check and recover ARK save files if required. I need to get the largest .ark file by size, and if the TheIsland.ark file is smaller, the script should automatically back it up and copy the largest file over it. While I have this working outside of crontab, one part of the script fails.
Which is:
actualmap=$(find $PWD -type f -printf '%p\n' -name "*.ark"| sort -nr | head -1)
If I remove the \n it actually runs, but then it cannot sort between the files because they are not on separate lines.
The output I get from the cron job with \n is:
/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/recovery.sh (which is the recovery script)
The same line of code run in a terminal produces the correct output of:
/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/TheIsland_NewLaunchBackup.bak
Without \n using crontab I get:
/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/TheIsland_27.06.2019_21.46.20.ark/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/TheIsland_28.06.2019_15.15.34.ark
I have attached the full code, which works when run manually.
#!/bin/bash
export DISPLAY=:0.0
##ARK Map Recovery Script
cd /srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks
#Check file size of current ark map
file=TheIsland.ark
echo $file
currentsize=$(wc -c <"$file")
echo $currentsize
#Find biggest map file.
actualmap=$(find $PWD -type f -printf '%p\n' -name "*.ark"| sort -nr | head -1)>/srv/daemon-data/da4aaa1b-0ce9-46d2-bd60-5f599cc089ae/ShooterGame/Saved/SavedArks/log.txt
echo $PWD
echo $actualmap
biggestsize=$(wc -c < "$actualmap")
echo $biggestsize
if [ $currentsize -ge $biggestsize ]; then
    echo No map recovery required as over $biggestsize bytes
else
    echo Uh Oh! size is under $biggestsize bytes Attempting map recovery
    echo Checking for Backup dir and creating if necessary
    mkdir -p BackupFiles
    #Move old map into backup dir in the saved location
    echo Moving old Map File to backup dir
    mv $file BackupFiles
    #Stop server using docker commands
    echo Stopping servers
    docker kill da4aaa1b-0ce9-46d2-bd60-5f599cc089ae
    #Copy biggest map file with correct name
    echo Copying backup file
    cp $actualmap $file
fi
Using the -printf option to the find command is not required here. -print will do just fine.
I obtain the result you want (find returning the found filenames, one per line) with this:
find $PWD -type f -name "*.ark" -print
With -printf, %p gives you the filename anyway, so '%p\n' only reproduces what -print does by default. From man find: "-print  True; print the full file name on the standard output, followed by a newline." In other words, -print already does what you want.
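Applied to your script, a minimal sketch of that assignment (keeping your sort | head pipeline exactly as you had it) would be:
actualmap=$(find "$PWD" -type f -name "*.ark" -print | sort -nr | head -1)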
Related
Here is my problem: I am trying to parse a lot of files on a system to find some tokens. My tokens are stored in a file, one token per line (for example token.txt). The paths to parse are also stored in another file, one path per line (for example path.txt).
I use a combination of find and grep to do my stuff. Here is one attempt:
for path in $(cat path.txt)
do
for line in $(find $path -type f -print0 | xargs -0 grep -anf token.txt 2>/dev/null);
do
#some stuffs here
done
done
It seems to work fine; I don't really know if there is another way to make it faster, though (I am a beginner in programming and shell).
My problem is: among the files found by the find command, I also want to identify the ones that are compressed. For this, I wanted to use the file command. The problem is that I need the output of the find command for both grep and file.
What is the best way to achieve this? To summarize, I would like something like this:
for path in $(cat path.txt)
do
for line in $(find $path -type f);
do
#Use file command to test all the files, and do some stuff
#Use grep to find some tokens in all the files, and do some stuff
done
done
I don't know if my explanations are clear; I tried my best.
EDIT: I read that using a for loop to read a file is bad, but some people claim that using a while read loop is also bad. I am a bit lost, to be honest; I can't really find the proper way to do this.
The way you are doing it is fine, but here is another way to do it. With this method you won't have to add additional loops to iterate over each item in your configuration files. There are ways to simplify this further, but it would not be as readable.
To test this:
In "${DIR}/path" I have two directories listed (one on each line). Both directories are contained in the same parent directory as this script. In the "${DIR}/token" file, I have three tokens (one on each line) to search for.
#!/usr/bin/env bash
#
# Directory where this script is located
#
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
#
# Loop through each file contained in our path list
#
for f in $(find $(cat "${DIR}/path") -type f); do
for c in $(cat "${f}" | grep "$(cat ${DIR}/token)"); do
echo "${f}"
echo "${c}"
# Do your file command here
done
done
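If it helps, one possible way to fill in that "Do your file command here" placeholder, borrowing the file | grep -q compressed test used in the answer below, might be the following sketch (the word "compressed" appearing in file's description is an assumption):
for f in $(find $(cat "${DIR}/path") -type f); do
    # Hypothetical addition: flag compressed files with file(1) before grepping for tokens
    if file "${f}" | grep -q compressed; then
        echo "${f} looks compressed"
    fi
    # ...the token grep loop from the script above then runs on the same ${f}
done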
I think you need something like this:
find $(cat places.txt) -type f -exec bash -c 'file "$1" | grep -q compressed && echo $1 looks compressed' _ {} \;
Sample Output
/Users/mark/tmp/a.tgz looks compressed
This script is looking in all the places listed in places.txt and running a new bash shell for each file it finds. Inside the bash shell it is testing if the file is compressed and echoing a message if it is - I guess you will do something else but you don't say what.
Another way of writing that more verbosely if you have lots to do:
#!/bin/bash
while read -r d; do
find "$d" -type f -exec bash -c '
file "$1" | grep -q "compressed"
if [ $? -eq 0 ]; then
echo "$1" is compressed
else
echo "$1" is not compressed
fi' _ {} \;
done < <(cat places.txt)
The history of this problem is:
I have millions of files and directories on a NAS system. I found a count of 1,095,601 empty (0 byte) files. These files used to have data but were destroyed by a predecessor not using the correct toolsets to migrate data between an XSAN and this Isilon NAS.
The files were media production data, like fonts, PDFs and image files. They are no longer useful beyond the history of their existence. Before I proceed to delete them, the production users need a record of which files used to exist, so when they browse a project folder, they can use the unaffected files but also refer to a text file in the same directory that records which files used to be there, providing a reason why certain reference files are broken.
So how do I find these files across multiple directories and delete them, but first output their filenames to a text file saved in each relevant path location?
I am thinking along the lines of:
for file in $(find . -type f -size 0); do
echo "$file" >> /PATH/TO/FOUND/FILE/PARENT/DIR/deletedFiles.txt -print0 |
xargs -0 rm ;
done
To delete each empty file while leaving behind a file called deletedFiles.txt which contains the names of the deleted files, try:
PATH=/bin:/usr/bin find . -empty -type f -execdir bash -c 'printf "%s\n" "$@" >>deletedFiles.txt' none {} + -delete
How it works
PATH=/bin:/usr/bin
This sets a temporary but secure path.
find .
This starts find looking in the current directory.
-empty
This tells find to only look for empty files.
-type f
This restricts find to looking for regular files.
-execdir bash -c 'printf "%s\n" "$@" >>deletedFiles.txt' none {} +
In each directory that contains an empty file, this adds the name of each empty file to the file deletedFiles.txt.
Notice the peculiar use of none in the command:
bash -c 'printf "%s\n" "$@" >>deletedFiles.txt' none {} +
When this command is run, bash executes the string printf "%s\n" "$@" >>deletedFiles.txt, and the arguments that follow the string are assigned to the positional parameters: $0, $1, $2, etc. When we use $@, it does not include $0; as usual, it expands to $1, $2, .... Thus we add the placeholder none so that it is assigned to $0, which we ignore, while the complete list of file names is assigned to "$@" (a small demonstration follows this list).
-delete
This deletes each empty file.
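As a quick standalone illustration of that placeholder mechanism (the arguments a, b and c are made up):
bash -c 'printf "%s\n" "$@"' none a b c
This prints a, b and c on separate lines; none silently fills the $0 slot and never appears in the output.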
Why not simply
find . -type f -size 0 -exec rm -v {} + |
sed -e 's%^removed .\./%%' -e 's/.$//' >deletedFiles.txt
If your find is too old to support -exec ... + you'll need to revert to -exec rm -v {} \; or refactor to
find . -type f -size 0 -print0 |
xargs -r -0 rm -v |
sed -e 's%^removed .\./%%' -e 's/.$//' >deletedFiles.txt
The brief sed script is to postprocess the output from rm -v which looks like
removed ‘./bar’
removed ‘./foo’
(with some funny quote characters around the file name) on my system. If you are fine with that output, of course, just omit the sed script from the pipeline.
If you know in advance which directories contain empty files, you can run the above snippet individually in those directories. Assuming you saved the snippet above as a script (with a proper shebang and execute permissions) named find-empty, you could simply use
for path in /path/to/first /path/to/second/directory /path/to/etc; do
cd "$path" && find-empty
done
This will only work if you have absolute paths (if not, you can run the body of the loop in a subshell by adding parentheses around it).
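For relative paths, a sketch of that subshell variant (the directory names here are made up) would be:
for path in projects/first projects/second; do
    ( cd "$path" && find-empty )
done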
If you want to inspect all the directories in a tree, change the script to print to standard output instead (remove >deletedFiles.txt from the script) and try something like
find /path/to/tree -type d -exec sh -c '
t=$(mktemp -t find-emptyXXXXXXXX)
cd "$1" &&
find-empty | grep . >"$t" &&
mv "$t" deletedFiles.txt ||
rm "$t"' _ {} \;
This uses a temporary file so as to avoid updating the timestamp of directories which do not contain any empty files. The grep . is used purely for its exit status: if any (non-empty) lines are printed, it returns success; otherwise it reports failure. This way we know whether or not to move the temporary file into the target directory.
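A tiny standalone illustration of that exit-status trick:
printf '' | grep . >/dev/null; echo $?        # prints 1: no non-empty lines
printf 'found\n' | grep . >/dev/null; echo $? # prints 0: at least one non-empty line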
With prompting from @JonathanLeffler I have succeeded with the following:
#!/bin/bash
## call this script with: find . -type f -empty -exec handleEmpty.sh {} +
for file in "$@"
do
file2="$(basename "$file")"
echo "$file2" >> "$(dirname "$file")"/deletedFiles.txt
rm "$file"
done
This means I retain a trace of the removed files in a deletedFiles.txt flag file in each respective directory for the users to see when files are missing. That way, they can go back to the archive CDs to retrieve these deleted files, which are hopefully not 0-byte files.
Thanks to @John1024 for the suggestion of using the -empty test rather than -size.
I need to write a bash command that will look through every home directory on a system and copy the contents of each .forward file to a single file, along with the name of the directory it was copied from. So, for example, the final file would be something like forwards.txt and the listings would be:
/home/user1
user1@email.com
/home/user2
user2@email.com
I've used this to list them to screen.
find /home -name '*' | cat /home/*/.forward
and it will print out the .forward contents from each file, but I'm not getting it to prefix each entry with the home directory it came from. Would I need to use a loop to do this? I had this test loop:
#!/bin/bash
for i in /home/*
do
if [ -d $i ]
then
pwd >> /tmp/forwards.txt
cat /home/*/.forward >> /tmp/forwards.txt
fi
done
But it went through the four home directories on my test setup and the forwards.txt file had the following listed four times.
/tmp
user1@email.com
user2@email.com
user3@email.com
user3@email.com
Thanks.
Here is a corrected version of your script:
#!/bin/bash
for i in /home/*
do
if [ -f "$i/.forward" ]
then
echo "$i" >> /tmp/forwards.txt
cat "$i/.forward" >> /tmp/forwards.txt
fi
done
Some points:
we check for the presence of a .forward file inside each home directory instead of the existence of the home directory itself
on each iteration $i contains the name of a home directory (like /home/user1), so we use its value instead of the output of the pwd command, which always returns the current directory (and that doesn't change in our case)
instead of /home/*/.forward we use "$i/.forward", because the * after substitution gives us all directories, while we only need the current one
Another, shorter version of this script may look like this:
find /home -type f -name '.forward' | while read F; do
dirname "$F" >>/tmp/forwards.txt
cat "$F" >>/tmp/forwards.txt
done
I would write
for fwd in /home/*/.forward; do
dirname "$fwd"
cat "$fwd"
done > forwards.txt
A one-liner (corrected):
find /home -maxdepth 2 -name ".forward" -exec echo "{}" >> /tmp/forwards.txt \; -exec cat "{}" >> /tmp/forwards.txt \;
This will output:
/home/user1/.forward
a@a.a
b@b.b
/home/user2/.forward
a@b.c
I have a number of clients running a piece of software within their public_html directory. The software includes a file named version.txt that contains the version number of their software (the number and nothing else).
I want to write a bash script that will look for a file named version.txt directly within every user's /home/xxx/public_html/ and output both the path to the file, and the contents of the file, i.e:
/home/matt/public_html/version.txt: 3.4.07
/home/john/public_html/version.txt: 3.4.01
/home/sam/public_html/version.txt: 3.4.03
So far all I have tried is:
#!/bin/bash
for file in 'locate "public_html/version.txt"'
do
echo "$file"
cat $file
done
But that does not work at all.
find /home -type f -path '*public_html/version.txt' -exec echo {} " " `cat {}` \;
Might work for you, but you can go without echo and cat ("tricking" grep):
find /home -type f -path '*public_html/version.txt' -exec grep -H "." {} \;
Or do it using find:
find /home -path "*/public_html/version.txt" -exec grep -H "" {} \;
for i in /home/*/public_html/version.txt; do
echo $i
cat $i
done
will find all the relevant files (using shell wildcarding), echo the filename out and cat out the file.
If you want a more concise output, you should investigate grep and replace the echo/cat with an appropriate regular expression e.g.
grep "[0-9]\.[0-9]" $i
I want to write a bash script that (recursively) processes all files of a certain type.
I know I can get the matching file list by using find thusly:
find . -name "*.ext"
I want to use this in a script:
recursively obtain a list of files with a given extension
obtain the full file pathname
pass the full pathname to another script
Check the return code from the script; if non-zero, log the name of the file that could not be processed.
My first attempt looks (pseudocode) like this:
ROOT_DIR = ~/work/projects
cd $ROOT_DIR
for f in `find . -name "*.ext"`
do
#need to lop off leading './' from filename, but I haven't worked out how to use
#cut yet
newname = `echo $f | cut -c 3
filename = "$ROOT_DIR/$newname"
retcode = ./some_other_script $filename
if $retcode ne 0
logError("Failed to process file: $filename")
done
This is my first attempt at writing a bash script, so the snippet above is not likely to run. Hopefully though, the logic of what I'm trying to do is clear enough, and someone can show how to join the dots and convert the pseudocode above into a working script.
I am running on Ubuntu
find . -name '*.ext' \( -exec ./some_other_script "$PWD"/{} \; -o -print \)
Here -exec doubles as a test: it is true when some_other_script exits 0, so the -o -print branch fires only for files the script failed to process, printing exactly the names you wanted to log.
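If you would rather collect those failures in a file than see them on the terminal, a minimal variation (failed_files.log is just an example name) is:
find . -name '*.ext' \( -exec ./some_other_script "$PWD"/{} \; -o -print \) > failed_files.log
Note this assumes some_other_script does not itself write to standard output; if it does, that output would end up in the log as well.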
Using | while read to iterate over file names is fine as long as there are no file names containing newlines to be processed:
find . -name '*.ext' | while IFS=$'\n' read -r FILE; do
process "$(readlink -f "$FILE")" || echo "error processing: $FILE"
done