I am looking for a simpler way of doing this:
#!/usr/bin/bash
FILE=core
DAYS=1
cd /programdir
if [ -f "${FILE}" ]; then
    agetest=$(find . -name "${FILE}" -type f -mtime +${DAYS} -print | wc -c)
    if [[ $agetest -eq 0 ]]; then
        echo "$FILE exists and is not older than ${DAYS} days."
    fi
fi
I want to process a core file (using the dbx command) if the script finds it and the core file is recent (within 1 day). So I would run a dbx command where that echo statement is. It seems like there should be a more elegant way to do this with a single if statement, but I can't think of one. Any ideas?
I know it would be easier to just clean up the old core files with tmpwatch or find/rm, but I'm not allowed to do that.
#!/usr/bin/bash
FILE=core
DAYS=1
# -mtime -${DAYS} matches only if the file was modified within the last ${DAYS} days
if [ -n "$(find /programdir -name "${FILE}" -type f -mtime -${DAYS})" ]; then
    echo "$FILE exists and is not older than ${DAYS} days."
fi
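Where the echo is, the dbx processing would go. A minimal sketch, assuming the binary that produced the core lives at /programdir/yourprogram (a placeholder) and that your dbx accepts commands on standard input:
#!/usr/bin/bash
FILE=core
DAYS=1
if [ -n "$(find /programdir -name "${FILE}" -type f -mtime -${DAYS})" ]; then
    # Hypothetical dbx invocation: substitute the real binary path and the
    # dbx commands you actually want to run against the core file.
    dbx /programdir/yourprogram "/programdir/${FILE}" <<'EOF'
where
quit
EOF
fi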
I'm writing a simple program that downloads multiple images from multiple pages of a website. When trying to implement folder creation with a naming structure similar to how the website is laid out, I ran into issues. Below is a bare-bones example of what I used to replicate the behavior of my other program.
#!/bin/bash
# Sample inputs:
# http://testurl.com/post/1234
# http://testurl.com/post/5678
folder=""
if [[ $1 == *"post"* ]]; then
folder=${1##*/}
folder=${folder//[$'\t\r\n ']}
fi
if [[ $(find "$HOME" -name "*$folder*" -print -quit) ]]; then
echo 'Hi'
else
echo 'Bye'
fi
# Sample directories:
# /home/user/1234
# /home/user/0001
Everywhere I've looked tells me this should run perfectly. However, this does not run as it should and I've been at it for hours. Can anyone help me?
Bash version: GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu)
This simplifies the test of whether find found something, using standard grep rather than bashisms:
if find "$HOME" -type d -name "$folder" -print -quit | grep .; then
echo "Hi"
else
echo "Bye"
fi
I also changed two constraints for find:
Only search for directories (-type d),
so you don't get ordinary files.
Only search for paths where the basename (the last component of the full path) matches ${folder} exactly,
so you don't get matches like /home/user/12345 or /home/user/.emacs.d/auto-save-list/.saves-12350-localhost~.
For practical reasons (once the script is known to work), I would discard the output of grep by redirecting it to /dev/null.
If the directories are all directly in "${HOME}", you could also add -maxdepth 1 right after the search path (so find does not recurse into subdirectories).
So you end up with something like:
if find "$HOME" -maxdepth 1 -type d -name "$folder" -print -quit | grep . >/dev/null
then
echo "Hi"
else
echo "Bye"
fi
or simply use:
if [ -d "${HOME}/${folder}" ]; then
echo "Hi"
else
echo "Bye"
fi
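With the sample inputs and sample directories from the question, either version behaves like this (check.sh is a placeholder name for the script):
$ ./check.sh http://testurl.com/post/1234
Hi
$ ./check.sh http://testurl.com/post/5678
Bye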
I want to implement incremental backup on Ubuntu. My idea is to compute the md5sum of every file in the source and the target; if a file with the same md5sum already exists in the destination, keep it there, otherwise copy the file from the source into the destination directory.
I am thinking of doing this in bash.
Can anyone help me with the commands to compare the md5sums of two files in different directories?
Thanks in advance!!
#!/bin/bash
#
SOURCE="/home/pallavi/backup1"
DEST="/home/pallavi/BK"
count=1
TODAY=$(date +%F_%H%M%S)
cd "${DEST}" || exit 1
mkdir "${TODAY}"
while [ $count -le 1 ]; do
    count=$(( $count + 1 ))
    cp -R $SOURCE/* $DEST/$TODAY
    mkdir "MD5"
    cd ${DEST}/${TODAY}
    for f in *; do
        md5sum "${f}" >"${TODAY}${f}.md5"
        echo ${f}
    done
    if [ $? -ne 0 ] && [[ $IGNORE_ERR -eq 0 ]]; then
        # error or eof
        echo "end of source or error"
        break
    fi
done
This is a reinventing-the-wheel sort of thing.
There are utilities written for exactly this kind of purpose; to name a few:
For copying only what changed
rsync (see the sketch after this list)
cp (GNU cp(1) has the -u flag)
For comparing files
cmp
diff
For finding duplicates
fdupes
rmlint
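For instance, a single rsync invocation covers the incremental-copy part of the question. A minimal sketch using the paths from the question (assuming content-based comparison is wanted, hence --checksum):
# Copy only files that are new or whose contents differ; --checksum compares
# file contents rather than just timestamp and size. The trailing slash on the
# source copies its contents rather than the directory itself.
rsync -av --checksum /home/pallavi/backup1/ /home/pallavi/BK/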
Here is what I've come up with, re-inventing the wheel:
#!/usr/bin/env bash
shopt -s extglob
declare -A source_array
# Hash every file under source/ and remember each hash.
while IFS= read -r -d '' files; do
    read -r source_hash source_files < <(sha512sum "$files")
    source_array["$source_hash"]="$source_files"
done < <(find source/ -type f -print0)
# Build an extglob pattern of all source hashes: @(hash1|hash2|...)
source=$( IFS='|'; printf '%s' "@(${!source_array[*]})" )
# Hash every file under destination/ and test it against the source hashes.
while IFS= read -r -d '' files0; do
    read -r destination_hash destination_files < <(sha512sum "$files0")
    if [[ $destination_hash == $source ]]; then
        echo "$destination_files" FOUND from source/ directory
    else
        echo "$destination_files" NOT-FOUND from source/ directory
    fi
done < <(find destination/ -type f -print0)
It should be safe enough with file names containing spaces, tabs, and newlines, but since I don't have files with newlines in their names I can't say for sure.
Change the action in the if-else statement depending on what you want to do.
OK, maybe sha512sum is a bit of overkill; change it to md5sum if you like.
Add set -x after the shebang to see what's actually being executed. Good luck.
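To answer the literal question of comparing the md5sums of two files sitting in different directories, a minimal sketch (the file names are placeholders):
# Hash each file via stdin so the output contains only the checksum and not the
# path, which makes the two command substitutions directly comparable.
if [ "$(md5sum < /home/pallavi/backup1/somefile)" = "$(md5sum < /home/pallavi/BK/somefile)" ]; then
    echo "same content"
else
    echo "different content"
fi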
I needed to move a large s3 bucket to a local file store for a variety of reasons, and the files were stored as 160,000 directories with subdirectories.
As this is just far too many folders to look at with something like a GUI FTP interface, I'd like to move the 160,000 root directories into, say, 320 directories, with 500 directories in each.
I'm a newbie at bash scripting, and I just wrote this up, but I'm scared I'm going to mangle the whole thing and have to redo the transfer. I tested with [[ "$i" -ge 3 ]] and some directories with subdirectories, and it looked like it worked okay, but I'm quite nervous. I do not want to retransfer all this data.
i=0
j=0
for file in *; do
    if [[ -d "$file" && ! -L "$file" ]]; then
        ((i++))
        echo "directory $file is being written to assets_$j"
        mv "$file" ./assets_$j/
        if [[ "$i" -ge 499 ]]; then
            ((j++))
            ((i=0))
        fi
    fi
done
Thanks for the help!
Find all the directories in the current folder.
Read the directory names in chunks.
Exec mkdir and mv for each chunk.
find . -mindepth 1 -maxdepth 1 -type d |
while readarray -n 10 -t files && ((${#files[@]})); do
    # -n 10 reads 10 directory names per chunk; use -n 500 for 500 per folder
    dest="./assets_$((j++))/"
    echo mkdir -v -p "$dest"
    echo mv -v "${files[@]}" "$dest"
done
On the condition that assets_1, assets_2, etc. do not exist in the working directory yet:
dirs=(./*/)
for (( i=0, j=1; i<${#dirs[@]}; i+=500, j++ )); do
    echo mkdir ./assets_$j/
    echo mv "${dirs[@]:i:500}" ./assets_$j/
done
If you're happy with the output, remove the echos.
A possible way, though you have no control over the counter, is:
find . -mindepth 1 -maxdepth 1 -type d -print0 \
    | xargs -0 -n 500 sh -c 'echo mkdir -v ./assets_$$ && echo mv -v "$@" ./assets_$$' _
This derives the assets counter from the PID of each sh invocation, which only repeats once the PID wrap-around is reached (Linux PID recycling).
The order in which find returns entries is slightly different from the glob * (find has no guaranteed sorting order by default).
If you want the entries sorted alphabetically, you can add a simple sort:
find . -mindepth 1 -maxdepth 1 -type d -print0 | sort -z \
    | xargs -0 -n 500 sh -c 'echo mkdir -v ./assets_$$ && echo mv -v "$@" ./assets_$$' _
Note: remove the echo if you are pleased with the output.
I've written a script to iterate through a directory on Solaris. The script looks for files which are older than 30 minutes and echoes them. However, my if condition always returns true regardless of how old the file is. Can someone please help me fix this issue?
for f in `ls -1`;
# Take action on each file. $f stores the current file name
do
    if [ -f "$f" ]; then
        # Checks that the file is a file, not a directory
        if test 'find "$f" -mmin +30'
        # Check if the file is older than 30 minutes after modifications
        then
            echo $f is older than 30 mins
        fi
    fi
done
You should not parse the output of ls.
You invoke find for every file, which is unnecessarily slow.
You can replace your whole script with:
find . -maxdepth 1 -type f -mmin +30 | while IFS= read -r file; do
    [ -e "${file}" ] && echo "${file} is older than 30 mins"
done
or, if your default shell on Solaris supports process substitution
while IFS= read -r file; do
    [ -e "${file}" ] && echo "${file} is older than 30 mins"
done < <(find . -maxdepth 1 -type f -mmin +30)
If you have GNU find available on your system the whole thing can be done in one line:
find . -maxdepth 1 -type f -mmin +30 -printf "%p is older than 30 mins\n"
Another option would be to use stat to check the time. Something like below should work.
for f in *
# Take action on each file. $f stores the current file name
do
    if [ -f "$f" ]; then
        # Checks that the file is a file, not a directory
        fileTime=$(stat --printf "%Y" "$f")   # modification time, seconds since the epoch
        curTime=$(date +%s)
        if (( ( (curTime - fileTime) / 60 ) >= 30 )); then
            echo "$f is older than 30 mins"
        else
            echo "$f is less than 30 mins old"
        fi
    fi
done
Since you are iterating through a directory, you could try the command below, which finds all .log files modified within the past 30 minutes. Note that:
-mmin +30 would give all files modified more than 30 minutes ago
-mmin -30 would give all files that have changed within the last 30 minutes
find ./ -type f -name "*.log" -mmin -30 -exec ls -l {} \;
Here's my problem: I have to resolve various filenames/locations (the data directory may have sub-directories) which are user-configurable. If I can resolve the filename completely prior to the loop, the following script works:
[prompt] more test.sh
#! /usr/bin/env bash
newfile=actual-filename
for directory in `find -L ${FILE_PATH}/data -type d`; do
    for filename in `ls -1 ${directory}/${newfile} 2>/dev/null`; do
        if [[ -r ${filename} ]]; then
            echo "Found ${filename}"
        fi
    done
done
[prompt] ./test.sh
[prompt] Found ${SOME_PATH}/actual-filename
However, if newfile has any wildcarding in it, the inner loop will not run, even if the pattern matches only a single file.
I would use find with some regex, but auto-generating the proper expressions and doing the substitutions for some things will be tricky (e.g. pgb.f0010{0930,1001}{00,06,12,18} would correspond to the files associated with Sep. 30 and Oct. 1 of 2010; the first grouping is computed by my script for a provided date).
pgb.f0010093000 pgb.f0010093006 pgb.f0010093012 pgb.f0010093018 pgb.f0010100100
pgb.f0010100106 pgb.f0010100112 pgb.f0010100118
I'm running Fedora 15 64-bit.
newfile="*"
find -L ${FILE_PATH}/data -name "${newfile}" \
    | while read filename
do
    if [[ -r ${filename} ]]; then
        echo "Found ${filename}"
    fi
done
-or-
newfile="*"
find -L ${FILE_PATH}/data -name "${newfile}" -readable -exec echo "Found {}" \;
-or with regular expressions-
newfile='.*/pgb.f0010(0930|1001)(00|06|12|18)'
FILE_PATH=.
find -L ${FILE_PATH}/. -regextype posix-extended \
    -regex "${newfile}" -readable -exec echo "Found {}" \;
The root problem is that the broken script relies on brace expansion, which never happens to the contents of a variable (brace expansion is performed before parameter expansion). Use eval to force another round of expansion:
#! /usr/bin/env bash
FILE_PATH="."
newfile=pgb.f0010{0930,1001}{00,06,12,18}
for directory in `find -L ${FILE_PATH}/data -type d`; do
    for filename in `eval ls -1 ${directory}/${newfile} 2>/dev/null`; do
        if [[ -r ${filename} ]]; then
            echo "Found ${filename}"
        fi
    done
done
done
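To see why the eval is needed, a quick illustration with a hypothetical pattern (brace expansion happens before parameter expansion, so braces stored in a variable are never expanded on their own):
pat='file{a,b}'
echo $pat          # prints: file{a,b}   (the braces survive parameter expansion)
eval echo $pat     # prints: filea fileb (eval re-parses the line, so the braces expand)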