Backup Yesterday's Files in a Folder - bash

I developed the following script to gzip yesterday's files in a directory. Any improvement suggestions?
Yesterday=`TZ=GMT+24 date +%d-%b-%y`;
mkdir $Yesterday
mv /tmp/logs/servicemix.log.* /tmp/logs/$Yesterday
for File in /tmp/logs/$Yesterday/app.log.*; do
    gzip $File
done
Regards

Two suggestions:
1. Replace
mkdir $Yesterday
with
mkdir -p /tmp/logs/${Yesterday}
so the directory is created where the files are actually moved.
2. gzip the files you have moved: you move servicemix.log.* files but then gzip app.log.* files, so the loop's glob never matches anything.
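Putting both fixes together, a corrected sketch of the script might look like this (keeping the original date format and assuming the logs live in /tmp/logs):
#!/bin/bash
Yesterday=$(TZ=GMT+24 date +%d-%b-%y)
mkdir -p "/tmp/logs/$Yesterday"
mv /tmp/logs/servicemix.log.* "/tmp/logs/$Yesterday/"
# gzip the servicemix files that were just moved
for File in "/tmp/logs/$Yesterday"/servicemix.log.*; do
    gzip "$File"
done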

Use the following lines of code:
TIME=$(date +"%b-%d-%y")    # this adds the date to the backup file name
now=$(date +"%T")
FILENAME="filename-$TIME:$now.tar.gz"   # here I define the backup file name format
SRCDIR="/home"              # location of the important data directory (source of the backup)
DESDIR="/home/user/backup"  # destination of the backup file
tar -cpzf "$DESDIR/$FILENAME" "$SRCDIR"   # create the backup as a gzipped tar file
echo
echo "Backup finished"

Were they changed or created yesterday? Use find with the correct test.
Yesterday=`TZ=GMT+24 date +%d-%b-%y`
mkdir "$Yesterday"
# -1 means within the last 24 hours
# -ctime is the last inode change time. If you need the creation date you need a different test. See the attached link.
find . -maxdepth 1 -type f -ctime -1 -exec mv '{}' "$Yesterday/" \;
But I think this is pretty much the best option: use find to determine which files were modified yesterday.
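If you want calendar-day precision rather than a rolling 24-hour window, GNU find's -newermt test can bracket yesterday explicitly (a sketch assuming GNU find and GNU date):
Yesterday=$(date -d yesterday +%d-%b-%y)
mkdir -p "/tmp/logs/$Yesterday"
# match files last modified during yesterday's calendar day only
find /tmp/logs -maxdepth 1 -type f -name 'servicemix.log.*' \
    -newermt 'yesterday 00:00' ! -newermt 'today 00:00' \
    -exec mv '{}' "/tmp/logs/$Yesterday/" \;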

Related

Shell script to archive & delete files older than 5 days based on the files' creation date

I am trying to compress 5 days' worth of logs at a time, move the compressed files to another location, and delete the original log files. I need a bash script to accomplish this. I got the files compressed using the command below, but I am not able to move them to the archive folder. I also need to compress based on creation date; right now it compresses all files starting with a specific name.
#!/bin/bash
cd "C:\Users\ann\logs"
for filename in acap*.log*; do
    mkdir -p archive
    gzip "$filename_.zip" "$filename"
    mv "$filename" archive
done
#!/bin/bash
mkdir -p archive
# list bare filenames of files last modified more than 3 days ago
# (use +5 if you want a strict five-day threshold)
find . -maxdepth 1 -mtime +3 -type f -printf "%f\n" |
while read -r file
do
    if [[ "$file" =~ ^acap.*\.log$ ]]
    then
        tar -czf "archive/${file}.tar.gz" "$file"
        rm "$file"
    fi
done
This finds all files in the current directory that match the regex and compresses each one into its own tar archive, then deletes the original files. Note that find's -mtime looks at modification time; Linux filesystems generally do not expose a creation date, so modification time is the usual stand-in.
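If you would rather bundle all qualifying logs into a single dated archive instead of one tarball per file, a sketch (assuming GNU find and GNU tar):
#!/bin/bash
mkdir -p archive
# collect acap*.log files older than 5 days into one dated tarball,
# removing each original once it has been archived
find . -maxdepth 1 -type f -name 'acap*.log' -mtime +5 -print0 |
    tar --null --files-from=- --remove-files -czf "archive/acap-logs-$(date +%F).tar.gz"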

Bash script for unzipping files with unknown names

I have a folder that, after an rsync, will have a zip in it. I want to unzip it into its own folder (if the zip is L155.zip, to unzip its contents into an L155 folder). The problem is that I don't know its name beforehand (although I know it will be "letter-number-number-number"), so I have to unzip an unknown file into its unknown folder, and this has to happen automatically.
The command unzip * (or unzip *.zip) works in a terminal, but not in a script.
These are the commands that have worked in the terminal one by one, but don't work in a script.
#!/bin/bash
unzip *   # also tried *.zip, and /path/to/file/* when the script is in a different folder
i=$(ls | head -1)
y=${i:0:4}
mkdir $y
unzip * -d $y
First I unzip the file, then I read the name of the first extracted file through ls and save it in a variable. I take its first 4 characters, make a directory named after them, and then unzip the files again into that specific folder.
The whole procedure after the first unzip is done because the files inside the .zip all start with a name the zip already has, so if L155.ZIP is the zip, the files inside will be L155***.txt.
The zip file is at /path/to/file/NAME.zip.
When I run the script I get errors like the following:
unzip: cannot find or open /path/to/file/*.ZIP
unzip: cannot find or open /path/to/file//*.ZIP.zip
unzip: cannot find or open /path/to/file//*.ZIP.ZIP. No zipfiles found.
mkdir: cannot create directory 'data': File exists
unzip: cannot find or open data, data.zip or data.ZIP.
Original answer
Supposing that foo.zip contains a folder foo, you could simply run
#!/bin/bash
unzip \*.zip \*
And then run it as bash auto-unzip.sh.
If you want to have these files extracted into a different folder, then I would modify the above as
#!/bin/bash
cp *.zip /home/user
cd /home/user
unzip \*.zip \*
rm *.zip
This, of course, you would run from the folder where all the zip files are stored.
Another answer
Another "simple" fix is to get dtrx (also available in the Ubuntu repos, possibly for other distros). This will extract each of your *.zip files into its own folder. So if you want the data in a different folder, I'd follow the second example and change it thusly:
#!/bin/bash
cp *.zip /home/user
cd /home/user
dtrx *.zip
rm *.zip
I would try the following.
for i in *.[Zz][Ii][Pp]; do
    DIRECTORY=$(basename "$i" .zip)
    DIRECTORY=$(basename "$DIRECTORY" .ZIP)
    unzip "$i" -d "$DIRECTORY"
done
As noted, the basename program removes the indicated suffix (.zip or .ZIP) from the provided filename.
I have edited it to be case-insensitive: both .zip and .ZIP are recognized.
for zfile in $(find . -maxdepth 1 -type f -name "*.zip")
do
    fn=${zfile:2:4}   # strip the leading "./" and keep the 4-character base name (e.g. L155)
    mkdir -p "$fn"
    unzip "$zfile" -d "$fn"
done
If the folder has only one file with the extension .zip, you can extract the name without the extension with the basename tool:
BASE=$(basename *.zip .zip)
This will produce an error message if more than one file matches *.zip.
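From there, creating the target folder and unzipping into it is straightforward (a minimal sketch building on $BASE):
mkdir -p "$BASE"
unzip "$BASE.zip" -d "$BASE"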
Just to be clear about the issue here, the assumption is that the zip file does not contain a folder structure. If it did, there would be no problem; you could simply extract it into the subfolders with unzip. The following is only needed if your zipfile contains loose files, and you want to extract them into a subfolder.
With that caveat, the following should work:
#!/bin/bash
DIR=${1:-.}
# basename accepts one name plus one suffix; two or more zips make it fail
BASE=$(basename "$DIR/"*.zip .zip 2>/dev/null) ||
    { echo "More than one zipfile" >&2; exit 1; }
if [[ $BASE = "*" ]]; then
    echo "No zipfile found" >&2
    exit 1
fi
mkdir -p "$DIR/$BASE" ||
    { echo "Could not create $DIR/$BASE" >&2; exit 1; }
unzip "$DIR/$BASE.zip" -d "$DIR/$BASE"
Put it in a file (anywhere), call it something like unzipper.sh, and chmod a+x it. Then you can call it like this:
/path/to/unzipper.sh /path/to/data_directory
A simple one-liner I use all the time:
$ for file in *.zip; do unzip "$file" -d "${file%%.*}"; done

Bash script to back up files to a remote FTP server, deleting old files

I'm writing a bash script to send backups to a remote FTP server. The backup files are generated by a WordPress plugin, so half the work is done for me from the start.
The script does several things.
It looks in the local backup dir for any files older than x and deletes them
It connects to FTP and puts the backup files in a dir with the current date as a name
It deletes any backup dirs for backups older than x
As I am not fluent in bash, this is a mishmash of a bunch of scripts I found around the net.
Here is my script:
#! /bin/bash
BACKDIR=/var/www/wp-content/backups
#----------------------FTP Settings--------------------#
FTP=Y
FTPHOST="host"
FTPUSER="user"
FTPPASS="pass"
FTPDIR="/backups"
LFTP=$(which lftp) # Path to binary
#-------------------Deletion Settings-------------------#
DELETE=Y
DAYS=3 # how many days of backups do you want to keep?
TODAY=$(date --iso) # Today's date like YYYY-MM-DD
RMDATE=$(date --iso -d "$DAYS days ago") # TODAY minus X days - the cutoff for old files
#----------------------End of Settings------------------#
if [ -e "$BACKDIR" ]
then
    if [ "$DELETE" = "Y" ]
    then
        find "$BACKDIR" -iname '*.zip' -type f -mtime +"$DAYS" -delete
        echo "Old files deleted."
    fi
    if [ "$FTP" = "Y" ]
    then
        echo "Initiating FTP connection..."
        cd "$BACKDIR"
        $LFTP << EOF
open ${FTPUSER}:${FTPPASS}@${FTPHOST}
mkdir $FTPDIR
cd $FTPDIR
mkdir ${TODAY}
cd ${TODAY}
mput *.zip
cd ..
rm -rf ${RMDATE}
bye
EOF
        echo "Done putting files to FTP."
    fi
else
    echo "No Backup directory."
    exit
fi
There are 2 specific things I can't get done:
The find command doesn't delete any of the old files in the local backup dir.
I would like mput to only put the .zip files that were created today.
Thanks in advance for the help.
To send only the zip files that were created today (note -mtime -1, i.e. modified within the last 24 hours):
MPUT_ZIPS="$(find "$BACKDIR" -maxdepth 1 -type f -iname '*.zip' -mtime -1 | sed -e 's/^/mput /')"
[...]
$LFTP << EOF
open ${FTPUSER}:${FTPPASS}@${FTPHOST}
mkdir $FTPDIR
cd $FTPDIR
mkdir ${TODAY}
cd ${TODAY}
${MPUT_ZIPS}
cd ..
rm -rf ${RMDATE}
bye
EOF
Hope this helps =)
2) If you put today's backup files in a separate directory, or link them into one, you can cd into that directory and transfer just those files.
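A minimal sketch of that second idea (the today/ staging directory is hypothetical; hard-link today's zips into it and point lftp there):
TODAYDIR="$BACKDIR/today"    # hypothetical staging directory
mkdir -p "$TODAYDIR"
# hard-link today's zips so nothing is copied or moved
find "$BACKDIR" -maxdepth 1 -type f -iname '*.zip' -mtime -1 -exec ln -f '{}' "$TODAYDIR/" \;
# then, inside the lftp session, change the local directory and upload:
#   lcd /var/www/wp-content/backups/today
#   mput *.zip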

Bash: Maintaining a set of files and their gzipped equivalent

I have a directory tree in which there are some files and some subdirectories.
/
/file1.txt
/file2.png
/dir1
/subfile1.gif
The objective is to have a script that generates a gzipped version of each file and saves it next to each file, with an added .gz suffix:
/
/file1.txt
/file1.txt.gz
/file2.png
/file2.png.gz
/dir1
/subfile1.gif
/subfile1.gif.gz
This would handle the creation of new .gz files.
Another part is deletion: whenever a non-gzipped file is deleted, the script would need to remove the orphaned .gz version on its next run.
The last and trickiest part is modification: whenever some (non-gzipped) files are changed, re-running the script would need to update the .gz versions of only those changed files, based on an mtime comparison between each file and its gzipped version.
Is it possible to implement such a script in bash?
Edit: The goal of this is to have prepared compressed copies of each file for nginx to serve using the gzip_static module. It is not meant to be a background service which automatically compresses things as soon as anything changes, because nginx's gzip_static module is smart enough to serve content from the uncompressed version if no compressed version exists, or if the uncompressed version's timestamp is more recent than the gzipped version's timestamp. As such, this is a script that would run occasionally, whenever the server is not busy.
Here is my attempt at it:
#!/bin/bash
# you also need to clean up .gz files when you remove the originals
find . -type f -perm -o=r -not -iname \*.gz |
while read -r x
do
    if [ "$x" -nt "$x.gz" ]; then
        gzip -cn9 "$x" > "$x.gz"
        chown --reference="$x" "$x.gz"
        chmod --reference="$x" "$x.gz"
        touch --reference="$x" "$x.gz"
        # drop the .gz again if compression did not actually save space
        if [ $(stat -c %s "$x.gz") -ge $(stat -c %s "$x") ]; then
            rm "$x.gz"
        fi
    fi
done
Stole most of it from here: https://superuser.com/questions/482787/gzip-all-files-without-deleting-them
Changes include:
skipping .gz files
adding -9 and -n to make the files smaller
deleting files that ended up larger (unfortunately this means they will be retried every time you run the script.)
made sure the owner, permissions, and timestamp on the compressed file matches the original
only works on files that are readable by everyone
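For the deletion half (orphaned .gz files whose original is gone), a small companion pass might look like this (a sketch, not part of the original answer):
find . -type f -iname '*.gz' |
while read -r gz
do
    # remove the .gz when its uncompressed counterpart no longer exists
    [ -e "${gz%.gz}" ] || rm "$gz"
done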
Something like this, maybe?
#!/bin/sh
case $1 in
    *.gz )
        # If it's an orphan, remove it
        test -f "${1%.gz}" || rm "$1" ;;
    # Otherwise, it will be handled when the existing parent is handled
    * )
        make -f - <<'____HERE' "$1.gz"
%.gz: %
	# Make sure the recipe lines start with a literal tab!
	gzip -9 <$< >$@
____HERE
        ;;
esac
If you have a Makefile already, by all means use a literal file rather than a here document.
Integrating with find is left as an exercise. You might want to accept multiple target files and loop over them, to save processes.
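For example, the find integration could be as simple as this (assuming the script above is saved as gzip-sync.sh and made executable):
find . -type f -exec ./gzip-sync.sh '{}' \;
Every file, compressed or not, is handed to the script, which either prunes an orphaned .gz or rebuilds a stale one.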

Unzip a ZIP file and extract the unknown folder's contents

My users will be zipping up files which will look like this:
TEMPLATE1.ZIP
|--------- UnknownName
|------- index.html
|------- images
|------- image1.jpg
I want to extract this zip file as follows:
/mysite/user_uploaded_templates/myrandomname/index.html
/mysite/user_uploaded_templates/myrandomname/images/image1.jpg
My trouble is with UnknownName: I do not know what it is beforehand, and extracting everything to the "base" level breaks all the relative paths in index.html.
How can I extract from this ZIP file the contents of UnknownName?
Is there anything better than:
1. Extract everything
2. Detect which new subdirectory got created
3. mv newsubdir/* .
4. rmdir newsubdir/
If there is more than one subdirectory at the UnknownName level, I can reject that user's zip file.
I think your approach is a good one. Step 2 could be improved by extracting into a newly created directory (deleted afterwards) so that the detection becomes trivial.
# Bash (minimally tested)
tempdest=$(mktemp -d)
unzip -d "$tempdest" TEMPLATE1.ZIP
dir=("$tempdest"/*)
if (( ${#dir[@]} == 1 )) && [[ -d $dir ]]; then
    # in Bash, a scalar $var is the same as ${var[0]}
    mv "$dir"/* /mysite/user_uploaded_templates/myrandomname
else
    echo "rejected"
fi
rm -rf "$tempdest"
The other option I can see, besides the one you suggested, is to use unzip's -j flag, which dumps all paths and puts every file into the target directory. If you know for certain that each of your TEMPLATE1.ZIP files includes an index.html and *.jpg files, then you can just do something like:
destdir=/mysite/user_uploaded_templates/myrandomname
unzip -j TEMPLATE1.ZIP -d "$destdir"
mkdir "${destdir}/images"
mv "${destdir}"/*.jpg "${destdir}/images"
It's not exactly the cleanest solution, but at least you don't have to do any parsing like in your example. I can't seem to find any option similar to patch -p# that lets you specify the path level.
Each zip and unzip implementation differs, but there's usually a way to list the archive's contents. From there, you can parse the output to determine the unknown directory name.
On Windows, with the 1996 Wales/Gaily/van der Linden/Rommel version, it is unzip -l.
Of course, you could simply let unzip extract the files to whatever directory it wants, then use mv to rename that directory to what you want:
tempDir="temp.$$"
mkdir "$tempDir"
mv "$zipFile" "$tempDir"
cd "$tempDir"
unzip "$(basename "$zipFile")"
unknownDir=(*/)   # should be the only directory here
mv "${unknownDir[0]}" "$whereItShouldBe"
cd ..
rm -rf "$tempDir"
It's always a good idea to create a temporary directory for these types of operations in case you end up running two instances of this command.
