Bash .sh script for remote backups

I have an issue on a server whereby, on occasion, automated backups from the server to a remote host fail.
Currently this leaves me with no recent backups and with a pile of .tar.gz files taking up a large amount of space on the server.
My current process for correcting this when it happens is to manually PuTTY in and FTP the files across individually from the command line. This is time-consuming and tedious.
I want to write a .sh script I can upload to the folder that tells the server to put across each .tar.gz file in the folder. I can't transfer the folder as a whole, only each file in it individually, as some files have already transferred correctly.
I found this question, which shows a script that worked for its asker, but I need to adjust parts of it. I am not confident enough with .sh scripting to do this myself, and I am also wary of screwing up anything server-side.
#!/bin/sh
USERNAME="user"
PASSWORD="pass"
SERVER="123.456.78.90"
DATE="`date +%Y-%m-%d`"
BACKUPDIR="/${DATE}/accounts/"
find . -type f -name "*.tar.gz" -exec basename {} .tar.gz \; |
while read filename ; do
/bin/ftp -inv $SERVER >> /tmp/ftp_backup.log <<EOF
user $USERNAME $PASSWORD
cd $BACKUPDIR
binary
put $filename
EOF
echo "$date copied $filename" >> /tmp/ftp_backup.log
done
My intention is to make a script that I can upload into the server folder in question and then run (after chmodding it) so that the .tar.gz files are FTP'd across, one at a time, to the backup directory (/<date>/accounts/), finishing once they're all moved.
(Then I would delete the server-side .tar.gz files and the .sh script above.)
There are ~60 files, up to 15 GB each in size. Filenames do not contain spaces.
Filepath structures:
Server side:
/backupsfolder/2018-07-11/filename1.tar.gz
/backupsfolder/2018-07-11/filename2.tar.gz
/backupsfolder/2018-07-11/backupscript.sh //my script above
/backupsfolder/2018-07-11/master.meta //other files
FTP side:
/2018-07-11/accounts/filename1.tar.gz
What do I need to adjust in the above script to do this?

After some work I found a few issues to be careful of and fix:
1) In order to run, .sh files need to be "enabled" with chmod on the server.
chmod +x ./<filename>
2) Unix line endings: while editing in Notepad++ it claimed the file was saved with correct line endings, but this error kept coming up on the server:
/bin/sh^M: bad interpreter: No such file or directory
This was solved with:
sed -i 's/\r//' <filepath>/<filename>
from this answer.
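If you want to confirm the line endings are the culprit before editing anything, the file utility flags them; on a script saved with Windows endings it typically reports something like "Bourne-Again shell script, ASCII text executable, with CRLF line terminators":
file backupscript.sh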
3) The names of the files being pushed to FTP were wrong: they did not include the .tar.gz extension. I hadn't realised the .tar.gz argument to -exec basename was cutting off the suffix.
This was fixed by changing
-exec basename {} .tar.gz
to
-exec basename {}
4) Log file output was not being written on new lines; instead it was all ending up on one line.
This was fixed by reading this answer and using -e on the echo statements together with the \n escape:
echo -e "$date copied $filename\n"
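For what it's worth, printf is a more portable alternative: it needs no flag for the newline and behaves the same under any POSIX /bin/sh:
printf '%s copied %s\n' "$DATE" "$filename"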
Final fully working bash script for my needs:
1) Save the script to the server
2) Run sed -i 's/\r//' /<filepath>/<filename>
3) Run chmod +x ./<filename>
4) Run the file in bash.
5) View results in the tmp directory specified.
The script
This script takes the .tar.gz files in the current directory and uploads them to the remote FTP server, cycling through each file in turn.
#!/bin/bash
# bash rather than plain sh: the echo -e below relies on bash's echo.
# Credentials and target host are placeholders; set these for your server.
USERNAME="user"
PASSWORD="pass"
SERVER="123.456.78.90"
DATE=$(date +%Y-%m-%d)
BACKUPDIR="/${DATE}/accounts/"

# List each .tar.gz in the current directory by name and upload it in its
# own FTP session, appending the session output to the log.
find . -type f -name "*.tar.gz" -exec basename {} \; |
while read -r filename ; do
ftp -inv "$SERVER" >> /tmp/My_ftp_backup.log <<EOF
user $USERNAME $PASSWORD
cd $BACKUPDIR
binary
put $filename
EOF
# $DATE, not $date: shell variables are case-sensitive and $date was never set.
echo -e "$DATE copied $filename\n" >> /tmp/My_ftp_backup.log
done
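Run end to end on the server, the whole procedure looks like this (using the backupscript.sh name from the example layout above):
sed -i 's/\r//' backupscript.sh
chmod +x ./backupscript.sh
./backupscript.sh
cat /tmp/My_ftp_backup.log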

Related

script read file contents and copy files

I wrote a script in bash that should read the contents of a text file, look for the corresponding file for each line, and copy it to another folder. It's not copying all the files, only two of them: the third and the last.
#!/bin/bash
filelist=~/Desktop/file.txt
sourcedir=~/ownCloud2
destdir=~/Desktop/file_out
while read line; do
find $sourcedir -name $line -exec cp '{}' $destdir/$line \;
echo find $sourcedir -name $line
sleep 1
done < "$filelist"
If I use this string on the command line, it finds and copies the file:
find ~/ownCloud2 -name 123456AA.pdf -exec cp '{}' ~/Desktop/file_out/123456AA.pdf \;
If I use the script instead it doesn't work.
I used your exact script and had no problems, with both bash and sh, so maybe you are using another shell in your shebang line.
Use find only when you need to find the file "somewhere" in multiple directories under the search start point.
If you know the exact directory in which the file is located, there is no need to use find. Just use the simple copy command.
Also, if you use "cp -v ..." instead of the "echo", you might see what the command is actually doing, from which you might spot what is wrong.
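A minimal sketch of that advice, assuming every name in file.txt refers to a file sitting directly in $sourcedir (no find, and cp -v so each copy is visible):
#!/bin/bash
filelist=~/Desktop/file.txt
sourcedir=~/ownCloud2
destdir=~/Desktop/file_out
while IFS= read -r line; do
    # -v prints each copy as it happens, so a missing file stands out immediately
    cp -v "$sourcedir/$line" "$destdir/$line"
done < "$filelist"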

Start a shell script with Cygwin from a batch file that is in a different directory than C:

I am trying to make my life easier and have been trying my hand at scripting.
Since the script I want to use is a shell script (I couldn't make it work in PowerShell; I had a problem reading the XML file, so a colleague coded a shell script that I am using now), I am trying to run it from a batch (.cmd) file.
So here is the idea on how it should work:
We have a .sh script that removes all the timestamps from the XML files in the directory the .sh file is in.
Code:
#!/bin/bash
# Delete every timestamp-related line from each .xml file under this directory.
for i in `find . -name "*.xml"`
do
    echo $i
    sed -i -e '/UCOMPSTAMP/d' \
           -e '/DAT name="UTIMESTAMP"/d' \
           -e '/DAT name="U_INTF"/d' \
           -e '/DAT name="U_SVCUSE"/d' \
           -e '/DAT name="U_FSEQ"/d' $i
done
This script works and deletes the timestamps from the XML files in this directory.
I have a directory U:\bla\bla\compare
I also export both XML files that I am going to compare into that directory.
Let's say XML_LIVE.xml and XML_TEST.xml.
Now I have a batch file (called "execute_sh.cmd") that tries to run the .sh file:
@echo off
C:\cygwin64\bin\bash -l /cygdrive/u/bla/compare/01_remove_timestamps.sh
pause
Right now it doesn't do anything; it also doesn't say that it can't find the path. I tried using ./01_remove_timestamps.sh and
U:\bla\Compare\01_remove_timestamps.sh
but then I get the error that it couldn't find the file. If I execute the command in Cygwin I have to change the directory with
cd /cygdrive/u/bla/compare/
and then
./remove_timestamps.sh
and this executes the shell script. So why is this not possible from the .cmd?
And my final .cmd (called "execute_all") has this code in it:
call "%0\..\execute_sh.cmd"
so I just have to start this one .cmd and everything runs automatically.
This entire thing works if I put all these .cmd and .sh files and the XML files in my Cygwin home directory -> C:\cygwin64\home\myName
The code would be
@echo off
C:\cygwin64\bin\bash -l 01_remove_timestamps.sh
pause
But I want to use the D: drive and a specific directory there. I hope this makes my problem clear.
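For what it's worth, one likely culprit: bash -l starts a login shell in the Cygwin home directory, not in the script's directory, so the script's relative find . paths search the wrong place. A .cmd can reproduce the interactive cd-then-run sequence with bash -c (paths assumed from the question above):
@echo off
REM cd to the script's directory first, exactly as done interactively, then run it
C:\cygwin64\bin\bash -l -c "cd /cygdrive/u/bla/compare/ && ./01_remove_timestamps.sh"
pause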

Schedule running a bash shell script in Windows

Appreciate any help, and excuse me if my terminology is incorrect.
This is a script (*.sh file) that:
1) goes to a specific dir A
2) copies files from another dir B to dir A
3) (commented out) also unzips the files in dir A and its subdirectories
4) (commented out) also removes rows 1-6 and the last row of all *.csv files in dir A
#!/bin/bash
# Configure bash so the script will exit if a command fails.
set -e
#cd to the dir you want to copy to:
cd /cygdrive/c/path/I/want/to/copy/to
#echo Hello
#cp the files i want
#include the subdirectories
cp -r /cygdrive/c/path/I/want/to/copy/from/* .
# This will unzip all .zip files in all subdirectories under this one.
# -o is required to overwrite everything that is in there
#find -iname '*.zip' -execdir unzip -o {} \;
#find ./ -iname '*.csv' -exec sed -i '1,6d;$ d' '{}' ';'
Now I can get this script to work in Cygwin by going to the dir where the file is stored and giving the following commands:
./filename.sh
or
/cygdrive/c/path/where/the/file/is/filename.sh
or
bash filename.sh
I can also do this in CMD/Windows DOS by doing the following:
C:\cygwin\bin\bash.exe -l
to get into a bash terminal and then give the following command:
/cygdrive/c/path/where/the/file/is/filename.sh
In Task Scheduler (in Windows) I have tried to schedule the following:
C:\cygwin\bin\bash.exe -l /cygdrive/c/path/where/the/file/is/filename.sh
but this does not work, even though the separate commands work in CMD/Windows DOS as I said above.
Now, what I want is to be able to schedule this script (filename.sh) in Windows Task Scheduler like I would a .vbs or .bat file. Can anyone advise on this?
Note: I have tried to write a Windows batch file (.bat) to do this (see below), but I could not get my unzip and sed commands to work; see here. So I wrote the bash shell script above instead.
chdir C:\pointA
C:\cygwin\bin\cp.exe /cygdrive/v/pointB/* .
::find -iname *.zip -execdir unzip {} \;
::find ./ -iname '*.csv' -exec sed -i '1,6d;$ d' '{}' ';'
A solution is to associate .sh files with a batch file that runs bash. That way, whenever you tell Windows to execute an .sh file, it'll use the correct launcher, whether that's via a double-click or a scheduled task. Here's mine:
@echo off
d:
chdir d:\cygwin\bin
bash --login %*
Associating a file type means that when you try to execute a file of that type, Windows passes the path to that file as an argument to the program you've specified. For example, I have LibreOffice 4 associated with .ods files. So if I double-click a .ods file, or just enter the path to a .ods file at the command prompt, Windows runs Open Office Calc with the first parameter being the .ods file. Say I have Untitled.ods on my desktop and I double-click it. That's effectively the same as opening up the command prompt, typing
"D:\Program Files (x86)\LibreOffice 4\program\scalc.exe" "C:\Users\Adam\Desktop\Untitled.ods"
and hitting enter. Indeed, if I do that, the expected happens: Open Office Calc starts up and loads the file.
You can see how this works if you change the association to echo.exe (which I found in D:\cygwin\bin).
If I change the association to echo, open up the command prompt and type
"C:\Users\Adam\Desktop\Untitled.ods"
I'll just see echo.exe echo the filename back to me.
So what I'm suggesting you do is this:
1) create a batch file to run bash scripts using Cygwin's bash (or use mine);
2) change the association for .sh files to that batch file;
3) execute those .sh files directly, as though they were .exe or .bat files.
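For step 2), one way to set the association is from an elevated command prompt with the built-in assoc and ftype commands (the bashscript type name and the launcher path here are made up for the example):
assoc .sh=bashscript
ftype bashscript="D:\cygwin\bin\runbash.bat" "%1" %*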
Why not create a batch file (.bat) that loads your Cygwin bash script and schedule that batch file? That way you don't have to deal with the way M$ handles parameters.
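The scheduling side can also be done from the command line with the built-in schtasks (the task name and launcher path here are hypothetical):
schtasks /Create /TN "RunShellScript" /TR "C:\path\to\launcher.bat" /SC DAILY /ST 02:00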

Bash script for unzipping files with unknown names

I have a folder that, after an rsync, will have a zip file in it. I want to unzip it to its own folder (if the zip is L155.zip, unzip its contents to an L155 folder). The problem is that I don't know its name beforehand (although I know it will be "letter-number-number-number"), so I have to unzip an unknown file to an unknown folder, and this has to happen automatically.
The command "unzip *" (or unzip *.zip) works in the terminal, but not in a script.
These are the commands that worked in the terminal one by one, but don't work in a script.
#!/bin/bash
unzip * #also tried .zip and /path/to/file/* when script is on different folder
i=$(ls | head -1)
y=${i:0:4}
mkdir $y
unzip * -d $y
First I unzip the file; then I read the name of the first extracted file through ls and save it in a variable. I take the first 4 chars, make a directory from them, and then unzip the files again into that specific folder.
The whole procedure after the first unzip is done because the files inside the zip all start with the name the zip already has, so if L155.ZIP is the zip, the files inside will be L155***.txt.
The zip file is at /path/to/file/NAME.zip.
When I run the script I get errors like the following:
unzip: cannot find or open /path/to/file/*.ZIP
unzip: cannot find or open /path/to/file//*.ZIP.zip
unzip: cannot find or open /path/to/file//*.ZIP.ZIP. No zipfiles found.
mkdir: cannot create directory 'data': File exists data
unzip: cannot find or open data, data.zip or data.ZIP.
Original answer
Supposing that foo.zip contains a folder foo, you could simply run
#!/bin/bash
unzip \*.zip \*
And then run it as bash auto-unzip.sh.
If you want to have these files extracted into a different folder, then I would modify the above as
#!/bin/bash
cp *.zip /home/user
cd /home/user
unzip \*.zip \*
rm *.zip
This, of course, you would run from the folder where all the zip files are stored.
Another answer
Another "simple" fix is to get dtrx (also available in the Ubuntu repos, possibly for other distros). This will extract each of your *.zip files into its own folder. So if you want the data in a different folder, I'd follow the second example and change it thusly:
#!/bin/bash
cp *.zip /home/user
cd /home/user
dtrx *.zip
rm *.zip
I would try the following.
for i in *.[Zz][Ii][Pp]; do
    DIRECTORY=$(basename "$i" .zip)
    DIRECTORY=$(basename "$DIRECTORY" .ZIP)
    unzip "$i" -d "$DIRECTORY"
done
As noted, the basename program removes the indicated suffix .zip from the filename provided.
I have edited it to be case-insensitive. Both .zip and .ZIP will be recognized.
for zfile in $(find . -maxdepth 1 -type f -name "*.zip")
do
    fn=${zfile:2:4}    # the first four characters after the leading "./", e.g. L155
    mkdir -p "$fn"
    unzip "$zfile" -d "$fn"
done
If the folder has only one file with the extension .zip, you can extract the name without the extension with the basename tool:
BASE=$(basename *.zip .zip)
This will produce an error message if there is more than one file matching *.zip.
Just to be clear about the issue here, the assumption is that the zip file does not contain a folder structure. If it did, there would be no problem; you could simply extract it into the subfolders with unzip. The following is only needed if your zipfile contains loose files, and you want to extract them into a subfolder.
With that caveat, the following should work:
#!/bin/bash
DIR=${1:-.}
# basename errors out (silently, thanks to 2>/dev/null) if the glob matches more than one zip
BASE=$(basename "$DIR/"*.zip .zip 2>/dev/null) ||
    { echo More than one zipfile >&2; exit 1; }
# an unexpanded glob means no zip file matched at all
if [[ $BASE = "*" ]]; then
    echo No zipfile found >&2
    exit 1
fi
mkdir -p "$DIR/$BASE" ||
    { echo Could not create $DIR/$BASE >&2; exit 1; }
unzip "$DIR/$BASE.zip" -d "$DIR/$BASE"
Put it in a file (anywhere), call it something like unzipper.sh, and chmod a+x it. Then you can call it like this:
/path/to/unzipper.sh /path/to/data_directory
A simple one-liner I use all the time:
$ for file in `ls *.zip`; do unzip $file -d `echo $file | cut -d . -f 1`; done
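The same idea without parsing ls, using a glob plus parameter expansion to strip the extension (a slightly safer variant, not the poster's original command):
for file in *.zip; do unzip "$file" -d "${file%.zip}"; done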

Shell Script to update the contents of a folder - 2

I wrote this piece of code this morning.
The idea is, a text file (new.txt) has the details about the directory structure and the files in the directory.
Read new.txt, create the same directory structure at a destination directory (here it is /tmp), copy the source files to the corresponding destination directory.
Script
clear
DEST_DIR=/tmp
for file in `cat new.txt`
do
    mkdir -p $file
    touch $file
    echo `ls -ltr $file`
    cp -rf $file $DEST_DIR
    find . -name $file -type f
    cp $file $DEST_DIR
done
Contents of new.txt
Test/test1/test1.txt
Test/test2/test2.txt
Test/test3/test3.txt
Test/test4/test4.txt
The issue is, it executes, creates the directory structure, but instead of creating files at the end of each path, it creates directories named test1.txt, test2.txt, etc. I have no idea why this is happening.
Another question: in Turbo C and C++ there is an option to trace the execution flow. Is there something similar in Unix for shell scripting and Perl?
The script creates these directories because you tell it to on the line mkdir -p $file. You have to extract the directory path from your filename. The standard command for this is dirname:
dir=`dirname "$file"`
mkdir -p -- "$dir"
The way to check the execution flow is to add set -x at the top of your script. This causes every line that is executed to be printed to stderr with "+ " in front of it.
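Putting both suggestions together, a sketch of the corrected loop (the /tmp destination and new.txt layout are taken from the question; while read is used so each line is handled whole):
#!/bin/bash
set -x                            # trace each command to stderr
DEST_DIR=/tmp
while IFS= read -r file; do
    dir=`dirname "$file"`         # e.g. Test/test1
    mkdir -p "$DEST_DIR/$dir"     # recreate only the directory part
    cp "$file" "$DEST_DIR/$dir"   # then copy the file into it
done < new.txt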
You might want to try something like rsync.
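For instance, rsync's --files-from option takes exactly this kind of list and recreates the implied directories at the destination (a sketch, assuming the paths in new.txt are relative to the current directory):
rsync -av --files-from=new.txt . /tmp/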
