This is my first post, and actually my first steps into bash.
I wanted to create an automated FTP server backup to make my life easier. Since writing this script felt good, I started working on improvements for it, and now I've run out of ideas.
I wanted the backup script to download all files from the FTP server, with the credentials defined as variables (which I am proud of, lol), but then I realized I could use the same script for all my servers if I stored the login credentials in a separate file.
The problem is that I can't make my script read these values for the backup processing, and they remain blank, as I can see from the output.
If anyone has any ideas on how I can make it read my credentials and server name from a separate file, I would be grateful!
Below you can find my files and scripts in the order they are processed:
'start_backup.sh'
#!/bin/sh
# Load credentials
/home/user/scripts/credentials.sh
# Launch backup script
/home/user/scripts/my_ovh_backup.sh
credentials.sh
#!/bin/sh
my_ovh_server="(server address goes here)"
my_ovh_login="(login goes here)"
my_ovh_password="(password goes here)"
my_ovh_backup.sh
# Path that needs to be copied
backup_files="/(path on server)"
# Where to back up to
dest="/home/user/ftp_backup"
data=$(date +%y-%m-%d)
# Create folder with current date
mkdir $dest/in_progress_$data
# Copy data
cd $dest/in_progress_$data
wget -r -nc ftp://$my_ovh_login:$my_ovh_password@$my_ovh_server/$backup_files
# Rename folder after download completion
mv $dest/in_progress_$data $dest/$data
# Cleanup
rm -r $dest/$data
Output that I receive:
./start_backup.sh
ftp://:@//(remote folder)/: Invalid host name.
adding: home/user/ftp_backup/22-01-02/ (stored 0%)
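For reference, whether the my_ovh_* variables survive into the backup script depends on how credentials.sh is run: executing it starts a child shell whose variables disappear when it exits, while sourcing it runs it in the current shell. The following is only a minimal sketch of that difference, reusing the paths from the scripts above; it is an assumption about the intended behaviour, not a confirmed fix:

#!/bin/sh
# Plain execution: runs in a child shell, so the variables are lost afterwards
# /home/user/scripts/credentials.sh

# Sourcing with '.': runs in the current shell, so my_ovh_server,
# my_ovh_login and my_ovh_password stay defined for the next step
. /home/user/scripts/credentials.sh

# Source the backup script too, so it sees the unexported variables
# (exporting them in credentials.sh would be the alternative)
. /home/user/scripts/my_ovh_backup.sh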
Related
I am not sure whether this can be remedied by programming means, but I had a mailx shell script that was working properly on the test server. When I run the script on the production server, however, the recipient only receives a file named 'eml' that can't even be opened. I was informed by the system administrator that the configurations of both servers are the same and that I should be adjusting my code.
But I used the exact same shell script, and it works on the test server.
cd /home/guava/autoemail
datediff=1
datetoday=$(date +%Y%m%d)
foldername=$(date --date="${datetoday} -${datediff} day" +%Y%m%d)
mv DEALS_ENTERED_TODAY_ALL_2OM_UP.xls DEALS_ENTERED_TODAY_ALL_2OM_UP_$foldername.xls
zip -P $foldername DEALS_ENTERED_TODAY_ALL_2OM_UP_$foldername.zip DEALS_ENTERED_TODAY_ALL_2OM_UP_$foldername.xls
cat /home/guava/autoemail/email_body.txt | mailx -s "AML_20M_DAILY_TRANSACTION_REPORT_GUAVA_$foldername" -a /home/guava/autoemail/DEALS_ENTERED_TODAY_ALL_2OM_UP_$foldername.zip ben@onionwank.com
rm DEALS_ENTERED_TODAY_ALL_2OM_UP_$foldername.xls
rm DEALS_ENTERED_TODAY_ALL_2OM_UP_$foldername.zip
What it does:
- declares yesterday's date
- renames an Excel file with yesterday's date
- zips the Excel file with a password
- emails it to the user
- deletes the used files
I just want to ask if there is anything I can improve in my code so I can use it on the production server. Why does the server send an 'eml' file instead of the attachment I defined?
It is possible that this is a server issue, but the system admins don't seem to know what to do.
I've created a bash script to migrate sites and databases from one server to another. The algorithm:
Parse the .pgpass file to create individual dumps of all the specified Postgres databases.
Upload said dumps to another server via rsync.
Upload a bunch of folders related to each database to the other server, also via rsync.
Since databases and folders have the same name, the script can predict the location of the folders if it knows the database name. The problem I'm facing is that the loop only executes once (only the first line of .pgpass is processed).
This is my script, to be run in the source server:
#!/bin/bash
# Read each line of the input file, parsing fields separated by a colon (:)
while IFS=: read host port db user pswd ; do
# Create the dump. No need to enter the password as we're using .pgpass
pg_dump -U $user -h $host -f "$db.sql" $db
# Create a dir in the destination server to copy the files into
ssh user@destination.server mkdir -p webapps/$db/static/media
# Copy the dump to the destination server
rsync -azhr $db.sql user@destination:/home/user
# Copy the website files and folders to the destination server
rsync -azhr --exclude "*.thumbnails*" webapps/$db/static/media/ user@destination.server:/home/user/webapps/$db/static/media
# At this point I expect the script to continue with the next line, but it exits after the first one
done < $1
This is .pgpass, the file to parse:
localhost:*:db_name1:db_user1:db_pass1
localhost:*:db_name3:db_user2:db_pass2
localhost:*:db_name3:db_user3:db_pass3
# Many more...
And this is how I'm calling it:
./my_script.sh .pgpass
At this point everything works. The first dump is created, and it is transferred to the destination server along with the related files and folders. The problem is that the script finishes there and won't parse the other lines of .pgpass. I've commented out all the lines related to rsync (so the script only creates the dumps), and then it works correctly, executing once for each line in the file. How can I get the script to not exit after executing rsync?
BTW, I'm using key-based ssh auth to connect the servers, so the script is completely prompt-less.
Let's ask shellcheck:
$ shellcheck yourscript
In yourscript line 4:
while IFS=: read host port db user pswd ; do
^-- SC2095: ssh may swallow stdin, preventing this loop from working properly.
In yourscript line 8:
ssh user@destination.server mkdir -p webapps/$db/static/media
^-- SC2095: Add < /dev/null to prevent ssh from swallowing stdin.
And there you go.
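Applied to the loop above, the fix is roughly the following sketch: redirecting ssh's stdin from /dev/null stops it from consuming the rest of the input file (the rsync lines are left as they were, since shellcheck only flagged the ssh call):

while IFS=: read host port db user pswd ; do
    pg_dump -U $user -h $host -f "$db.sql" $db
    # stdin redirected so ssh can't swallow the remaining lines of $1
    ssh user@destination.server mkdir -p webapps/$db/static/media < /dev/null
    rsync -azhr $db.sql user@destination:/home/user
    rsync -azhr --exclude "*.thumbnails*" webapps/$db/static/media/ user@destination.server:/home/user/webapps/$db/static/media
done < $1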
I am new to FTP configuration. What I am trying to do is as follows:
I am running a shell script on my localhost and downloading some files to my machine. Now I want the downloaded files to be stored in a temporary directory first and then transferred to a location (another directory) that I specify. I feel this mechanism is achievable with FTP and will be helpful when I host this on a domain, but I am not finding resources from which I can teach myself how to set this up.
OK, having visited many sites, here are some resources you might find handy:
For configuring vsftpd, here's a manual on how to install, configure and use it.
For receiving many files recursively via FTP, you can use wget (extracted from this site):
cd /tmp/ftptransfer
wget --mirror --user=foo --password=bar ftp://ftp.originsite.com/path/to/folder
For sending many files recursively, many people find that the only way is tar-and-send; the problem is that the files remain tarred until you go to the other machine (remotely or via ssh) and extract them manually. There is an alternative that doesn't use FTP but uses ssh and a pipe, which lets you have the files extracted on the target machine:
tar -cf - /tmp/ftptransfer | ssh geek@targetsite "cd target_dir; tar -xf -"
Explained:
tar: the program that creates tar archives
-c: create an archive
-f -: the file name is "stdout"
/tmp/ftptransfer: include this folder and all subdirectories in the tar
|: pipe to the next program (connect stdout to stdin)
ssh: the Secure Shell program
geek@targetsite: username @ machine name you want to connect to
"...": command to run on the remote host
cd target_dir: changes to the output directory
tar -xf -: extracts the archive received on stdin
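The same pipe can be turned around to pull files from the remote machine instead; this is only a sketch with a made-up remote path, run from the receiving side:

# Create the tar on the remote host and extract it locally into /tmp/ftptransfer
ssh geek@targetsite "tar -cf - /path/on/remote" | tar -xf - -C /tmp/ftptransfer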
For configuring SSH on Ubuntu, have a look here.
If you need more help, don't be afraid to ask! :)
This is Srikanth from Hyderabad.
I am a Linux administrator at a corporate company. We have a Squid server, so I prepared a backup Squid server so that when the live Squid server goes down I can put the backup server into production.
My Squid servers run CentOS 5.5. I have prepared a script to back up all configuration files in /etc/squid/ of the live server to the backup server, i.e. it copies all files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Here's the script, saved as squidbackup.sh in the directory /opt/ with permissions 755 (rwxr-xr-x):
#! /bin/sh
username="<username>"
password="<password>"
host="Server IP"
expect -c "
spawn /usr/bin/scp -r <username>@Server IP:/etc/squid /etc/
expect {
"*password:*"{
send $password\r;
interact;
}
eof{
exit
}
}
"
Kindly note that this will be executed on the backup server, which will check for the user mentioned in the script. I have created a user on the live server and put the same one in the script.
When I execute the script using the command below:
[root@localhost ~]# sh /opt/squidbackup.sh
everything works fine: the script downloads all the files from the directory /etc/squid/ of the live server to /etc/squid/ on the backup server.
The problem arises when I set this in crontab as below (or with other timings):
50 23 * * * sh /opt/squidbackup.sh
I don't know what's wrong, but it does not download all the files, i.e. the cron job downloads only a few files from /etc/squid/ of the live server to /etc/squid/ of the backup server.
Only a few files are downloaded when cron executes the script; if I run the script manually, it downloads all the files perfectly, without any errors or warnings.
If you have any more questions, please go ahead and post them.
I kindly request any available solutions. Thank you in advance.
Thanks for your interest. I have tried what you said and it shows the output below, but previously I used to get the same output in the mail of the user on the Squid backup server.
Even the cron logs show the same, but I was not able to work out the exact error from the lines below.
Please note that only a few files are downloaded with cron.
spawn /usr/bin/scp -r <username>@ServerIP:/etc/squid /etc/
<username>@ServerIP's password:
Kindly check if you can suggest anything else.
Try the simple options first. Capture stdout and stderr as shown below; these files should point to the problem.
Looking at the script, you may also need to specify the full path to expect, since cron runs with a minimal environment. That could be the issue.
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1
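As a sketch of what specifying the location of expect could look like inside squidbackup.sh (the path /usr/bin/expect is an assumption; confirm it with "which expect"):

#!/bin/sh
# Give cron's minimal environment an explicit PATH so expect and scp are found
PATH=/bin:/usr/bin:/usr/local/bin
export PATH
# ...rest of the script unchanged; alternatively, call expect by its
# absolute path, e.g. /usr/bin/expect -c "..." instead of plain expect -c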
I'm running CentOS 6.
I need to upload some files every hour to another server.
I have SSH access with a password to the server, but ssh keys etc. are not an option.
Can anyone help me out with a .sh script that uploads the files via scp and deletes the originals after a successful upload?
For this, I'd suggest using rsync rather than scp, as it is far more powerful. Just put the following in an executable script. Here, I assume that all the files (and nothing more) are in the directory pointed to by local_dir/.
#!/usr/bin/env bash
rsync -azrp --progress --password-file=path_to_file_with_password \
local_dir/ remote_user#remote_host:/absolute_path_to_remote_dir/
if [ $? -ne 0 ] ; then
echo "Something went wrong: don't delete local files."
else
rm -r local_dir/
fi
The options are as follows (for more info, see, e.g., http://ss64.com/bash/rsync.html):
-a, --archive Archive mode
-z, --compress Compress file data during the transfer
-r, --recursive recurse into directories
-p, --perms Preserve permissions
--progress Show progress during transfer
--password-file=FILE Get password from FILE
--delete-after Receiver deletes after transfer, not during
Edit: removed --delete-after, since that's not the OP's intent
Be careful when setting the permissions on the file containing the password. Ideally, only you should have access to the file.
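For instance, restricting that file to your own user could look like this (using the placeholder path from the script above):

chmod 600 path_to_file_with_password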
As usual, I'd recommend playing a bit with rsync to get familiar with it. It is best to check the return value of rsync (using $?) before deleting the local files.
More information about rsync: http://linux.about.com/library/cmd/blcmdl1_rsync.htm