scp and remote mkdir -p - bash

Hi, I have file paths like
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICljiR/split/AG19_235/120225_I872_FCC0HN2ACXX_L8_RICljiRSYHSD2-1-IPAAPEK-17_1.fq.gz
I need to copy files from one server to another, and I also need to create the directory on the remote server if it does not exist.
I log in to the server which contains those files, then run this code:
#! /bin/bash
while read myline
do
    for i in $myline
    do
        if [ -f $i ]
        then
            location=$(echo "$i" | awk -F "/" '{ print "", $6, $7, $8 }' OFS="/")
            # location shows /T11073_RICekkR/Fq/AS59_59304
            location="/opt/CLiMB/Storage3/ftp/ftp_climb/100033"$location
            echo $location
            ssh tam@192.168.174.43 mkdir -p $location
            scp -r $i tam@192.168.174.43:$location
        fi
    done
done < /ifshk5/BC_IP/PROJECT/T11073/T11073_all_3254.fq.list
It has a problem: it doesn't work and always shows "Permission denied, please try again."
But when I directly type
ssh tam@192.168.174.43 mkdir -p /sample/xxxx
it works, and the new directory location is right; it shows up as
/opt/CLiMB/Storage3/ftp/ftp_climb/100033/T11073_RICekkR/Fq/AS59_59304

I don't see where the "permission denied" error might come from; run the script with bash -x to see the command which causes the error. Maybe it's not what you expect.
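One common culprit in while-read loops like this is ssh swallowing the rest of the loop's stdin (the file list), which ssh then treats as password input. A minimal sketch of both the trace and the guard, assuming the script is saved as copy_fq.sh (hypothetical name):
# run with tracing to see every expanded command
bash -x ./copy_fq.sh
# inside the loop, keep ssh away from the stdin that feeds 'read';
# -n redirects ssh's stdin from /dev/null
ssh -n tam@192.168.174.43 mkdir -p "$location"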
Also, try rsync instead of reinventing the wheel:
rsync --dirs $i tam@192.168.171.34:$location
--dirs will create the necessary folders on the remote side (and it will give you good error messages when something fails).
It might even be possible to do everything with a single call to rsync if you have the same folder structure on both sides:
rsync -avP /ifshk5/BC_IP/PROJECT/T11073/ tam@192.168.171.34:/opt/CLiMB/Storage3/ftp/ftp_climb/100033/
Note the / after the paths! Don't omit them.
rsync will figure out which files need to be transferred and copy only those. If you want to transfer only a subset, use --include-from.
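For example, a hedged sketch of a subset transfer (subset.list is a hypothetical file holding paths relative to the source directory, one per line):
# --include='*/' keeps directories traversable, the final --exclude drops
# everything not listed; -m prunes directories that end up empty
rsync -avPm --include='*/' --include-from=subset.list --exclude='*' \
    /ifshk5/BC_IP/PROJECT/T11073/ tam@192.168.171.34:/opt/CLiMB/Storage3/ftp/ftp_climb/100033/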

Related

Bash: Check if remote directory exists using FTP

I'm writing a bash script to send files from a Linux server to a remote Windows FTP server.
I would like to check, using FTP, whether the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH or SCP, and I cannot install new scripts on the Linux server. Also, for performance reasons, I would prefer that checking and creating the folders be done using only one FTP connection.
Here's the function to send the file:
sendFile() {
    ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
    local r
    local a
    r="$@"
    while [[ "$r" != "$a" ]]; do
        a=${r%%/*}
        echo "mkdir $a"
        echo "cd $a"
        r=${r#*/}
    done
}
The ftp_mkdir_loop function helps in creating all the folders in $FTP_PATH (Since I cannot do mkdir -p $FTP_PATH through FTP).
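For example, called on a relative three-level path, the function emits paired mkdir/cd commands, one level at a time:
$ ftp_mkdir_loop "dir1/dir2/dir3"
mkdir dir1
cd dir1
mkdir dir2
cd dir2
mkdir dir3
cd dir3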
Overall my script works, but it is not "clean"; this is what I get in my log file after running the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you only use one FTP connection, create the input (the FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
echo "cd /home/test1"
echo "mkdir /home/test1/test2"
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To let FTP test whether a directory exists, use the fact that the "DIR" command has an option to write its output to a file:
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "DIR $CURRENT_DIR $local_output_file"
sleep 5 # leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
    echo "mkdir $CURRENT_DIR"
fi
Please note that "-s" test is not necessarily correct - I don't have acccess to ftp now and don't know what the exact output of running DIR on non-existing directory will be - cold be empty file, could be a specific error. If error, you can grep the error text in $local_output_file
Now, wrap the step #2 into a loop over your individual subdirectories in a.sh
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar@example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote pass ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}

How to copy only file permissions and user:group from one machine and apply them on another machine in linux?

I have a list of files (absolute paths of files/folders) stored in a text file. I need to copy only the permissions and the user:group attributes of all those files from one machine and apply the same settings to the same set of files on another machine.
One way I can think of is to do it manually, one by one: check the attributes on one machine and do chmod/chown on the other machine, file by file, but that seems like a tedious task.
Any idea how to automate this?
Edit: Just to make clear, I don't need the data of these files from the source machine, because the data differs between machines. The target machine contains the updated data; the only thing I need from the source machine is the file/folder permissions and user:group.
How about this?
#!/bin/bash
user="user"
host="remote_host"
while read file
do
    permission=$(stat -c %a "$file") # retrieve permission
    owner=$(stat -c %U "$file")      # retrieve owner
    group=$(stat -c %G "$file")      # retrieve group
    # just for debugging
    echo "$file@local: p = $permission, o = $owner, g = $group"
    # copy the permission
    ssh $user@$host "chmod $permission $file" < /dev/null
    # copy both owner and group
    ssh $user@$host "chown $owner:$group $file" < /dev/null
done < list.txt
I am assuming that the list of files is saved in "list.txt".
Moreover, you should set the variables "user" and "host" according to your setup.
I would suggest configuring ssh for "automatic login"; otherwise you would have to enter the password twice per loop iteration. Here is a good tutorial for doing that: SSH login without password.
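For example, key-based login can usually be set up like this (assuming you have, or first create, a key pair):
ssh-keygen -t ed25519          # create a key pair if you don't have one
ssh-copy-id user@remote_host   # append your public key to the remote authorized_keys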
Another solution that establishes just one ssh connection and uses the recursive option
for the directories (as asked in the comments) is the following:
#!/bin/bash
user="user"
host="remote_host"
cat list.txt | xargs stat -c "%n %a %U:%G" | ssh $user@$host '
while read file chmod_par chown_par
do
    # $file contains %n
    # $chmod_par contains %a
    # $chown_par contains %U:%G
    if [ -d "$file" ]; then
        chmod -R $chmod_par "$file"
        chown -R $chown_par "$file"
    else
        chmod $chmod_par "$file"
        chown $chown_par "$file"
    fi
done'
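For reference, each line travelling over the pipe has the form "name octal-permissions owner:group", e.g. (hypothetical file):
/data/project/report.txt 644 alice:staff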

shell bash get file path and put into sub-directory

I have a long string on each line, one line like:
1000 AS34_59329 RICwdsRSYHSD11-2-IPAAPEK-93 /ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59329/111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz /ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59329/111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_2.fq.gz /ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59329/clean_111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz.total.info 11.824 0.981393 43.8283 95.7401 OK
This line contains three file locations (the three /ifshk5/... paths). I need to scp those files to another location such as /sample, and I also need to create a sub-directory to put the files in; for this line the files go into AS34_59329, so I need to create /sample/AS34_59329.
Many lines may have the same sub-directory name, so the script needs to check whether the sub-directory has already been created.
How do I auto-create the sub-directory?
#! /bin/bash
while read myline
do
    for i in $myline
    do
        if [ -f $i ]; then
            scp -r $i xxxx@192.168.174.33:/sample
        fi
    done
done < data.list
It looks like you have ssh keys set up, so remote commands over ssh will work for you:
if [ -f $i ]; then
    ssh xxxx@192.168.174.33 '[ -d /sample ] && echo "OK" || mkdir /sample'
    scp -r $i xxxx@192.168.174.33:/sample
fi
This will only work if you have privilege on the remote box to create /sample.
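To also create the per-sample sub-directory the question asks for, here is a sketch, assuming the sample name (e.g. AS34_59329) is always the parent directory of each file:
sub=$(basename "$(dirname "$i")")                    # e.g. AS34_59329
ssh -n xxxx@192.168.174.33 "mkdir -p /sample/$sub"   # no-op if it already exists
scp -r "$i" "xxxx@192.168.174.33:/sample/$sub/"
(-n keeps ssh from eating the while-read loop's stdin.)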

Pass url to a bash script for use in scp

I'm writing a cron job to back up some stuff on a server.
Basically I'm sending specific files from a local directory using scp.
I'm using a public key to avoid authentication prompts.
For reusability, I pass the local directory and the server URL as arguments to my bash script.
How I set my parameters:
#!/bin/bash
DIR="$1"
URL="$2"
FILES="$DIR*.ext"
My problem is about formatting the url.
Without formatting
How I send files to the server:
#!/bin/bash
for F in $FILES
do
    scp $F $URL
    if ssh $URL stat $(basename "$F")
    then
        rm $F
    else
        echo "Fails to copy $F to $URL"
    fi
done
If I try to copy to the user's home on the server, I do:
$ ~/backup /path/to/local/folder/ user@server.com:
If I try to copy to a specific directory on the server, I do:
$ ~/backup /path/to/local/folder/ user@server.com:/path/to/remote/folder/
In all cases it gives me the well-known error (and my custom echo):
ssh: Could not resolve hostname user@server.com: nodename nor [...]
Can't upload /path/to/local/folder/file.ext to user@server.com
And it works anyway (the file is copied). But that's not a solution, because as scp fails (or seems to), the file is never deleted.
With formatting
I tried sending files using this method:
#!/bin/bash
for F in $FILES
do
scp $F "$URL:"
done
I no longer get an error, and it works for copying to the user's home directory and then deleting the local file:
$ ~/backup /path/to/local/folder/ user@server.com
But, of course, sending to a specific directory doesn't work at all.
Finally
So I think that my first method is more appropriate, but how can I get rid of that error?
Your mistake is that you can scp to user@server.com: but you cannot ssh to it: you need to remove the trailing : character (and the possible path after it). You can do that easily with bash parameter expansion:
ssh "${URL%:*}" stat "$(basename "$F")"
RECOMMENDATIONS
"USE MORE QUOTES!" They are vital. Also, learn the difference between ' and " and `. See http://mywiki.wooledge.org/Quotes and http://wiki.bash-hackers.org/syntax/words
if you have spaces in filenames, your code will breaks things up. Better use while IFS= read -r line; do #stuff with $line; done < file.txt
See bash parameter expansion

bash: check if remote file exists using scp

I am writing a bash script to copy a file from a remote server, to my local machine. I need to check to see if the file is available, so I can take an alternative action if it is not there.
I know how to test if a local file exists, however, using scp complicates things a bit. Common sense tells me that one way would be to try to scp the file anyhow, and check the return code from the scp command. Is this the correct way to go about it?
If yes, how do I test the return code from the scp invocation?
Using ssh + some shell code embedded in the command line; use this method when you need to make a decision before the file transfer would fail:
ssh remote-host 'sh -c "if [ -f ~/myfile ] ; then gzip -c ~/myfile ; fi" ' | gzip -dc > /tmp/pkparse.py
If you want to transfer directories, you may want to "tar" them first.
if you want to use scp you can check the return code like this:
if scp remote-host:~/myfile ./ >&/dev/null ; then echo "transfer OK" ; else echo "transfer failed" ; fi
It really depends on when it's important for you to know whether the file is there: before the transfer starts (use ssh+sh) or after it has finished.
Well, since you can use scp, you can try using ssh to list the file and see whether it is there before proceeding.
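A minimal sketch of that pre-check, assuming key-based ssh login; test -f sets the exit status according to whether the remote file exists:
if ssh remote-host 'test -f ~/myfile'
then
    scp remote-host:~/myfile ./
else
    echo "remote file missing, taking the alternative action"
fi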
