I am trying to copy several files from a remote server to my local drive in Bash using scp.
Here's the relevant part of the code:
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that EVERY time it copies a file, it asks for the server's password, which is the same each time. I want it to ask only once and then copy all my files.
And please, do not suggest rsync or key-based authentication, since I do not want to use those.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass.
sshpass -p 'password' scp ...
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "your_password\r"
expect eof
A disadvantage is that your password is then stored in plaintext.
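To avoid hard-coding the password in the script, one hedged variation is to prompt for it once and let sshpass read it from the SSHPASS environment variable (sshpass's -e flag):
# Ask for the password once, then reuse it for every copy
read -r -s -p "Password: " SSHPASS && export SSHPASS
sshpass -e scp -q "$USR@$IP:/home/file1.txt" "$PWD"
sshpass -e scp -q "$USR@$IP:/home/file2.txt" "$PWD"
sshpass -e scp -q "$USR@$IP:/root/file3.txt" "$PWD"
unset SSHPASS   # don't leave the password sitting in the environment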
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is recursive, for copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system:
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
Then, from your local machine:
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
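If you prefer not to log in interactively, a hedged sketch of the same idea as a single ssh call (one password prompt for the staging, one for the copy):
ssh $USR@$IP 'mkdir -p /home/file_mover && cp /home/file1.txt /home/file2.txt /root/file3.txt /home/file_mover/ && tar -cvf /home/myTarball.tar -C /home file_mover'
scp -q $USR@$IP:/home/myTarball.tar "$PWD"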
Well, in this particular case, you can write:
scp -q $USR@$IP:'/home/file[1-3].txt' $PWD
Since this is a single scp invocation with one remote argument, the password is requested only once. Quoting the glob keeps your local shell from expanding it before it reaches the server.
Hi, I created the following script to initialize my storage box so I can use rsync without a password later. Last year it worked, if I remember correctly...
cat .ssh/id_rsa.pub >> .ssh/storagebox_authorized_keys
echo -e "mkdir .ssh \n chmod 700 .ssh \n put .ssh/storagebox_authorized_keys" \
".ssh/authorized_keys \n chmod 600 .ssh/authorized_keys" | sshpass -p ${storage_password} \
sftp -P ${storage_port} -i .ssh/id_rsa ${storage_user}@${storage_address}
Today I get the following error:
sshpass: invalid option -- 'i'
But the -i parameter belongs to sftp, not sshpass. Is there a way to pass the parameters through correctly?
Edit: I switched the position of
-i .ssh/id_rsa ${storage_user}@${storage_address}
and got this error:
sshpass: Failed to run command: No such file or directory
Edit: it seems like an sftp problem...
After discussion, updating the answer to properly support automation.
Step 1:
Create an sftp "batch file", e.g. ~/.ssh/storage-box_setup.sftp:
mkdir .ssh
chmod 700 .ssh
put /path/to/authorized_keys_file .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
/path/to/authorized_keys_file is a file containing ONLY the public keys that should have access to your storage box (.ssh/storagebox_authorized_keys).
Step 2:
Update the automation script command to:
sshpass -p <password> -- sftp -P <port> -b ~/.ssh/storage-box_setup.sftp user@host
The -b flag was the answer you needed.
Refer to man sftp:
-b batchfile
Batch mode reads a series of commands from an input batchfile instead of stdin. Since it lacks user interaction it should be used in conjunction with non-interactive authentication.
--
sshpass -p ${storage_password} -- \
sftp -P ${storage_port} -i .ssh/id_rsa \
${storage_user}@${storage_address}
The -- before sftp is a way to tell sshpass (and most other programs) to stop parsing options.
Everything after -- is treated as the command to be executed, which in the case of sshpass is sftp -P ... -i .ssh/id_rsa ...
In case you're wondering, switching the position of -i tells sshpass to execute -i as a program, hence it fails with "Failed to run command: No such file or directory".
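Once the batch file has run, a quick check (hedged, using the same placeholders as above) that key-based login now works:
# Should connect with no password prompt if the key was installed correctly
sftp -P <port> -i .ssh/id_rsa user@host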
I'm trying to delete the contents of a remote directory in a bash script, leaving the folder itself intact, by using ssh like this:
# First attempt
inboxResult=$(ssh -t -t username@host sudo -u rootUser rm -Rf /my/path/here/inbox/*)
# Second attempt
inboxResult=`ssh -t -t username@host sudo -u rootUser rm -Rf /my/path/here/inbox/*`
but it keeps failing silently. I've done my research and it seems like the '*' is being expanded before the command is sent via ssh to the remote host, but I want the opposite. I couldn't find any solution; I've tried more than these two attempts, but they all seem far from what I'm looking for.
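For what it's worth, a hedged sketch of the usual fix: quote the remote command so your local shell passes the * through untouched, and wrap it in sh -c so the glob expands on the remote side with rootUser's permissions (assuming a POSIX shell on the remote host):
# Single quotes stop local expansion; sh -c expands the glob remotely, as rootUser
inboxResult=$(ssh -t -t username@host "sudo -u rootUser sh -c 'rm -Rf /my/path/here/inbox/*'")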
I've only got a little question for you.
I have made a little shell script that allows me to connect to a server, gather certain files, and compress them to another location on another server, which works fine.
It is something in the vein of:
#!/bin/bash
ssh -T user@server1
mkdir /tmp/logs
cd /tmp/logs
tar -czvf ./log1.tgz /somefolder/docs
tar -czvf ./log2.tgz /somefolder/images
tar -czvf ./log3.tgz /somefolder/video
cd ..
tar -czvf logs_all.tgz /tmp/logs
What I would really like to do is:
Login with the root password when connecting via ssh
Run the commands
Logout
Login to next server
Repeat until all logs have been compiled.
Also, it is not essential, but if I can display the progress (as a bar perhaps) then that would be cool!!
If anyone can help that would be awesome.
I am in between n00b and novice so please be gentle with me!!
ssh can take a command as argument to run on the remote machine:
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder"
This will perform the tar command on the remote machine, writing the tar's output to stdout which is passed to the local machine by the ssh command. So you can write it locally somewhere (there's no need for that /tmp/logs/ on the remote machine):
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder" > /path/on/local/machine/log1.tgz
If you just want to collect them on the remote server (no wish to transfer them to the local machine), just do the straightforward version:
ssh -T user@server1 "mkdir /tmp/logs/"
ssh -T user@server1 "tar -cvzf /tmp/logs/log1.tgz /somefolder/anotherfolder"
ssh -T user@server1 "tar -cvzf /tmp/logs/log2.tgz /somefolder/anotherfolder"
…
ssh -T user@server1 "tar -czvf /tmp/logs_all.tgz /tmp/logs"
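If you'd rather log in (and be asked for the password) only once per server, a hedged sketch is to run all the commands in a single ssh session using a heredoc:
ssh -T user@server1 <<'EOF'
mkdir -p /tmp/logs
tar -czvf /tmp/logs/log1.tgz /somefolder/docs
tar -czvf /tmp/logs/log2.tgz /somefolder/images
tar -czvf /tmp/logs/log3.tgz /somefolder/video
tar -czvf /tmp/logs_all.tgz /tmp/logs
EOF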
You could send a tar command that writes a compressed archive to standard out and save it locally:
ssh user@server1 'tar -C /somefolder -czvf - anotherfolder' > server1.tgz
ssh user@server2 'tar -C /somefolder -czvf - anotherfolder' > server2.tgz
...
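To repeat this across machines, a hedged sketch that loops over a list of servers (the hostnames here are placeholders):
# Collect one compressed archive per server, saved locally
for host in server1 server2 server3; do
    ssh "user@$host" 'tar -C /somefolder -czf - anotherfolder' > "${host}.tgz"
done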
Assume there are some folders with this structure:
/bench1/1cpu/p_0/image/
/bench1/1cpu/p_0/fl_1/
/bench1/1cpu/p_0/fl_2/
/bench1/1cpu/p_0/fl_3/
/bench1/1cpu/p_0/fl_4/
/bench1/1cpu/p_1/image/
/bench1/1cpu/p_1/fl_1/
/bench1/1cpu/p_1/fl_2/
/bench1/1cpu/p_1/fl_3/
/bench1/1cpu/p_1/fl_4/
/bench1/2cpu/p_0/image/
/bench1/2cpu/p_0/fl_1/
/bench1/2cpu/p_0/fl_2/
/bench1/2cpu/p_0/fl_3/
/bench1/2cpu/p_0/fl_4/
/bench1/2cpu/p_1/image/
/bench1/2cpu/p_1/fl_1/
/bench1/2cpu/p_1/fl_2/
/bench1/2cpu/p_1/fl_3/
/bench1/2cpu/p_1/fl_4/
....
What I want to do is to scp the following folders
/bench1/1cpu/p_0/image/
/bench1/1cpu/p_1/image/
/bench1/2cpu/p_0/image/
/bench1/2cpu/p_1/image/
As you can see, I want to use scp recursively but exclude all folders named "fl_X". It seems that scp has no such option.
UPDATE
scp has no such feature. Instead I used the following command:
rsync -av --exclude 'fl_*' user@server:/my/dir
But it doesn't work. It only transfers the list of folders, something like ls -R!
Although scp supports recursive directory copying with the -r option, it does not support filtering of the files. There are several ways to accomplish your task, but I would probably rely on find, xargs, tar, and ssh instead of scp.
find . -type d -wholename '*bench*/image' \
| xargs tar cf - \
| ssh user@remote tar xf - -C /my/dir
The rsync solution can be made to work, but you are missing some arguments: with only a source and no destination, rsync merely lists the files (hence the ls -R-like output you saw). Also, if you want the same security as scp, you need to do the transfer over ssh. Something like:
rsync -av -e "ssh -l user" --exclude 'fl_*' ./bench* remote:/my/dir
You can set GLOBIGNORE and use the pattern *. Note that the assignment must happen before the command line containing the glob, because the shell expands source/* before a same-line temporary assignment takes effect:
GLOBIGNORE='ignore1:ignore2'
scp -r source/* remoteurl:remoteDir
You may wish to have general rules which you combine or override by using export GLOBIGNORE, but for ad-hoc usage the above will do. The : character is the delimiter for multiple values.
Assuming the simplest option (installing rsync on the remote host) isn't feasible, you can use sshfs to mount the remote locally, and rsync from the mount directory. That way you can use all the options rsync offers, for example --exclude.
Something like this should do:
sshfs user#server: sshfsdir
rsync --recursive --exclude=whatever sshfsdir/path/on/server /where/to/store
Note that the efficiency of rsync (only transferring changes, not everything) doesn't apply here. For that to work, rsync must read every file's contents to see what has changed; but since rsync runs on only one host here, sshfs has to transfer the whole file to it anyway. Excluded files, however, should not be transferred.
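One cleanup step worth adding: when the transfer is done, unmount the sshfs mount (fusermount is the FUSE unmount tool on Linux):
fusermount -u sshfsdir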
If you use a pem file to authenticate, you can use the following command (which excludes files with the something extension):
rsync -Lavz -e "ssh -i <full-path-to-pem> -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --exclude "*.something" --progress <path inside local host> <user>@<host>:<path inside remote host>
The -L means follow symlinks (copy the files they point to, not the links).
Use the full path to your pem file, not a relative one.
Using sshfs is not recommended since it is slow. A combination of find and scp is also a bad idea, since it opens an ssh session per file, which is too expensive.
You can use extended globbing as in the example below:
# Enable extglob
shopt -s extglob
cp -rv !(./excludeme/*.jpg) /var/destination
This one works fine for me, as the directory structure is not important to me:
scp -r USER@HOSTNAME:~/bench1/?cpu/p_?/image/ .
This assumes /bench1 is in the home directory of the current user. Also, change USER and HOSTNAME to the real values.
I have two lines I need to run repeatedly in a for loop:
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
but each time I need to input the password. How can I change the code so that I only have to input it once, or is there a faster way?
You can use the public/private key method with ssh-keygen (https://help.ubuntu.com/community/SSH/OpenSSH/Keys) and then use the script below.
for VARIABLE in dir1 dir2 dir3
do
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
done
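For reference, the one-time key setup might look like this (a hedged sketch, assuming OpenSSH's ssh-keygen and ssh-copy-id are available):
# Generate a key pair (accept the defaults), then install the public
# key on the remote host; this prompts for the password one last time
ssh-keygen -t rsa -b 4096
ssh-copy-id tam@192.168.174.43
After that, the ssh and scp calls in the loop should run without any password prompt.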
Alternative solution:
You can use sshpass:
for VARIABLE in dir1 dir2 dir3
do
sshpass -p '<password>' ssh tam@192.168.174.43 mkdir -p $location
sshpass -p '<password>' scp -r $i tam@192.168.174.43:$location
done
While public/private keys are the easiest option, without needing to change the existing script, there is another option: sshfs. sshfs may not be installed by default.
With this approach, you basically mount the remote file system to a local directory over the ssh protocol. Then you can simply use commands like mkdir/cp etc.
NOTE: These commands run on YOUR system, not on the REMOTE system.
Mounting over ssh is a one-time job which will require your manual intervention. Do this before running the script, e.g. for your case:
mkdir /tmp/tam_192.168.174.43
sshfs tam@192.168.174.43:/ /tmp/tam_192.168.174.43
tam@192.168.174.43's password: <ENTER PASSWORD HERE>
Then, in your script, use simple commands like:
mkdir -p /tmp/tam_192.168.174.43/$location
cp -r $i /tmp/tam_192.168.174.43/$location
And to unmount:
fusermount -u /tmp/tam_192.168.174.43