I've seen, in posts like "rsync - create all missing parent directories?", commands such as:
rsync -aq --rsync-path='mkdir -p /tmp/imaginary/ && rsync' file user@remote:/tmp/imaginary/
I thought - great, let me try that:
$ rsync -aP --remove-source-files --rsync-path="mkdir -p /home/pi/ARCHIVE/2020/01/24 && rsync" a1.test a1.json a1.pdf /home/pi/ARCHIVE/2020/01/24/
sending incremental file list
rsync: mkdir "/home/pi/ARCHIVE/2020/01/24" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(675) [Receiver=3.1.2]
Well, sure, mkdir "/home/pi/ARCHIVE/2020/01/24" failed - but I did NOT issue mkdir, I issued mkdir -p!
So why did rsync ignore it? Is there some other setting I should use? Or can --rsync-path be used STRICTLY for ssh connections (which is not the case here)?
--rsync-path only applies to remote machines:
--rsync-path=PROGRAM specify the rsync to run on remote machine
This is because you don't need to invoke a second instance of rsync when you are doing a local copy. The copy will simply be done by the process you ran.
Since it's technically you who are invoking rsync on the target machine, it's you who should be adding mkdir .. && in front of rsync:
mkdir -p /home/pi/ARCHIVE/2020/01/24 &&
rsync -aP --remove-source-files a1.test a1.json a1.pdf /home/pi/ARCHIVE/2020/01/24/
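For contrast, against an actual remote destination the same --rsync-path trick does take effect (a sketch; the hostname is illustrative):
rsync -aP --remove-source-files --rsync-path='mkdir -p /home/pi/ARCHIVE/2020/01/24 && rsync' a1.test a1.json a1.pdf pi@remotehost:/home/pi/ARCHIVE/2020/01/24/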
Related
I'm trying to delete the contents of a remote directory from a bash script, leaving the folder itself intact, by using ssh like this:
# First attempt
inboxResult=$(ssh -t -t username@host sudo -u rootUser rm -Rf /my/path/here/inbox/*)
# Second attempt
inboxResult=`ssh -t -t username@host sudo -u rootUser rm -Rf /my/path/here/inbox/*`
but it keeps failing silently. I've done my research and it seems like the '*' is being expanded before the command is sent via ssh to the remote host, but I want the opposite. I couldn't find any solution; I've tried more than these two, but they all seem far from what I'm looking for.
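A sketch of one way to defer the expansion to the remote side (assuming the inbox directory may only be readable by rootUser, so the glob must expand under sudo): quote the command locally so it reaches the remote host intact, and let a shell running under sudo expand it:
inboxResult=$(ssh -t -t username@host "sudo -u rootUser sh -c 'rm -rf /my/path/here/inbox/*'")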
I've written a shell script to scp, ssh, delete a directory, unzip, and remove the zip file
#!/bin/bash
tar -czf zipfile.tar.gz ./* .??*
scp zipfile.tar.gz root@some.ip.address:/var/www/html/wp-content/themes
rm zipfile.tar.gz
ssh root@some.ip.address << 'ENDSSH'
cd /some/directory
rm -rf zipfile
mkdir zipfile
tar xf zipfile.tar.gz -C zipfile
rm zipfile.tar.gz
ENDSSH
I am noticing that the files are successfully transferred and unzipped. The zip file is also successfully removed from the server.
However, I notice that I'm receiving these messages in the terminal
zipfile.tar.gz 100% 224KB ...
Pseudo-terminal will not be allocated because stdin is not a terminal.
...
Welcome to Ubuntu 18.04.3 LTS...
...
0 packages can be updated.
0 updates are security updates.
mesg: ttyname failed: Inappropriate ioctl for device
Running only the part of the script before the second (ENDSSH) block produces none of those messages and executes successfully.
Is the ENDSSH causing the issue?
You can write it like this:
ssh -tt root@some.ip.address << ENDSSH
your code
exit
ENDSSH
Try it.
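The -tt forces pseudo-terminal allocation even though stdin is a heredoc rather than a terminal; the mesg complaint specifically is what -tt addresses, since mesg wants a tty. If you don't need a terminal at all, explicitly disabling allocation with -T silences the pseudo-terminal warning as well (a sketch based on the script above):
ssh -T root@some.ip.address << 'ENDSSH'
cd /some/directory
rm -rf zipfile
mkdir zipfile
tar xf zipfile.tar.gz -C zipfile
rm zipfile.tar.gz
ENDSSH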
This question already has answers here: Multiple commands on remote machine using shell script (3 answers). Closed 6 years ago.
I've only got a little question for you.
I have made a little shell script that allows me to connect to a server and gather certain files and compress them to another location on another server, which works fine.
It is something in the vein of:
#!/bin/bash
ssh -T user@server1
mkdir /tmp/logs
cd /tmp/logs
tar -czvf ./log1.tgz /somefolder/docs
tar -czvf ./log2.tgz /somefolder/images
tar -czvf ./log3.tgz /somefolder/video
cd ..
tar -czvf logs_all.tgz /tmp/logs
What I would really like to do is:
Login with the root password when connect via ssh
Run the commands
Logout
Login to next server
Repeat until all logs have been compiled.
Also, it is not essential, but if I could display the progress (as a bar, perhaps) that would be cool!!
If anyone can help that would be awesome.
I am in between n00b and novice so please be gentle with me!!
ssh can take a command as argument to run on the remote machine:
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder"
This will perform the tar command on the remote machine, writing tar's output to stdout, which the ssh command passes back to the local machine. So you can write it locally somewhere (there's no need for that /tmp/logs/ on the remote machine):
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder" > /path/on/local/machine/log1.tgz
If you just want to collect them on the remote server (no wish to transfer them to the local machine), just do the straightforward version:
ssh -T user@server1 "mkdir /tmp/logs/"
ssh -T user@server1 "tar -cvzf /tmp/logs/log1.tgz /somefolder/anotherfolder"
ssh -T user@server1 "tar -cvzf /tmp/logs/log2.tgz /somefolder/anotherfolder"
…
ssh -T user@server1 "tar -czvf /tmp/logs_all.tgz /tmp/logs"
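If you then want that bundle on the local machine as well, one more copy fetches it (a sketch):
scp user@server1:/tmp/logs_all.tgz /path/on/local/machine/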
You could send a tar command that writes a compressed archive to standard out and save it locally:
ssh user@server1 'tar -C /somefolder -czvf - anotherfolder' > server1.tgz
ssh user@server2 'tar -C /somefolder -czvf - anotherfolder' > server2.tgz
...
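To cover the "log in to the next server and repeat" part, the streaming version wraps naturally in a loop (a sketch; the hostnames are placeholders, and key-based login is assumed so no password prompt interrupts the loop):
#!/bin/bash
for host in server1 server2 server3; do
    # stream a compressed tarball from each server into a local per-host file
    ssh "user@$host" 'tar -C /somefolder -czf - anotherfolder' > "$host.tgz"
done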
I am trying to copy several files from a remote server to the local drive in Bash using scp.
Here's the part of the code
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that EVERY time it copies a file, it asks for the server's password, which is the same each time. I want it to ask only once and then copy all my files.
And please do not suggest rsync or setting up key authentication, since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass:
sshpass -p 'password' scp ...
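Applied to the commands in the question, that would look something like this (a sketch; assumes sshpass is installed):
sshpass -p 'password' scp -q $USR@$IP:/home/file1.txt $PWD
sshpass -p 'password' scp -q $USR@$IP:/home/file2.txt $PWD
sshpass -p 'password' scp -q $USR@$IP:/root/file3.txt $PWD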
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
# the trailing \r simulates pressing Enter
send "ur_password\r"
# wait for scp to finish before the script exits
expect eof
A disadvantage is that your password is now in plaintext.
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is for recursively copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system:
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
Then fetch the tarball with a single scp, run from the local machine:
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
Well, in this particular case, you can write...
scp -q $USR@$IP:/home/file[1-2].txt $PWD
(file3.txt lives under /root rather than /home, so it still needs its own scp.)
I have two lines that need to be repeated in a for loop:
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
but each time they run, a password needs to be entered. How can I change the code so that I only need to enter it once, or make this faster?
You can use public/private key authentication, generating the key pair with ssh-keygen (https://help.ubuntu.com/community/SSH/OpenSSH/Keys).
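The one-time setup might look roughly like this (a sketch; run on the local machine, accepting the defaults):
ssh-keygen -t rsa
ssh-copy-id tam@192.168.174.43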
And then use the below script.
for i in dir1 dir2 dir3
do
ssh tam@192.168.174.43 "mkdir -p $location"
scp -r "$i" tam@192.168.174.43:"$location"
done
Alternative solution:
You can use sshpass, prefixing each command:
for i in dir1 dir2 dir3
do
sshpass -p '<password>' ssh tam@192.168.174.43 "mkdir -p $location"
sshpass -p '<password>' scp -r "$i" tam@192.168.174.43:"$location"
done
While public/private keys are the easiest option, requiring no change to the existing script, there is another option: sshfs. It may not be installed by default.
With this approach, you basically mount the remote file system onto a local directory over the ssh protocol. Then you can simply use ordinary commands like mkdir / cp etc.
NOTE: these commands run on YOUR system, not on the REMOTE system.
Mounting over ssh is a one-time job which will require your manual intervention. Do this before running the script, e.g. for your case:
mkdir /tmp/tam_192.168.174.43
sshfs tam@192.168.174.43:/ /tmp/tam_192.168.174.43
tam@192.168.174.43's password: <ENTER PASSWORD HERE>
& then, in your script, use simple commands like:
mkdir -p /tmp/tam_192.168.174.43/$location
cp -r $i /tmp/tam_192.168.174.43/$location
& to unmount:
fusermount -u /tmp/tam_192.168.174.43
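With the mount in place, the loop from the question needs no ssh or scp at all (a sketch reusing the question's $i and $location):
for i in dir1 dir2 dir3
do
mkdir -p /tmp/tam_192.168.174.43/$location
cp -r $i /tmp/tam_192.168.174.43/$location
done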