I would like to build an scp command with a variable for the file destination, but the variable contains a space.
~ $ target=C:/Users/exemple/a folder with space/data
~ $ scp -r -p file.txt $USER@$IP_TARGET:${target}
space/data: No such file or directory
How can I do this?
I succeeded with this:
target='"C:/Users/exemple/a folder with space/data"'
Or this:
target=\"C:/Users/exemple/a folder with space/data\"
and use
scp -r -p file.txt $USER@$IP_TARGET:"$target"
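Why the extra quotes are needed: scp hands the remote path to a shell on the remote side, so the space has to survive two rounds of expansion. A minimal sketch of the same idea, quoting once for each shell (variable names reused from the question):
target='C:/Users/exemple/a folder with space/data'
# The outer double quotes are consumed locally; the escaped inner quotes reach the remote shell
scp -r -p file.txt "$USER@$IP_TARGET:\"$target\""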
So in the terminal I access the remote host with ssh -p, then once I'm in I have to cd /directory1/directory2/. Then I want to find the latest directory, which I do using ls -td -- */ | head -n 1, cd into it, and run tail -n 1 file1.
All these commands work in the terminal, but I want to automate it so I can just type ./tailer.sh and have that output printed.
Any ideas would be appreciated.
The shell script tailer.sh can look something like this
#!/bin/bash
ssh -p <PORT> <HOST_NAME> '( cd /directory1/directory2/ && LATEST_DIR=$(ls -td -- */ | head -n 1) && cd "${LATEST_DIR}" && tail -n 1 file1 )'
Then give execute permissions to tailer.sh using chmod u+x tailer.sh
Run the script using ./tailer.sh
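If the one-liner becomes hard to read, the same commands can also be fed to the remote shell as a quoted heredoc; a sketch, with <PORT> and <HOST_NAME> still placeholders:
#!/bin/bash
ssh -p <PORT> <HOST_NAME> bash <<'EOF'
cd /directory1/directory2/ || exit 1
# Newest directory first; -- guards against names beginning with a dash
latest_dir=$(ls -td -- */ | head -n 1)
cd "$latest_dir" || exit 1
tail -n 1 file1
EOF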
I am trying to create a shell script that will check for a new file and then copy it to a Docker container. The code I have so far is...
#!/bin/sh
source="/var/www/html/"
dest="dev_ubuntu:/var/www/html/"
inotifywait -m "/var/www/html" -e create -e moved_to |
while read file; do
sudo docker cp /var/www/html/$file dev_ubuntu:/var/www/html
done
But this code gives the following error:
Setting up watches.
Watches established.
"docker cp" requires exactly 2 argument(s).
See 'docker cp --help'.
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
What am I doing wrong?
Do you have spaces in your file names? Use double quotes so the filenames are not split on whitespace:
echo "$file"
sudo docker cp "$file" dev_ubuntu:"$file"
I've also echoed the file name to see what is happening.
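Also note that with -m, inotifywait prints one line per event in the form watched-dir EVENT filename, so read file captures the whole line rather than just the name. A sketch that splits the fields (assuming inotifywait's default output format):
#!/bin/sh
inotifywait -m /var/www/html -e create -e moved_to |
while read -r dir event file; do
echo "$file" # the bare filename reported for the event
sudo docker cp "/var/www/html/$file" dev_ubuntu:/var/www/html/
done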
I sent a batch of files to a remote server via SFTP. If it were a local directory I could do something like ls -l | wc -l to get the total number of files. However, with SFTP I get the error Can't ls: "/|" not found.
echo ls -l | sftp server | grep -v '^sftp' | wc -l
If you want to count the files in a particular directory, put its path after the ls -l command, like:
echo ls -l /my/directory/ | sftp server | grep -v '^sftp' | wc -l
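If you only want regular files (skipping directories), counting the lines that begin with a dash in the long listing is a common trick; a sketch:
echo ls -l /my/directory/ | sftp server | grep -c '^-'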
Use a batch file to run commands remotely and get the data back to work with in bash:
Make your batch file called mybatch.txt with these sftp commands:
cd your_directory/your_sub_directory
ls -l
Save it out and give it 777 permissions.
chmod 777 mybatch.txt
Then run it like this:
sftp your_username@your_server.com < mybatch.txt
It will prompt you for the password, enter it.
Then you get the output dumped to bash terminal. So you can pipe that to wc -l like this:
sftp your_user@your_server.com < mybatch.txt | wc -l
Connecting to your_server.com...
your_user@your_server.com's password:
8842
The 8842 is the number of lines returned by ls -l in that directory.
Instead of piping it to wc, you could dump it to a file for parsing to determine how many files/folders.
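That last idea might look like this (a sketch, reusing mybatch.txt from above):
# Capture the listing once, then count only regular files (lines starting with "-")
sftp your_user@your_server.com < mybatch.txt > listing.txt
grep -c '^-' listing.txt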
I would use an sftp batch file.
Create a file called batchfile and enter "ls -l" in it.
Then run
sftp -b batchfile user@sftpHost | wc -l
Note that -b runs sftp in batch mode, so it needs non-interactive authentication such as an ssh key.
The easiest way I have found is to use the lftp client, which supports a shell-like syntax for piping the output of remote ftp commands to local processes.
For example using the pipe character:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l | wc -l'
This will make lftp spawn a local wc -l and give it the output of the remote ls -l ftp command on its stdin.
Shell redirection syntax is also supported and will write directly to local files:
lftp -c 'connect sftp://user_name:password@host_name/directory; ls -l >list.txt'
Thus a file named list.txt containing the remote file listing will be created in the current folder on the local machine. Use >> to append instead.
Works perfectly for me.
I am trying to copy several files from a remote server into local drive in Bash using scp.
Here's the part of the code
scp -r -q $USR#$IP:/home/file1.txt $PWD
scp -r -q $USR#$IP:/home/file2.txt $PWD
scp -r -q $USR#$IP:/root/file3.txt $PWD
However, the problem is that EVERY time it copies a file, it asks for the server's password, which is the same each time. I want it to ask only once and then copy all my files.
And please do not suggest rsync or setting up key authentication, since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass:
sshpass -p 'password' scp ...
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "ur_password\r"
expect eof
A disadvantage is that your password is now stored in plaintext.
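With sshpass, the -e flag at least keeps the password out of the command line, where any user on the machine could read it from ps; a sketch using the question's variables:
# sshpass -e reads the password from the SSHPASS environment variable
export SSHPASS='your_password'
sshpass -e scp -r -q $USR@$IP:/home/file1.txt $PWD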
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is recursive, for copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system:
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
Then, back on the local machine, fetch the tarball:
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
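A variant sketch that avoids the temporary tarball by streaming tar over a single ssh connection (shown for the /home files only, since /root/file3.txt will likely need different permissions):
# Remote tar writes to stdout, local tar unpacks from stdin; one password prompt
ssh $USR@$IP 'tar -cf - -C /home file1.txt file2.txt' | tar -xf -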
Well, in this particular case, you can write...
scp -q $USR@$IP:/home/file[12].txt $PWD
(file3.txt lives under /root, so it still needs a separate copy.)
I have two lines that I need to run repeatedly in a for loop:
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
but each time they ask for the password. How can I change the code so that I only need to enter it once, or make this faster?
You can use public/private key authentication, generating the keys with ssh-keygen (https://help.ubuntu.com/community/SSH/OpenSSH/Keys).
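The one-time setup might look like this (a sketch; ssh-copy-id appends your public key to the remote authorized_keys):
ssh-keygen -t ed25519
ssh-copy-id tam@192.168.174.43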
And then use the below script.
for i in dir1 dir2 dir3
do
ssh tam@192.168.174.43 mkdir -p "$location"
scp -r "$i" tam@192.168.174.43:"$location"
done
Alternative solution:
You can use sshpass, prefixing each command with it so it supplies the password for you:
for i in dir1 dir2 dir3
do
sshpass -p '<password>' ssh tam@192.168.174.43 mkdir -p "$location"
sshpass -p '<password>' scp -r "$i" tam@192.168.174.43:"$location"
done
While public/private keys are the easiest option, requiring no change to the existing script, there is another option: sshfs. It may not be installed by default.
With this approach you basically mount the remote file system to a local directory over the ssh protocol. Then you can simply use commands like mkdir / cp etc.
NOTE: These commands run on YOUR system, not on the REMOTE system.
Mounting over ssh is a one-time job that requires your manual intervention. Do this before running the script, e.g. for your case:
mkdir /tmp/tam_192.168.174.43
sshfs tam@192.168.174.43:/ /tmp/tam_192.168.174.43
tam@192.168.174.43's password: <ENTER PASSWORD HERE>
& then, in your script, use simple commands like:
mkdir -p /tmp/tam_192.168.174.43/$location
cp -r $i /tmp/tam_192.168.174.43/$location
& to unmount:
fusermount -u /tmp/tam_192.168.174.43
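Putting it together, the whole loop might look like this sketch (reusing $location and the dir1 dir2 dir3 example from above):
#!/bin/bash
mnt=/tmp/tam_192.168.174.43
mkdir -p "$mnt"
sshfs tam@192.168.174.43:/ "$mnt" # prompts for the password once
for i in dir1 dir2 dir3
do
mkdir -p "$mnt/$location"
cp -r "$i" "$mnt/$location"
done
fusermount -u "$mnt"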