How do I copy a folder from remote to local using scp? - shell

How do I copy a folder from remote to local host using scp?
I use ssh to log in to my server.
Then, I would like to copy the remote folder foo to local /home/user/Desktop.
How do I achieve this?

scp -r user@your.server.example.com:/path/to/foo /home/user/Desktop/
By not including the trailing '/' at the end of foo, you will copy the directory itself (including contents), rather than only the contents of the directory.
From man scp (see the online manual):
-r Recursively copy entire directories
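For instance (hypothetical paths), you can also give the copy a different local name:
scp -r user@your.server.example.com:/path/to/foo /home/user/Desktop/foo-backup
If /home/user/Desktop/foo-backup does not exist yet, scp creates it and puts foo's contents inside it, exactly like cp -r.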

To use the full power of scp you need to go through the following steps:
Set up public key authentication (a minimal setup sketch appears at the end of this answer)
Create SSH aliases
Then, for example, if you have this ~/.ssh/config:
Host test
User testuser
HostName test-site.example
Port 22022
Host prod
User produser
HostName production-site.example
Port 22022
you'll save yourself the password entry and simplify the scp syntax, like this:
scp -r prod:/path/foo /home/user/Desktop # copy to local
scp -r prod:/path/foo test:/tmp # copy from remote prod to remote test
Moreover, you will be able to use remote path completion:
scp test:/var/log/ # press tab twice
Display all 151 possibilities? (y or n)
To enable remote bash completion you need a bash shell on both the <source> and <target> hosts, and a properly working bash-completion package. For more information see these related questions:
How to enable autocompletion for remote paths when using scp?
SCP filename tab completion
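For the first step, a minimal sketch of the public-key setup, assuming OpenSSH's stock tools and the prod alias defined above:
ssh-keygen -t ed25519   # generate a key pair once, if you don't already have one
ssh-copy-id prod        # install your public key on the server; uses the alias (and its port) from ~/.ssh/config
After this, scp and ssh to that host stop prompting for a password.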

To copy all from Local Location to Remote Location (Upload)
scp -r /path/from/local username@hostname:/path/to/remote
To copy all from Remote Location to Local Location (Download)
scp -r username@hostname:/path/from/remote /path/to/local
Custom port (where xxxx is the custom port number)
scp -r -P xxxx username@hostname:/path/from/remote /path/to/local
Copy to the current directory from Remote to Local
scp -r username@hostname:/path/from/remote .
Help:
-r Recursively copy all directories and files
Always use the full path (starting from /); get the full path with pwd
scp will overwrite existing files without warning
hostname can be a hostname or an IP address
If a custom port is needed (other than the default port 22), use -P PortNumber
. (dot) means the current working directory: download from the server into the directory you are in
Note: Sometimes a custom port will not work because the firewall does not allow it, so make sure the custom port is allowed in the firewall for incoming and outgoing connections
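Before blaming scp when a custom port fails, you can probe whether the port is reachable at all. A quick sketch, assuming netcat is installed and using a hypothetical port 2222:
nc -zv hostname 2222
A "succeeded" or "open" message means the firewall passes the port; a timeout suggests it is blocked.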

What I always use is:
scp -r username@IP:/path/to/server/source/folder/ .
. (dot) means the current folder: copy from the server into the directory you are in
IP can be an IP address like 125.55.41.31 or a hostname like ns1.mysite.example

It's better to first compress the directory on the remote server:
tar czfP backup.tar.gz /path/to/catalog
Then download it from the remote host:
scp user@your.server.example.com:/path/to/backup.tar.gz .
Finally, extract the files:
tar -xzvf backup.tar.gz
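If you'd rather not leave a temporary archive on the server, the same idea works as one streaming pipeline. A sketch reusing the paths above (GNU tar strips the leading / on archiving, so the files land under ./path/to/catalog):
ssh user@your.server.example.com 'tar czf - /path/to/catalog' | tar xzvf -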

A typical scenario:
scp -r -P port username@ip:/path-to-folder .
explained with a sample:
scp -r -P 27000 abc@10.70.12.12:/tmp/hotel_dump .
where:
port = 27000
username = "abc", the remote server username
path-to-folder = /tmp/hotel_dump
. = current local directory

And if you have a huge pile of files to download from the remote location, and you don't care much about security, try changing the scp cipher from the old default (Triple-DES) to something cheaper like blowfish.
This can reduce file copying time drastically.
scp -c blowfish -r user@your.server.example.com:/path/to/foo /home/user/Desktop/
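Note that newer OpenSSH releases removed the blowfish cipher entirely, so on a modern system first list what your client supports and pick a fast cipher from that list, for example:
ssh -Q cipher    # list ciphers supported by your client
scp -c aes128-ctr -r user@your.server.example.com:/path/to/foo /home/user/Desktop/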

Go to Files on your Unity toolbar.
Press Ctrl + L and enter sftp://here_goes_your_user_name@192.168.10.123
192.168.10.123 is the host that you want to connect to.

In case you run into "Too many authentication failures", specify the exact SSH key you have added to your server's SSH daemon:
scp -r -i /path/to/local/key user@remote.tld:/path/to/folder /your/local/target/dir
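That error usually means your SSH agent offered every loaded key before the right one. Restricting the connection to the single given key often fixes it; a sketch with the same paths:
scp -r -o IdentitiesOnly=yes -i /path/to/local/key user@remote.tld:/path/to/folder /your/local/target/dir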

The question was how to copy a folder from remote to local with scp command.
$ scp -r userRemote@remoteIp:/path/remoteDir /path/localDir
But here is a better way to do it, with sftp. SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer, and file management over any reliable data stream (Wikipedia).
$ sftp user_remote@remote_ip
sftp> cd /path/to/remoteDir
sftp> get -r remoteDir
Fetching /path/to/remoteDir to localDir 100% 398 0.4KB/s 00:00
For help with sftp commands, just type help or ?.
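The same transfer can also be scripted non-interactively with sftp's batch mode (a sketch; -b needs key-based authentication, since batch mode disables password prompts):
sftp -b - user_remote@remote_ip <<'EOF'
cd /path/to
get -r remoteDir
EOF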

I don't know why, but I had to put the local folder (.) before the remote server directive to make it work (note this direction is an upload, from local to remote):
scp -r . root@888.888.888.888:/usr/share/nginx/www/example.org/

For Windows, we used this command (pscp ships with PuTTY):
pscp -r -P 22 username@IP:/path/to/Downloads ./

The premise of the question is incorrect. The idea is, once logged in over ssh, how to move files from the logged-in machine back to the client that logged in. However, scp is not aware of, nor can it reuse, that ssh connection; it makes its own connections. So the simple solution is to open a new terminal window on the local workstation and run scp there, transferring files from the remote server to the local machine. E.g., scp -i key user@remote:/remote-dir/remote-file /local-dir/local-file

Related

Secure copy over two IPs on the same network to the local machine [duplicate]

I wonder if there is a way to scp the file from the remote2 host directly from my local machine, going through the remote1 host.
The network only allows connections to the remote2 host from the remote1 host. Also, neither the remote1 host nor the remote2 host can scp to my local machine.
Is there something like:
scp user1@remote1:user2@remote2:file .
First window: ssh remote1, then scp remote2:file .
Second shell: scp remote1:file .
First window: rm file; logout
I could write a script to do all these steps, but if there is a direct way, I would rather use it.
Thanks.
EDIT: I am thinking of something like opening SSH tunnels, but I'm confused about what value to put where.
At the moment, to access remote1, i have the following in $HOME/.ssh/config on my local machine.
Host remote1
User user1
Hostname localhost
Port 45678
Once on remote1, to access remote2, it's the standard local DNS and port 22. What should I put on remote1 and/or change on localhost?
I don't know of any way to copy the file directly in a single command, but if you can accept running an SSH instance in the background just to keep a port-forwarding tunnel open, then you can copy the file in one command.
Like this:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
Note that you connect as user2#localhost in the actual scp command, because it is on port 1234 on localhost that the first ssh instance is listening to forward connections to remote2. Note also that you don't need to run the first command for every subsequent file copy; you can simply leave it running.
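If you'd rather not keep an interactive window open for the tunnel, ssh can hold it in the background (same hosts and ports as above):
ssh -f -N -L 1234:remote2:22 -p 45678 user1@remote1   # -f backgrounds after auth, -N runs no remote command
scp -P 1234 user2@localhost:file .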
Double ssh
Even in your complex case, you can handle file transfer using a single command line, simply with ssh ;-)
And this is useful if remote1 cannot connect to localhost:
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
tar
But you lose the file properties (ownership, permissions...).
However, tar is your friend to keep these file properties:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar c file"' | tar x
You can also compress to reduce network bandwidth:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
And tar also lets you transfer a directory recursively through basic ssh:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj
ionice
If the file is huge and you do not want to disturb other important network applications, you will miss the network throughput limiting that scp and rsync provide (e.g. scp -l 1024 user@remote:file uses no more than 1 Mbit/s).
But a workaround is to use ionice while keeping a single command line:
ionice -c2 -n7 ssh u1@remote1 'ionice -c2 -n7 ssh u2@remote2 "cat file"' > file
Note: ionice may not be available on old distributions.
This will do the trick:
scp -o 'ProxyCommand ssh user@remote1 nc %h %p' \
user@remote2:path/to/file .
(Host is a config-file keyword, not a valid command-line -o option, so only ProxyCommand is passed here.) To scp files from the host remote2 without retyping that, add the Host and ProxyCommand lines to your ~/.ssh/config file (see also this answer on Super User). Then you can run:
scp user@remote2:path/to/file .
from your local machine without having to think about remote1.
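For reference, the resulting ~/.ssh/config stanza might look like this (same placeholder names as above):
Host remote2
User user
ProxyCommand ssh user@remote1 nc %h %p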
With OpenSSH version 7.3 and up it is easy: use the ProxyJump option in the config file.
# Add to ~/.ssh/config
Host bastion
Hostname bastion.client.com
User userForBastion
IdentityFile ~/.ssh/bastion.pem
Host appMachine
Hostname appMachine.internal.com
User bastion
ProxyJump bastion # openssh 7.3 version new feature ProxyJump
IdentityFile ~/.ssh/appMachine.pem # no need to copy the pem file to the bastion host
Commands to run to log in or copy:
ssh appMachine # no need to specify any tunnel
scp helloWorld.txt appMachine:. # copy without an intermediate jumphost/bastion copy
Of course you can specify the bastion jump host with the -J option to ssh, if it is not configured in the config file.
Note: scp did not seem to support the -J flag at the time of writing (I could not find it in the man pages; newer OpenSSH releases have since added it). The scp above works via the config-file setting.
There is an option in scp, added recently, for exactly this job, and it is very convenient: -3.
TL;DR For a current host that already has authentication set up in its ssh config files, just do:
scp -3 remote1:file remote2:file
Your scp must be a recent version.
All the other techniques mentioned require you to set up authentication from remote1 to remote2 or vice versa, which is not always a good idea.
The -3 argument means you want to move files between two remote hosts using the current host as an intermediary, and this host does the authentication to both remote hosts, so they don't need access to each other.
You just have to set up authentication in the ssh config files, which is fairly easy and well documented, and then run the command in the TL;DR.
The source for this answer is https://superuser.com/a/686527/713762
This configuration works nicely for me:
Host jump
User username
Hostname jumphost.yourorg.intranet
Host production
User username
Hostname production.yourorg.intranet
ProxyCommand ssh -q -W %h:%p jump
Then the command
scp myfile production:~
copies myfile to the production machine.
A simpler way:
scp -o 'ProxyJump your.jump.host' /local/dir/myfile.txt remote.internal.host:/remote/dir

Copy file with rsync or scp over multiple level or hops of SSH

I need to transfer around 4.2 GB of files from my local computer to a server B. However to ssh into server B, I need to ssh into server A.
Currently I'm copying files from my local computer to server A and then from server A to server B.
So the flow goes like this:
rsync -avz --del ~/Desktop/abc/ <my-user-name>@<server-A>:~/abc
rsync -avz --del ~/Desktop/abc/ <my-user-name>@<server-B>:~/abc
This is slow and copies 4.2 GB of data twice instead of once!
Can I transfer files with rsync from my local computer directly to server B?
You can always use ssh with a ProxyCommand, which lets you transfer files transparently, using this config (~/.ssh/config):
Host <server-A>
User <user-A>
Host <server-B>
User <user-B>
ProxyCommand ssh <server-A> -W %h:%p
You can call your rsync:
rsync -avz --del ~/Desktop/abc/ <server-B>:~/abc
The data will only be "routed" through the middle host.
What you want is to use port forwarding to forward the ssh port (generally port 22) from server B to an alternate port on server A, so that when you call rsync -e "ssh -p altport" serverA:/sourcedir /destdir, you are actually talking to serverB.
There are many good howtos available on StackExchange and other sites. For example:
How to forward a port from one machine to another?
or
How To Forward Ports through a Linux Gateway with Iptables
will get you started. Using port-forwarding, you are essentially using serverA as a pass-through host so you will only have to transfer your 4.2G once.
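A concrete sketch of that approach, using a hypothetical local port 2222 and the placeholders from the question:
# Forward local port 2222 through server A to server B's ssh port, in the background
ssh -f -N -L 2222:<server-B>:22 <my-user-name>@<server-A>
# Then rsync through the tunnel; the 4.2 GB crosses the network only once
rsync -avz --del -e "ssh -p 2222" ~/Desktop/abc/ <my-user-name>@localhost:~/abc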
Yes, you can copy files (and even folders) directly, without making any intermediate copies on the contact/login server, which is by default the machine known to the outside world, or contacted to gain access to a specific local network.
Below is a simple demonstration using scp without any unnecessary complications. On the local machine, simply do the following:
$ scp -r -o ProxyCommand="ssh -W %h:%p your_username@contact-server.de" your_username@machine_name:/file/path/on/this/machine ~/destination/path/to/save/the/copied/folder
The -r option instructs scp to copy the contents of the entire folder.
your_username need not be the same on both machines.
You'll be asked for your password on both machines for authentication.
The above command assumes that the typical way to access the machine named "machine_name" is via the contact server.
Note:
The above command also works for transferring data from a source remote machine (e.g. s) to a target remote machine (say t). In such a scenario, first ssh to the source remote machine (s) and navigate to the path where the data resides. After that you can simply think of/treat that remote machine as a local/source machine and then simply use the same scp command listed above for copying folders.
For copying individual files, just remove the -r option and provide the path to the specific file that you want to copy.
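On OpenSSH 7.3 or later the same hop can be written with ProxyJump instead of ProxyCommand (same hypothetical host and path names as the command above):
scp -r -o ProxyJump=your_username@contact-server.de your_username@machine_name:/file/path/on/this/machine ~/destination/path/to/save/the/copied/folder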

Using scp to copy a file to Amazon EC2 instance?

I am trying to use my Mac Terminal to scp a file from Downloads (phpMyAdmin I downloaded online) to my Amazon EC2 instance.
The command I used was:
scp -i myAmazonKey.pem phpMyAdmin-3.4.5-all-languages.tar.gz hk22@mec2-50-17-16-67.compute-1.amazonaws.com:~/.
The error I got:
Warning: Identity file myAmazonKey.pem not accessible: No such file or directory.
Permission denied (publickey).
lost connection
Both my myAmazonkey.pem and phpMyAdmin-3.4.5-all-languages.tar.gz are in Downloads, so then I tried
scp -i /Users/Hello_Kitty22/Downloads/myAmazonKey.pem /Users/Hello_Kitty22/Downloads/phpMyAdmin-3.4.5-all-languages.tar.gz hk22@mec2-50-17-16-67.compute-1.amazonaws.com:~/.
and the error I got:
Warning: Identity file /User/Hello_Kitty22/Downloads/myAmazonkey.pem not accessible: No such file or directory.
Permission denied (publickey).
lost connection
Can anyone please tell me how to fix my problem?
P.S. There is a similar post, scp (secure copy) to ec2 instance without password, but it doesn't answer my question.
Try specifying the user to be ec2-user, e.g.
scp -i myAmazonKey.pem phpMyAdmin-3.4.5-all-languages.tar.gz ec2-user@mec2-50-17-16-67.compute-1.amazonaws.com:~/.
See Connecting to Linux/UNIX Instances Using SSH.
The second path is your target destination; don't use a server name there. In other words, you don't need to mention the machine name of the machine you're currently on.
scp -i /path/to/your/key.pem -r /copy/from/path user@server:/copy/to/path
-r if it's a directory.
Your private key must not be publicly viewable for SSH to work. Use this command if needed:
chmod 400 yourKeyFile.pem
You should be on your local machine when you run the above scp command.
On your local machine try:
scp -i ~/Downloads/myAmazonKey.pem ~/Downloads/phpMyAdmin-3.4.5-all-languages.tar.gz hk22@mec2-50-17-16-67.compute-1.amazonaws.com:~/.
Here are the details of what works for an EC2 instance:
scp -i /path/to/whatever.pem /users/me/path-to-file ec2-user@ec2-55-55-555-555.compute-1.amazonaws.com:~
A few notes for beginners:
Note the spaces between the three parameters given after the -i.
scp stands for secure copy. Knowing the words makes it easier to remember the command.
-i dictates that you give the .pem file as the next parameter. If there is no -i, then you do not need a .pem.
Note the :~ at the end of the destination for the EC2 instance.
I had exactly the same problem; my solution was:
scp -i /path/pem -r /path/file/ ec2-user@<public-AWS-DNS-name>: (leave the remote path after the colon blank to land in the home directory)
Once you've done this part, ssh into the server and mv the file to the desired location.
This just worked for me. I used a combination of two other answers to this question.
scp -i /Users/me/documents/myKP.pem -r /Users/me/desktop/testDir \
ec2-user@ec2-11-111-11-11.compute-1.amazonaws.com:/home/ec2-user/remoteDir
The "ec2-user@ec2-11-111-11-11.compute-1.amazonaws.com" part is copied and pasted from your EC2 instance's public DNS.
Send file from Local to Server:
scp -i .ssh/awsinstance.pem my_local_file ubuntu@XX.XXX.XXX.XXX:/home/ubuntu
Download file from Server to Local:
scp -i .ssh/awsinstance.pem ubuntu@XX.XXX.XXX.XXX:/home/ubuntu/server_file .
scp -i ~/path/to/pem/file.pem -r /path/of/local/localfile user@hostname:/path/of/server/serverdirectory
(use -r when copying a directory)
The below scp format works for me:
scp -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com:~/SampleFile.txt ~/SampleFile2.txt
SampleFile.txt: the path is relative to the remote user's home directory (in my case /home/ubuntu); the file I wanted to download was actually at /var/www
SampleFile2.txt: the path is relative to your local machine's home directory (in my case /home/MyPCUserName)
So I had to write the below command:
scp -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com:~/../../var/www/Filename.zip ~/Downloads
Public DNS
scp -i /path/my-key-pair.pem /path/my-file.txt ec2-user@my-instance-public-dns-name:path/
(IPv6)
scp -i /path/my-key-pair.pem /path/my-file.txt ec2-user@\[my-instance-IPv6-address\]:path/
scp commands
Send a file from local to the remote server:
sudo scp -i ../Downloads/new_bb_key.pem ./dump.zip ubuntu@13.127.124.129:~/.
Send a file from the remote server to local:
sudo scp -i ~/Downloads/new_bb_key.pem ubuntu@13.127.124.129:/home/ubuntu/LatestDBdump.zip Downloads/
Try this command if your instance is using Ubuntu (note the login user is then ubuntu, not ec2-user):
scp -i myAmazonKey.pem phpMyAdmin-3.4.5-all-languages.tar.gz ubuntu@mec2-50-17-16-67.compute-1.amazonaws.com:~/.
You can get more info about your instance here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html
Copying files from a local machine to an AWS EC2 Linux instance with scp comes down to the points below.
To correct this particular issue with using SCP:
You need to specify the correct Linux user. From Amazon:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For Centos, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don't work, check with your AMI provider.
Your private key must not be publicly visible. Run the following command so that only the root user can read the file.
chmod 400 /path/to/yourKeyFile.pem
Check the permissions on the .pem file... OpenSSH usually doesn't like world-readable private keys, and will fail (IIRC, scp doesn't do a great job of surfacing this feedback to the user).
Can you simply ssh with that key to your AWS host?
First you should change the mode of the .pem file from read-write to read-only. This can be done with a single command in the terminal: sudo chmod 400 your_key.pem
I tried all the suggestions mentioned above and nothing worked. I terminated the current instance, launched another one, and repeated the exact same process. This time there were no problems. Sometimes it might be the remote AMI's fault.
I would use:
scp -i "path to .pem file" "file to be copied from local machine" username@amazoninstance:"destination folder on remote machine"

How to upload files and folders to AWS EC2 instance?

I use SSH to connect to my Ubuntu instance. With SSH I can administer files and folders on the instance, but how do I upload files and folders from my local machine to the instance?
Is it possible to do this right from an SSH session, without using SFTP clients?
Just to add a bit more detail to the scp command (included in macOS and most Linux/Unix systems):
scp -i myssh.pem local_file username@200.200.200.200:/home/username
Obviously, replace the pem file with the one used for SSH access, and replace "username" and "200.200.200.200" with valid values for your setup.
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME][SERVER IP]
where [SERVER NAME] is the login user name, e.g. ubuntu or ec2-user.
This will upload the cat.jpg file to the server's /tmp directory.
As mentioned already, I've used WinSCP, which logs me in as "ec2-user"; then make sure to adjust that user's permissions via SSH. Example:
chown -R ec2-user /path/to/files
(Authenticate as the root user first.)
Whatever folders or files you need to edit via WinSCP, allow permissions on them (otherwise you will get a permission-denied error when trying to upload/edit files in WinSCP).
You cannot copy files using ssh itself; use scp or sftp instead:
scp if you are on Linux, or WinSCP if you are on Windows.
You can use this:
scp -i yourkeypair.pem source destination
This works fine:
scp -r -i myssh.pem /local/directory remote_username@10.10.0.2:/remote/directory
-r is for recursive.
You could also install and set up an FTP Server, which will allow you to set up users, and directories for them to upload to. That being said, I've upvoted the above because scp/sftp is the ideal method.
The easiest way is to install Webmin and use its file manager (a Java plugin) from your browser.
# Go to the home folder
cd ~
# Download the latest version
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.660-1.noarch.rpm
# Install
sudo rpm -U webmin-1.660-1.noarch.rpm
# Change the default password of the root user
passwd
Finally, open port 10000 in the security groups.
Then log in to
https://server_name:10000
with user: root and the password you set before.

Moving a folder from Desktop to the server?

I have a folder on my Desktop. I want to copy it to my server from the Terminal.
I tried this unsuccessfully:
[~/bin]# cp -r /Users/Sam/Desktop/tig-0.14.1 ~/bin/
cp: cannot stat `/Users/Sam/Desktop/tig-0.14.1': No such file or directory
[edit]
I ran the command on my server. The problem seems to be that "/Users/Sam/Desktop/tig-0.14.1" is a folder on my Mac, not on my server.
Perhaps I cannot move the folder so simply because my server does not know where the folder is located.
I have always moved the folder via the GUI. Is the same possible in the terminal?
From the server:
scp -r username@A.B.C.D:~/Desktop/tig-0.14.1/ ~/bin/
username is your shortname on your local Mac.
A.B.C.D is the IP address of your local mac as seen by the server.
You will be prompted for your password.
Or if you wanted to push from your local client:
scp -r ~/Desktop/tig-0.14.1/ serveruser@W.X.Y.Z:~/bin/
serveruser is the user on the server whose ~/bin you want to copy into.
W.X.Y.Z is the IP address of the server as seen by your client.
You will be prompted to enter serveruser's password.
scp is part of ssh. See 'man scp' (from the terminal) for more info.
From your Mac (not the server):
# scp -r ~/Desktop/tig-0.14.1 myUsername@myServerName:~/bin
replace myUsername and myServerName appropriately.
cp is not the correct command. Try scp instead; it has similar usage (see the manual for reference).
From a Linux client:
scp -r user1@host1:/Users/Sam/Desktop/tig-0.14.1 ~/bin/
If you use a Windows client, you can use WinSCP to do this drag-and-drop style.
cp: cannot stat `/Users/Sam/Desktop/tig-0.14.1': No such file or directory
That's the problem, alright: the file you're trying to copy is not where you thought, or not named what you typed. As suggested in comments you can try using tab completion at the prompt to make sure you have everything correct:
# cp /Users/Sam/Desk<TAB>
# cp /Users/Sam/Desktop/tig<TAB>
# cp /Users/Sam/Desktop/tig-0.14.1.tar.gz
Note that tig-0.14.1.tar.gz is probably the actual file name, as found in the wild...
