Transfer a file to a remote machine (Ubuntu) while running bash remotely - bash

I have written a bash script, with a GUI (zenity) interface, that should run on the remote server (Ubuntu); I issue the command below on the local machine:
sshpass -p $PASS ssh root@$SERVER 'bash' < /tmp/dep.sh | tee >(zenity --progress --title "Tomcat Deployer" --text "Connecting to Tomcat Server..." --width=400 --height=150) >>/tmp/temp.log;
I want to transfer a file from my local machine to the server, and I want to achieve this by placing an entry in the bash file (/tmp/dep.sh) used in the command above, without opening a new session on the server.
I would prefer to use the command below to transfer the file; it should be placed in the bash script (/tmp/dep.sh) and run on the server to copy the file from my local machine. I don't want to specify my local IP as a variable and use it as the source in the command below, because the script is used on other machines too and the IP changes. And I should not transfer the file from local to server with a separate rsync and ssh, creating one more ssh session.
rsync --rsh="sshpass -p '$PASS' ssh" '$local:$APPATH/$app.war' /tmp
Can anybody work some magic to transfer the file from local to server over the already-connected ssh session above, with the help of the above rsync or by other means, without opening a new, separate connection?
Thank you!
Edit 1:
Could this be achieved with a single ssh session (single command)?:
rsync --rsh="sshpass -p serverpass ssh -o StrictHostKeyChecking=no" /home/user1/Desktop/app.war root#192.168.1.5:/tmp;
sshpass -p serverpass ssh -o StrictHostKeyChecking=no root#192.168.1.5 '/etc/init.d/tomcat start'

You'll want to use SSH multiplexing. This is done using the ControlMaster and ControlPath options. Here's an article on it.
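As a minimal sketch (the host alias and ControlPersist timeout are placeholders; adjust to your setup), you would enable multiplexing in ~/.ssh/config so that later ssh, scp, or rsync invocations reuse the first connection instead of opening a new one:
# ~/.ssh/config on the local machine (hypothetical alias "tomcat")
Host tomcat
    HostName 192.168.1.5
    User root
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
With this in place, the first sshpass -p $PASS ssh tomcat 'bash' < /tmp/dep.sh becomes the master, and a following rsync -e ssh /home/user1/Desktop/app.war tomcat:/tmp rides over the same authenticated connection without re-authenticating.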

Related

SCP fails almost immediately with 'lost connection' after successful ssh connection

I am writing a script to upload DLLs to a remote machine. I first use a command to ssh in and stop the service running. I then try to upload the new DLLs via scp, but it fails basically immediately with lost connection.
My entire shell script looks like this:
ssh -p 29170 DLL_Uploader@XXX.XXX.XX.XXX "powershell.exe; .\stop_file_watcher.ps1; exit";
scp -P 29170 $1 "DLL_Uploader@XXX.XXX.XX.XXX:/Bin/File Watcher Service"
ssh -p 29170 DLL_Uploader@XXX.XXX.XX.XXX "powershell.exe; .\start_file_watcher.ps1; exit";
You can see a pastebin of this scp debug output with -vvv here.

Secure copy over two IPs on the same network to the local machine [duplicate]

I wonder if there is a way for me to SCP the file from remote2 host directly from my local machine by going through a remote1 host.
The networks only allow connections to remote2 host from remote1 host. Also, neither remote1 host nor remote2 host can scp to my local machine.
Is there something like:
scp user1@remote1:user2@remote2:file .
First window: ssh remote1, then scp remote2:file .
Second shell: scp remote1:file .
First window: rm file; logout
I could write a script to do all these steps, but if there is a direct way, I would rather use it.
Thanks.
EDIT: I am thinking of something like opening SSH tunnels, but I'm confused about what value to put where.
At the moment, to access remote1, i have the following in $HOME/.ssh/config on my local machine.
Host remote1
User user1
Hostname localhost
Port 45678
Once on remote1, to access remote2, it's the standard local DNS and port 22. What should I put on remote1 and/or change on localhost?
I don't know of any way to copy the file directly in one single command, but if you can concede to running an SSH instance in the background to just keep a port forwarding tunnel open, then you could copy the file in one command.
Like this:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
Note that you connect as user2@localhost in the actual scp command, because it is on port 1234 on localhost that the first ssh instance is listening to forward connections to remote2. Note also that you don't need to run the first command for every subsequent file copy; you can simply leave it running.
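If you'd rather not keep an interactive window open for the tunnel, the same ssh can be backgrounded (a small variation using standard flags: -f backgrounds ssh after authentication, -N skips running a remote command):
# open the tunnel in the background, then copy as above
ssh -f -N -L 1234:remote2:22 -p 45678 user1@remote1
scp -P 1234 user2@localhost:file .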
Double ssh
Even in your complex case, you can handle file transfer using a single command line, simply with ssh ;-)
And this is useful if remote1 cannot connect to localhost:
ssh user1#remote1 'ssh user2#remote2 "cat file"' > file
tar
But you lose file properties (ownership, permissions...).
However, tar is your friend to keep these file properties:
ssh user1#remote1 'ssh user2#remote2 "cd path2; tar c file"' | tar x
You can also compress to reduce network bandwidth:
ssh user1#remote1 'ssh user2#remote2 "cd path2; tar cj file"' | tar xj
And tar also allows you to transfer a directory recursively over basic ssh:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj
ionice
If the file is huge and you do not want to disturb other important network applications, you will miss the network throughput limiting provided by the scp and rsync tools (e.g. scp -l 1024 user@remote:file uses no more than 1 Mbit/second).
But, a workaround is using ionice to keep a single command line:
ionice -c2 -n7 ssh u1@remote1 'ionice -c2 -n7 ssh u2@remote2 "cat file"' > file
Note: ionice may not be available on old distributions.
This will do the trick:
scp -o 'ProxyCommand ssh user@remote1 nc %h %p' \
    user@remote2:path/to/file .
To SCP files from the host remote2 directly without the long command line, add the two directives (Host and ProxyCommand) to your ~/.ssh/config file (see also this answer on superuser). Then you can run:
scp user#remote2:path/to/file .
from your local machine without having to think about remote1.
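The corresponding ~/.ssh/config entry would look like this (same user and host names as in the command above):
# ~/.ssh/config on the local machine
Host remote2
    ProxyCommand ssh user@remote1 nc %h %p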
With OpenSSH version 7.3 and up it is easy: use the ProxyJump option in the config file.
# Add to ~/.ssh/config
Host bastion
Hostname bastion.client.com
User userForBastion
IdentityFile ~/.ssh/bastion.pem
Host appMachine
Hostname appMachine.internal.com
User bastion
ProxyJump bastion # openssh 7.3 version new feature ProxyJump
IdentityFile ~/.ssh/appMachine.pem # no need to copy the pem file to the bastion host
Commands to run to log in or copy:
ssh appMachine # no need to specify any tunnel.
scp helloWorld.txt appMachine:. # copy without an intermediate jumphost/bastion host copy
Of course, you can specify the bastion jump host using the "-J" option to the ssh command, if it is not configured in the config file.
Note: scp does not seem to support the "-J" flag as of now (I could not find it in the man pages; however, the scp above works with the config file setting).
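For reference, the -J form looks like this, using the host names from the config above (recent OpenSSH releases have since added a -J flag to scp as well):
# adjust the target user as needed
ssh -J userForBastion@bastion.client.com appMachine.internal.com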
There is a new option in scp, added recently for exactly this job, that is very convenient: -3.
TL;DR For the current host that has authentication already set up in ssh config files, just do:
scp -3 remote1:file remote2:file
Your scp must be a recent version.
All the other techniques mentioned require you to set up authentication from remote1 to remote2 or vice versa, which is not always a good idea.
The -3 argument means you want to move files between two remote hosts using the current host as an intermediary; this host performs the authentication to both remote hosts, so they don't have to have access to each other.
You just have to set up authentication in the ssh config files, which is fairly easy and well documented, and then run the command from the TL;DR.
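A minimal sketch of that setup, assuming key-based authentication (the user names and key paths are placeholders):
# ~/.ssh/config on the current (intermediary) host
Host remote1
    User user1
    IdentityFile ~/.ssh/remote1_key
Host remote2
    User user2
    IdentityFile ~/.ssh/remote2_key
With those entries in place, scp -3 remote1:file remote2:file authenticates to both hosts from the current machine.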
The source for this answer is https://superuser.com/a/686527/713762
This configuration works nicely for me:
Host jump
User username
Hostname jumphost.yourorg.intranet
Host production
User username
Hostname production.yourorg.intranet
ProxyCommand ssh -q -W %h:%p jump
Then the command
scp myfile production:~
copies myfile to the production machine.
A simpler way:
scp -o 'ProxyJump your.jump.host' /local/dir/myfile.txt remote.internal.host:/remote/dir

Create a script that writes a local image file over the hard drive of a remote server using dd and netcat

I have been struggling for a while trying to create a script that writes a local image file over the hard drive of a remote server.
For that I am trying to use Linux dd over netcat with gzip compression.
The script will ssh into a remote server, execute a remote dd-over-netcat command listening on a specific port, and then launch a command to write an image to that remote server.
I'm not sure why it is not working for me. I have many assumptions and I have tried many approaches, including running the remote scripts in the background, or putting the ssh session itself in the background, but it does not work for me from within a script.
The commands I am trying to run:
SSH to the remote server:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i id_rsa (SERVER_IP)
On the remote server, start a listening session of dd over nc on port 9023, decompressing with gunzip:
/bin/nc -l -p 9023|/bin/gunzip -c|/bin/dd bs=64k of=/dev/sda &
Exit to the main server and execute:
dd if=/var/tmp/ADT/Server-full/image.gz bs=64k |pv|nc (SERVER_IP) 9023
When running the commands one by one, it works and the dd session runs fine. When running from a script, the dd session hangs immediately.
You can redirect local input through a compressed ssh session, and use that input on the other side. You can do this directly without netcat:
ssh -C user#server 'dd of=/dev/sda' < /path/to/local.image
Add the other necessary options you need for ssh and dd.
The CompressionLevel option in man ssh should also be interesting for your use case.
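Since your image is already gzip-compressed, one variation (a sketch reusing the paths and placeholders from your question; pv is optional) is to stream it as-is and decompress on the remote side instead of relying on ssh's own compression:
pv /var/tmp/ADT/Server-full/image.gz | ssh -i id_rsa (SERVER_IP) 'gunzip -c | dd bs=64k of=/dev/sda'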

Send and execute scripts via a PuTTY (ssh) connection

I have access to some machines via putty(ssh).
Sometimes I need to run scripts on those machines. Is there a simple and secure way to send files from my computer to the machine via ssh connection?
You can use scp to copy files via ssh:
scp local_file remote_machine:/target/location
You can also run commands directly through ssh:
ssh remote_machine 'echo 1 > remote_file'
And even pipe to ssh, then continue on the remote:
cat local_file | ssh remote_machine 'cat > remote_file'

Is there a way to make rsync execute a command before beginning its transfer

I am working on a script which will be used to transfer a file (using rsync) from a remote location and then perform some basic operations on the retrieved content.
When I initially connect to the remote location (not running an rsync daemon, I'm just using rsync to retrieve the files) I am placed in a non-standard shell. In order to enter the bash shell I need to enter "run util bash". Is there a way to execute "run util bash" before rsync begins to transfer the files over?
I am open to other suggestions if there is a way to do this using scp/ftp instead of rsync.
One way is to execute rsync from the server, instead of from the client. An ssh reverse tunnel allows us to temporarily access the local machine from the remote server.
Assume the local machine has an ssh server on port 22.
SSH into the remote host, specifying a reverse tunnel that maps a port on the remote machine (in this example, 2222) to port 22 on our local machine.
Execute your rsync command, replacing any reference to your local machine with the reverse ssh tunnel address: my-local-user@localhost.
Add a port option to rsync's ssh to have it use port 2222.
The command:
ssh -R 2222:localhost:22 remoteuser@remotemachine << EOF
# we are on the remote server.
# we can ssh back into the box running the ssh client via port 2222
run util bash
rsync -e "ssh -p 2222" --args /path/to/remote/source my-local-user@localhost:/path/to/local/machine/dest
EOF
Reference to pass complicated commands to ssh:
What is the cleanest way to ssh and run multiple commands in Bash?
You can also achieve it using --rsync-path, e.g.:
rsync --rsync-path="run util bash && rsync" -e "ssh -T -c aes128-ctr -o Compression=no -x" ./tmp root@example.com:~
--rsync-path is normally used to specify what program is to be run on the remote machine to start up rsync. It is often used when rsync is not in the default remote shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate.
For more details, refer to the rsync man page.
