I am wondering if there is a way to run a local script on a remote server without copying the local script to the server.
I know I can use scp to copy a script to a server and then use ssh to run it, but I am trying to avoid copying the script and all of its dependencies to the remote host.
Let's say I have localscript.sh on localhost.com and the target host is remotehost.com.
How do you run localscript.sh from localhost.com and have it execute on remotehost.com?
Since the script is bash, you can invoke the shell on the remote machine and have it read the script from the local machine:
ssh user@remotehost.com "bash -s" < localscript.sh
ssh root@remoteServer "bash -s" < /path_to_File/file.sh
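If the script takes arguments, you can pass them after bash -s; everything following -s becomes the script's positional parameters. A small sketch (the argument names are illustrative):
ssh user@remotehost.com "bash -s" -- arg1 arg2 < localscript.sh
# inside localscript.sh, $1 is arg1 and $2 is arg2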
What I am trying to do is to log in to a remote Linux server over SSH from my Windows machine, by running a shell script via Git Bash.
I would like to write a script which will be used by a user with basic IT knowledge. This script will execute a bunch of commands on the remote machine, so it needs to establish an SSH connection.
What I have tried to write in this script so far is:
ssh username@ip <password>
EDIT: You should consider installing Jenkins on the remote system.
Running a command on a remote system can be done via:
ssh user@host command arg1 arg2
You cannot supply the password on the command line; if you omit it, a password prompt will appear. To get around the password prompt, you should consider setting up passwordless SSH login: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
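A minimal sketch of setting that up (the key type is a common default; username and ip are illustrative):
ssh-keygen -t ed25519              # generate a key pair, accept the defaults
ssh-copy-id username@ip            # install the public key on the remote host
ssh username@ip 'echo no prompt'   # later logins use the key, no password prompt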
Rephrasing what you said, you want to write a script (namely script1.sh) which does the following (in this order):
Start an SSH connection with a remote host
Execute some commands, possibly written in another script (namely script2.sh)
Close the connection / Keep it open for additional command line commands
If you want to put the remote commands in script2.sh instead of listing them in script1.sh, you need to consider that script2.sh must be on the remote server. If it is not, you may delegate to script1.sh the creation/copy of script2.sh on the remote machine. You may copy it into a temporary folder.
In this case, my suggestion is to write script1.sh as follows (a combined sketch is shown after the steps):
Copy script2.sh over with
scp /path/to/local/script2.sh user@host:/path/to/remote/script2.sh
Switch from the local shell to the remote shell with
ssh user@host
Execute script2.sh with
sh /path/to/remote/script2.sh
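Put together, script1.sh might look like this minimal sketch (user, host, and paths are placeholders; steps 2 and 3 are combined into one non-interactive ssh call):
#!/bin/bash
# script1.sh: copy script2.sh to the remote machine, then run it there
scp /path/to/local/script2.sh user@host:/tmp/script2.sh
ssh user@host 'sh /tmp/script2.sh'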
Instead, if you prefer to list everything in just one script file, you may want to write the following (see the sketch after these steps):
Switch from the local shell to the remote shell with
ssh user@host
Execute the commands
echo "This command has been executed on the remote server"
echo "This command has also been executed on the remote server"
..
Possibly close the SSH connection to prevent the user from executing additional commands
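As a single file, that could look like the following sketch (user and host are placeholders; quoting EOF keeps the local shell from expanding anything before it is sent):
#!/bin/bash
# script1.sh: run all remote commands over one SSH session
ssh user@host << 'EOF'
echo "This command has been executed on the remote server"
echo "This command has also been executed on the remote server"
exit   # end the session so no further commands can be run
EOF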
You may consider copying your SSH key onto the remote server (e.g. with ssh-copy-id) so as to avoid password prompts.
I need to run a bash script that takes some parameters from server-1. I run the script from my local server with
ssh user@server-1 bash -s <script.sh
I then need to use those parameters in all kinds of commands on my local server, and server-2 is involved as well. But the script will still be running on server-1 because of
ssh user@server-1 bash -s <script.sh
Maybe I can use two scripts, but I want them to live only on the local server, and putting more commands in the script after the ssh call doesn't seem to work.
I would place the script on the remote server and remote execute it via SSH.
If the script should change over time, then break it up into 2-3 steps
1. gather any additional parameters from the remote machine
2. copy the script to the remote machine using scp
3. ssh to "remote execute" the script on the remote machine
I am not sure which parameters you need from the remote system.
I would try to hand them over via command-line options to the script in step 3.
Otherwise, "hack"/patch them in before step 2.
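A minimal sketch of that three-step driver, run from the local server (the hostname query stands in for whatever parameter you actually need; paths are illustrative):
#!/bin/bash
# 1. gather a parameter from the remote machine
param=$(ssh user@server-1 'hostname -f')
# 2. copy the script to the remote machine
scp script.sh user@server-1:/tmp/script.sh
# 3. "remote execute" the script, handing the parameter over as an option
ssh user@server-1 "bash /tmp/script.sh '$param'"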
I am writing a bash script that involves ssh-ing into a remote host and running commands there. That in itself is not a problem. The issue is that I want to run a command which doesn't exist locally, only on the remote. The script fails with bash: line 1: type: remote_only_command: not found, even though it's successfully connecting to the remote host and can run basic commands without issue.
I can run the command on the remote host if I ssh in and run it manually. I've tried writing a separate bash script on the remote host and running that through the script (sh remote_script.sh), but that gets the same command not found error.
ssh $REMOTE var=$var 'bash -s' << 'EOF'
ls # works no problem, lists files on the remote server
remote_only_command # bash: line 1: type: remote_only_command: not found
EOF
Is it possible to run a command that is only accessible from the remote host and not locally where the script is being run?
I think this is the way it should work, as the command is only executed on the remote host. But I suspect your problem is the environment, which is NOT carried over ssh: a non-interactive session may have a different PATH than your interactive login. Try to use the complete path to the command, e.g.:
/path/to/remote_command
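Two ways to track that down, sketched with the asker's variable names (remote_only_command stands in for the real command):
# print where the command lives on the remote host
ssh "$REMOTE" 'command -v remote_only_command'
# or force a login shell, which sources the remote profile and its PATH
ssh "$REMOTE" 'bash -lc "remote_only_command"'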
I am working on a script which will be used to transfer a file (using rsync) from a remote location and then perform some basic operations on the retrieved content.
When I initially connect to the remote location (not running an rsync daemon, I'm just using rsync to retrieve the files) I am placed in a non-standard shell. In order to enter the bash shell I need to enter "run util bash". Is there a way to execute "run util bash" before rsync begins to transfer the files over?
I am open to other suggestions if there is a way to do this using scp/ftp instead of rsync.
One way is to execute rsync from the server, instead of from the client. An SSH reverse tunnel allows us to temporarily access the local machine from the remote server.
Assume the local machine has an SSH server listening on port 22.
SSH into the remote host while specifying a reverse tunnel that maps a port on the remote machine (in this example, 2222) to port 22 on our local machine.
Execute your rsync command, replacing any reference to your local machine with the reverse SSH tunnel address: my-local-user@localhost
Add a port option to rsync's ssh command so that it uses port 2222.
The command:
ssh -R 2222:localhost:22 remoteuser@remotemachine << EOF
# we are on the remote server.
# we can ssh back into the box running the ssh client via port 2222
run util bash
rsync -e "ssh -p 2222" --args /path/to/remote/source my-local-user@localhost:/path/to/local/machine/dest
EOF
Reference to pass complicated commands to ssh:
What is the cleanest way to ssh and run multiple commands in Bash?
You can achieve it using --rsync-path as well. E.g.:
rsync --rsync-path="run util bash && rsync" -e "ssh -T -c aes128-ctr -o Compression=no -x" ./tmp root@example.com:~
--rsync-path is normally used to specify what program is to be run on the remote machine to start up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate.
For more details, refer to the rsync man page.
I have written a bash script which I should run on the remote server (Ubuntu) with a GUI (zenity) interface, and I will issue the below command on the local machine.
sshpass -p $PASS ssh root@$SERVER 'bash' < /tmp/dep.sh | tee >(zenity --progress --title "Tomcat Deployer" --text "Connecting to Tomcat Server..." --width=400 --height=150) >>/tmp/temp.log;
I want to transfer a file from my local machine to the server, and I want to achieve this by placing an entry in the bash file (/tmp/dep.sh) used in the above command itself, without opening a new session on the server.
I would prefer the command below to transfer the file to the server; I should place it in the bash script (/tmp/dep.sh) so that it runs on the server and copies the file from my local machine. I don't want to specify my local IP as a variable and use it as the source in the command below, because the script is used on other machines too and the IP changes. And I should not transfer the file from local to server by writing a separate rsync & ssh, which would create one more SSH session.
rsync --rsh="sshpass -p '$PASS' ssh" '$local:$APPATH/$app.war' /tmp
Can anybody do any magic to transfer the file from local to server over the above connected SSH session, with the help of the above rsync or by other means, without opening a new separate connection?
Thank you!
Edit 1:
Could this be achieved with a single SSH session (single command)?
rsync --rsh="sshpass -p serverpass ssh -o StrictHostKeyChecking=no" /home/user1/Desktop/app.war root@192.168.1.5:/tmp;
sshpass -p serverpass ssh -o StrictHostKeyChecking=no root@192.168.1.5 '/etc/init.d/tomcat start'
You'll want to use SSH multiplexing. This is done using the ControlMaster and ControlPath options.
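A minimal sketch using the hosts from the question (the control-socket path and persist timeout are illustrative):
# open a master connection once, in the background, and keep it alive
sshpass -p serverpass ssh -o ControlMaster=auto -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=10m -fN root@192.168.1.5
# rsync reuses the master's TCP connection instead of opening a new one
rsync --rsh="ssh -o ControlPath=~/.ssh/cm-%r@%h:%p" /home/user1/Desktop/app.war root@192.168.1.5:/tmp
# and so does the follow-up command, with no second password prompt
ssh -o ControlPath=~/.ssh/cm-%r@%h:%p root@192.168.1.5 '/etc/init.d/tomcat start'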