Run local Perl script on remote server through expect

I have a Perl script on my local machine, and I want to run it on a remote server. The following command works fine:
ssh user@ipaddress "perl - --arg1 arg1 --arg2 arg2" < /path/to/local/script.pl
The thing is that a prompt shows up to ask me for the password, and I don't want that.
I looked around the net, and found 3 solutions:
Using a public/private key authentication -> Not ok in my case
Using sshpass -> Not in my company's 'official' repo so cannot install it
Using expect
I followed this page to create my expect script (I'm new to expect): How to use bash/expect to check if an SSH login works. I took the script from the accepted answer and replaced the first three lines with #!/usr/bin/expect -f to make it an expect script.
Then I ran
./ssh.exp user password ipaddress "perl - --arg1 arg1 --arg2 arg2" < /path/to/local/script.pl
And I got a timeout error. It was as if I had run ./ssh.exp user password ipaddress "perl"
I also tried adding quotes, like
./ssh.exp user password ipaddress '"perl - --arg1 arg1 --arg2 arg2" < /path/to/local/script.pl'
But then I get a /path/to/local/script.pl not found error.
So can anyone help me figure out how to run a script through expect? Thanks a lot.

OK, I found it myself.
Still using this answer (How to use bash/expect to check if an SSH login works), but I replaced the sixth line with
set pid [ spawn -noecho sh -c "ssh $1@$3 $4 < /path/to/local/script.pl" ]
And everything works like a charm.
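For reference, a minimal sketch of what the whole wrapper might then look like (untested; the script name run_remote.sh and the timeout are my own additions, and the argument order user, password, host, command matches the calls above). Wrapping the pipeline in sh -c is what makes the redirection happen inside the spawned shell instead of being consumed by expect itself:
#!/bin/bash
# run_remote.sh (hypothetical name) - sketch of the fixed wrapper
# usage: ./run_remote.sh user password ipaddress "perl - --arg1 arg1 --arg2 arg2"
expect <<EOF
set timeout 60
spawn -noecho sh -c "ssh $1@$3 $4 < /path/to/local/script.pl"
expect "assword:"
send "$2\r"
expect eof
EOF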

Related

Server asking terminal type prevents me from auto-running commands

I wish to ssh into a server in my department, cd to a directory automatically, and hopefully be able to auto-run other commands if I want. I have tried several ways that have worked for lots of people on Stack Overflow, but they did not work for me.
I have tried the methods from the following threads:
Can I ssh somewhere, run some commands, and then leave myself a prompt?
Run ssh and immediately execute command
In particular, I have tried:
1. ssh one-liner
ssh -t user@domain.com 'cd /some/path; bash -l'
and
2. using the expect script
#!/usr/bin/expect -f
spawn ssh $argv
send "cd /some/path\n"
interact
Both codes look fine, and have worked for people in the threads. However, they did not work for me.
The problem, I expect, lies in the fact that the department server automatically asks me about my terminal type as I log in, preventing my auto-commands from running properly.
$ ssh -t user@domain.com 'cd /some/path; bash -l'
Terminal type? [xterm-256color]
After hitting Enter, it takes me into the home directory as if I hadn't run cd at all.
The second way gives a similar result.
How can I get over this? Thank you very much in advance!
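One approach to sketch (untested; it assumes the prompt text is literally "Terminal type?" as shown, and that the remote shell prompt ends in "$ "): extend the expect script to answer the terminal-type prompt before sending the cd command.
#!/bin/bash
# sketch: answer the terminal-type question, cd, then hand the session over
expect <<'EOF'
spawn ssh user@domain.com
expect "Terminal type?"
send "\r"            ;# accept the default [xterm-256color]
expect "$ "          ;# assumed shell prompt
send "cd /some/path\r"
interact
EOF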

How to ssh in a non-interactive bash script with the password [duplicate]

I need to create a script that automatically inputs a password to OpenSSH ssh client.
Let's say I need to SSH into myname@somehost with the password a1234b.
I've already tried...
#~/bin/myssh.sh
ssh myname@somehost
a1234b
...but this does not work.
How can I get this functionality into a script?
First you need to install sshpass.
Ubuntu/Debian: apt-get install sshpass
Fedora/CentOS: yum install sshpass
Arch: pacman -S sshpass
Example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME#SOME_SITE.COM
Custom port example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME#SOME_SITE.COM:2400
Notes:
sshpass can also read a password from a file when the -f flag is passed.
Using -f prevents the password from being visible if the ps command is executed.
The file that the password is stored in should have secure permissions.
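A minimal sketch of the -f variant described in the notes (the file path is illustrative):
# store the password once, readable only by you
echo 'YOUR_PASSWORD' > ~/.ssh_pw
chmod 600 ~/.ssh_pw
# read the password from the file instead of exposing it on the command line
sshpass -f ~/.ssh_pw ssh -o StrictHostKeyChecking=no YOUR_USERNAME@SOME_SITE.COM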
After looking for an answer to the question for months, I finally found a better solution: writing a simple script.
#!/usr/bin/expect
set timeout 20
set cmd [lrange $argv 1 end]
set password [lindex $argv 0]
eval spawn $cmd
expect "password:"
send "$password\r";
interact
Put it in /usr/bin/exp, so you can use:
exp <password> ssh <anything>
exp <password> scp <anysrc> <anydst>
Done!
Use public key authentication: https://help.ubuntu.com/community/SSH/OpenSSH/Keys
In the source host run this only once:
ssh-keygen -t rsa # press ENTER for every field
ssh-copy-id myname@somehost
That's all, after that you'll be able to do ssh without password.
You could use an expect script. I have not written one in quite some time, but it should look like the example below. You will need to head the script with #!/usr/bin/expect
#!/usr/bin/expect -f
spawn ssh HOSTNAME
expect "login:"
send "username\r"
expect "Password:"
send "password\r"
interact
Variant I
sshpass -p PASSWORD ssh USER@SERVER
Variant II
#!/usr/bin/expect -f
spawn ssh USERNAME@SERVER "touch /home/user/ssh_example"
expect "assword:"
send "PASSWORD\r"
interact
sshpass + autossh
One nice bonus of the already-mentioned sshpass is that you can use it with autossh, eliminating even more of the interactive inefficiency.
sshpass -p mypassword autossh -M0 -t myusername@myserver.mydomain.com
This will allow autoreconnection if, e.g., your Wi-Fi is interrupted by closing your laptop.
With a jump host
sshpass -p `cat ~/.sshpass` autossh -M0 -Y -tt -J me@jumphost.mydomain.com:22223 -p 222 me@server.mydomain.com
sshpass with better security
I stumbled on this thread while looking for a way to ssh into a bogged-down server -- it took over a minute to process the SSH connection attempt, and timed out before I could enter a password. In this case, I wanted to be able to supply my password immediately when the prompt was available.
(And if it's not painfully clear: with a server in this state, it's far too late to set up a public key login.)
sshpass to the rescue. However, there are better ways to go about this than sshpass -p.
My implementation skips directly to the interactive password prompt (no time wasted seeing if public key exchange can happen), and never reveals the password as plain text.
#!/bin/bash
# preempt-ssh.sh
# usage: same arguments that you'd pass to ssh normally
echo "You're going to run (with our additions): ssh $@"
# Read password interactively and save it to the environment
read -s -p "Password to use: " SSHPASS
export SSHPASS
# have sshpass load the password from the environment, and skip public key auth
# all other args come directly from the input
sshpass -e ssh -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no "$@"
# clear the exported variable containing the password
unset SSHPASS
I don't think I saw anyone suggest this and the OP just said "script" so...
I needed to solve the same problem and my most comfortable language is Python.
I used the paramiko library. Furthermore, I also needed to issue commands for which I would need escalated permissions, using sudo. It turns out sudo can accept its password via stdin with the "-S" flag! See below:
import logging
import paramiko
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
ssh_client = paramiko.SSHClient()
# To avoid an "unknown hosts" error. Solve this differently if you must...
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# This mechanism uses a private key.
pkey = paramiko.RSAKey.from_private_key_file(PKEY_PATH)
# This mechanism uses a password.
# Get it from cli args or a file or hard code it, whatever works best for you
password = "password"
ssh_client.connect(hostname="my.host.name.com",
                   username="username",
                   # Uncomment one of the following...
                   # password=password,
                   # pkey=pkey,
                   )
# do something restricted
# If you don't need escalated permissions, omit everything before "mkdir"
command = "echo {} | sudo -S mkdir /var/log/test_dir 2>/dev/null".format(password)
# In order to inspect the exit code
# you need to go under paramiko's hood a bit
# rather than just using "ssh_client.exec_command()"
chan = ssh_client.get_transport().open_session()
chan.exec_command(command)
exit_status = chan.recv_exit_status()
if exit_status != 0:
    stderr = chan.recv_stderr(5000)
    # Note that sudo's "-S" flag will send the password prompt to stderr
    # so you will see that string here too, as well as the actual error.
    # It was because of this behavior that we needed access to the exit code
    # to assert success.
    logger.error("Uh oh")
    logger.error(stderr)
else:
    logger.info("Successful!")
Hope this helps someone. My use case was creating directories, sending and untarring files, and starting programs on ~300 servers at a time. As such, automation was paramount. I tried sshpass, expect, and then came up with this.
# create a file that echoes your password .. you may need to get crazy with escape chars, or for extra credit put ASCII in your password...
echo "echo YerPasswordhere" > /tmp/1
chmod 777 /tmp/1
# set some vars for ssh to play nice; they're meant for GUI use, but here we are using them to pass creds.
export SSH_ASKPASS="/tmp/1"
export DISPLAY=YOURDOINGITWRONG
setsid ssh root@owned.com -p 22
reference: https://www.linkedin.com/pulse/youre-doing-wrong-ssh-plain-text-credentials-robert-mccurdy?trk=mp-reader-card
This is how I log in to my servers:
ssp <server_ip>
alias ssp='/home/myuser/Documents/ssh_script.sh'
cat /home/myuser/Documents/ssh_script.sh
ssp:
#!/bin/bash
sshpass -p mypassword ssh root@$1
And therefore:
ssp server_ip
This is basically an extension of abbotto's answer, with some additional steps (aimed at beginners) to make starting up your server, from your linux host, very easy:
Write a simple bash script, e.g.:
#!/bin/bash
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no <YOUR_USERNAME>#<SEVER_IP>
Save the file, e.g. 'startMyServer', then make the file executable by running this in your terminal:
sudo chmod +x startMyServer
Move the file to a folder which is in your 'PATH' variable (run 'echo $PATH' in your terminal to see those folders). So for example move it to '/usr/bin/'.
And voila, now you are able to get into your server by typing 'startMyServer' into your terminal.
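Put together, the steps above look like this in the terminal (using /usr/bin/ as the PATH directory, as suggested):
sudo chmod +x startMyServer
sudo mv startMyServer /usr/bin/
startMyServer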
P.S. (1) this is not very secure, look into ssh keys for better security.
P.S. (2) SMshrimant's answer is quite similar and might be more elegant to some. But I personally prefer to work in bash scripts.
I am using the solution below, but for that you have to install sshpass. If it's not already installed, install it using sudo apt install sshpass.
Now you can do this,
sshpass -p *YourPassword* ssh root@IP
You can create a bash alias as well so that you don't have to run the whole command again and again.
Follow below steps
cd ~
sudo nano .bash_profile
at the end of the file, add the code below
mymachine() { sshpass -p *YourPassword* ssh root@IP; }
source .bash_profile
Now just run the mymachine command from the terminal and you'll enter your machine without a password prompt.
Note:
mymachine can be any command of your choice.
If security doesn't matter to you for this task and you just want to automate the work, you can use this method.
If you are doing this on a Windows system, you can use Plink (part of PuTTY).
plink your_username@yourhost -pw your_password
I have a better solution, which involves logging in with your own account and then changing to the root user.
It is a bash script
http://felipeferreira.net/index.php/2011/09/ssh-automatic-login/
The answer of @abbotto did not work for me; I had to do some things differently:
yum install sshpass changed to - rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/sshpass-1.05-1.el6.x86_64.rpm
the command to use sshpass changed to - sshpass -p "pass" ssh user@mysite -p 2122
I managed to get it working with that:
SSH_ASKPASS="echo \"my-pass-here\""
ssh -tt remotehost -l myusername
This works:
#!/usr/bin/expect -f
spawn ssh USERNAME@SERVER "touch /home/user/ssh_example"
expect "assword:"
send "PASSWORD\r"
interact
BUT!!! If you have an error like the one below, just start your script with expect, not bash, as shown here: expect myssh.sh
instead of bash myssh.sh
/bin/myssh.sh: 2: spawn: not found /bin/myssh.sh: 3: expect: not found /bin/myssh.sh: 4: send: not found /bin/myssh.sh: 5: expect: not found /bin/myssh.sh: 6: send: not found
I got this working as follows
.ssh/config was modified to eliminate the yes/no prompt - I'm behind a firewall so I'm not worried about spoofed ssh keys
host *
StrictHostKeyChecking no
Create a response file for expect, i.e. answer.expect:
set timeout 20
set node [lindex $argv 0]
spawn ssh root@$node service hadoop-hdfs-datanode restart
expect "*?assword" {
send "password\r" ;# <- your password here
}
interact
Create your bash script and just call expect in the file
#!/bin/bash
i=1
while [ $i -lt 129 ] # a few nodes here
do
expect answer.expect hadoopslave$i
i=$((i + 1))
sleep 5
done
Gets 128 hadoop datanodes refreshed with new config - assuming you are using an NFS mount for the hadoop/conf files
Hope this helps someone - I'm a Windows numpty and this took me about 5 hours to figure out!
In the example below I'll show the solution that I used:
The scenario: I want to copy a file from a server using a shell script:
#!/bin/bash
PASSWORD=password
my_script=$(expect -c "spawn scp userName@server-name:path/file.txt /home/Amine/Bureau/trash/test/
expect \"password:\"
send \"$PASSWORD\r\"
expect \"#\"
send \"exit \r\"
")
echo "$my_script"
Solution 1: use sshpass
#~/bin/myssh.sh
sshpass -p a1234b ssh myname@somehost
You can install it by:
# Ubuntu/Debian
$ sudo apt-get install sshpass
# Red Hat/Fedora/CentOS
$ sudo yum install sshpass
# Arch Linux
$ sudo pacman -S sshpass
#OS X
brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb
or download the Source Code from here, then
tar xvzf sshpass-1.08.tar.gz
cd sshpass-1.08
./configure
sudo make install
Solution 2: set up SSH passwordless login
Let's say you need to SSH into bbb@2.2.2.2 (remote server B) with the password 2b2b2b from aaa@1.1.1.1 (client server A).
Generate the public key(.ssh/id_rsa.pub) and private key(.ssh/id_rsa) in A with the following commands
ssh-keygen -t rsa
[Press enter key]
[Press enter key]
[Press enter key]
Use the following command to distribute the generated public key (.ssh/id_rsa.pub) to server B under bbb's .ssh directory as a file named authorized_keys
ssh-copy-id bbb@2.2.2.2
You need to enter the password for the first ssh login; after that it will log in automatically, no need to enter it again!
ssh bbb@2.2.2.2 [Enter]
2b2b2b
And then your script can be
#~/bin/myssh.sh
ssh myname@somehost
Use this tossh script within your script. The first argument is the hostname and the second is the password.
#!/usr/bin/expect
set pass [lindex $argv 1]
set host [lindex $argv 0]
spawn ssh -t root@$host echo Hello
expect "*assword: "
send "$pass\n"
interact
To connect to a remote machine through a shell script, use the command below:
sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USERNAME@IPADDRESS
where IPADDRESS, USERNAME, and PASSWORD are input values which need to be provided in the script, or, if you want to provide them at runtime, use the "read" command.
This should help in most of the cases (you need to install sshpass first!):
#!/usr/bin/bash
read -p 'Enter Your Username: ' UserName;
read -p 'Enter Your Password: ' Password;
read -p 'Enter Your Domain Name: ' Domain;
sshpass -p "$Password" ssh -o StrictHostKeyChecking=no $UserName#$Domain
In Linux/Ubuntu:
ssh username#server_ip_address -p port_number
Press enter and then enter your server password
If you are not a root user, add sudo at the start of the command.

Remote sudo, execute a command and write output in local terminal

What I'm trying to do: connect to a remote server as a normal user with sudo rights, then sudo to root and execute a command, and see the output in my local terminal. I wrote a small script like this:
#!/bin/bash
my_argument=$1
ssh -t username@hostname 'sudo su -; /path_to_my_script $1'
I type the password twice (once for ssh, once for sudo), but I see nothing in my local terminal, and the script looks terminated on the remote host. I believe the second problem could be resolved by using exit, but I am a little confused about how to get the output to my local terminal.
Thanks
A string inside '' is taken literally, so you are passing the literal dollar sign and 1 as a parameter to the script. If you want the string to be interpreted, place it inside "", like:
ssh -t username@hostname "sudo /path_to_my_script $1"
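To illustrate the difference between the two quoting styles (paths hypothetical):
ssh -t username@hostname 'sudo /path_to_my_script $1'   # remote side sees the literal string $1
ssh -t username@hostname "sudo /path_to_my_script $1"   # $1 is expanded locally before ssh runs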

Running interactive Bash commands over ssh

I am trying to automate my server provisioning process using chef. Since I don't want to run chef as root, I need a chef/deployer user. But I don't want to create this user manually. Instead, I want to automate this step. So I took a shot at scripting it but ran into an issue:
The problem is that if I run
>ssh root@123.345.345.567 '/bin/bash -e' < ./add_user.sh
where add_user contains
# ...if the username doesn't exist already
adduser $USERNAME --gecos ''
I never see the output or the prompts of the command.
Is there a way to run interactive commands in this way?
Is there a better way to add users in an automated fashion?
Try this:
ssh -t root@<ipaddress> adduser $USERNAME --gecos ''
Not sure why you have a $ in the IP address in your original example - that's likely to cause ssh to fail to connect, but since you didn't indicate that sort of failure, I'm assuming that's just a typo.
Since add_user.sh is just a simple command, there's no need for the added complexity of explicitly running bash or the redirection, just run the adduser command via ssh.
And lastly, since $USERNAME is likely defined on the local end, and not on the remote end, even if you could get your original command to "do what you said", you'd end up running adduser --gecos on the remote end, which isn't what you intended.
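A quick sketch of that point (the username value is illustrative): with double quotes the local shell substitutes $USERNAME before ssh sends the command, which is what you want here:
USERNAME=deployer                                        # defined locally
ssh -t root@<ipaddress> "adduser $USERNAME --gecos ''"   # remote runs: adduser deployer --gecos ''
ssh -t root@<ipaddress> 'adduser $USERNAME --gecos ""'   # $USERNAME expands remotely, likely empty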
Try using:
ssh -t root@$123.345.345.567 '/bin/bash -e' < ./add_user.sh
instead.

How to use SSH to run a local shell script on a remote machine?

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and B. My script is on machine A which will run some of my code on a remote machine, machine B.
The local and remote computers can be either Windows or Unix based system.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (note the ' escaped heredoc)
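The quoting of the heredoc delimiter is what does the escaping: with <<'ENDSSH' the body is sent to the remote shell verbatim, while an unquoted <<ENDSSH would let the local shell expand variables first. A quick sketch:
ssh user@host <<'ENDSSH'
echo $HOSTNAME    # quoted delimiter: $HOSTNAME expands on the remote host
ENDSSH
ssh user@host <<ENDSSH
echo $HOSTNAME    # unquoted delimiter: expanded locally before ssh runs
ENDSSH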
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that a heredoc just sends its stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works: ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case echo is our command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values, and sending everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values, thanks to local command-line evaluation, which defines environment variables on the remote server; it then executes the bash -s command using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the ~/.ssh directory of the user's home on hostB. That will allow for passwordless authentication (if accepted as an auth method in the ssh server's configuration).
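A minimal sketch of that key setup, run from hostA (this assumes OpenSSH's ssh-copy-id is available, as used elsewhere on this page):
ssh-keygen -t rsa          # only if you don't already have a key pair
ssh-copy-id user@hostB     # appends your public key to ~/.ssh/authorized_keys on hostB
ssh user@hostB "ls -la"    # now runs without a password prompt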
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an ssh server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a TCL extension known as Expect, it is designed precisely for this sort of situation. I've also provided a link to a script for logging-in/interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running something on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command in `` will cause errors.
The command below will solve this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and is meant to be embedded inside your script and you are running under bash shell and also bash shell is available on the remote side, you may use declare to transfer local context to remote. Define variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then inside a here document read by bash -s you can use declare -p to transfer the variable values and use declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and functions are properly transferred. You may just write the script locally, usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short scripts and is safe - should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variable values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if you're trying to run a script on a remote linux machine using plink or ssh. It will work if the script has multiple lines on linux.
However, if you are trying to run a batch script located on a local linux/windows machine and your remote machine is Windows, and it consists of multiple lines, then
plink root@MachineB -m local_script.bat
won't work. Only the first line of the script will be executed. This is probably a limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines together using the "&&" separator, as follows, in your local_script.bat file (per https://stackoverflow.com/a/8055390/4752883):
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out by @JasonR.Coombs (https://stackoverflow.com/a/2732991/4752883) with:
`plink root@MachineB -m local_script.bat`
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch script which encapsulates the plink command, as follows, as pointed out by @Martin (https://stackoverflow.com/a/32196999/4752883):
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This expect script SSHes into a target remote machine and runs a command there. Do not forget to install expect before running it (on a Mac: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
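Combining those flags, a single password prompt for several hosts might look like this (host names illustrative):
runoverssh -g -s localscript.sh deploy host1 host2 host3   # 'deploy' is the remote user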
If it's a single script, the above solutions are fine.
I would set up Ansible to do the job. It works in the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
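A minimal sketch of the Ansible approach (inventory and script names illustrative; the script module copies a local script to each host and runs it there):
# inventory file listing the target hosts
printf 'host1\nhost2\nhost3\n' > inventory
# run the local script on every host in the inventory
ansible all -i inventory -m script -a ./localscript.sh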
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands, and it supports multiple lines using the '\n' escape character.
example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Don't forget to add bash -s.
I created a solution that works better for me by combining the use of a heredoc from Yarek T's answer with the piped-cat method from cglotr's answer, along with some other tricks for non-interactive login (using sshpass), using variables from the local and remote host in the script, and enabling sudo commands. The code is longer just because it includes some additional tricks that are likely desired, but the original questioner didn't ask for them.
The problem I have with Yarek's answer is that all the warnings and commands in the heredoc print to the screen. The problem I have with cglotr's answer is that it requires a script file and a complex command with additional interaction to execute the script. With my solution, I write a script that does everything by simply calling the script with the remote host IP address as the first argument, like this:
./MYSCRIPT REMOTE_IP_ADDRESS
The script to be run on the remote host is saved to a variable within the script on the local host using a heredoc so that you don't need to do any quote escaping. Then, the variable containing the script is echo piped to sshpass. Be sure to indent the commands with tabs and not spaces (you'll get spaces instead of tabs when you copy the code). Here is an example of the remote script within the local script.
#!/bin/bash
# Input argument 1 should be the target host IP address (required)
RX_IP="/(\b25[0-5]|\b2[0-4][0-9]|\b[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}/"
IS_IP=$(echo $1 | sed -nr "${RX_IP}p" | wc -l)
if (( $IS_IP )); then
USERNAME=remoteuser
HOSTNAME=$1
# Export the SSH password to environment variable for sshpass and sudo.
# The space before the command prevents saving the command to history.
export SSHPASS=mypassword;
read -r -d '' SCRIPT <<-EOS
# Enable sudo commands with the following command.
# The space before echo prevents saving the command to history.
echo $SSHPASS | sudo -Sv
# Do stuff here. Escape variables to be be accessed on the remote host.
# For example, escape print variable in an awk command:
# This command lists all USB block device partitions.
ls -l /dev /dev/mapper | awk '/^b/ && /sd[a-z][1-9]/ {print \$10}'
exit
EOS
echo "$SCRIPT" | sshpass -e ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${USERNAME}#${HOSTNAME} &>/dev/null
echo 'DONE'
else
echo "Missing IP address of target host."
echo "Usage: ./SCRIPT_NAME IP_ADDRESS
fi
You need to install sshpass on the local host like this (for Debian based distros).
sudo apt install sshpass
There is another approach: you can copy your script to the remote host with the scp command, then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.
