Creating cron entry on server using ssh login within shell script - bash

I need to upload a file (a bash script) to a remote server. I use the scp command. After the file has been copied to the remote server, I want to create a cron entry in the crontab file on the remote server.
However, the file upload and writing the cron entry need to occur within a bash shell script so that I only need to execute the script on my local machine and the script is copied to the remote host and the cron entry is written to the crontab.
Is there a way that I can use an ssh command, within the script, that logs me into the remote server, opens the crontab file and writes the cron entry?
Any help is very welcome.

I would:
extract the user's crontab with crontab -l > somefile
modify that file with the desired job
import the new crontab with crontab somefile
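A minimal sketch of those three steps run over a single ssh session (the host, schedule, and script path are placeholders):
ssh user@server '
  crontab -l > /tmp/mycron 2>/dev/null    # extract the existing crontab (empty file if none)
  echo "0 2 * * * /path/to/backup.sh" >> /tmp/mycron    # add the desired job
  crontab /tmp/mycron && rm /tmp/mycron   # import the new crontab and clean up
'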

I just did something like this where I needed to create a multiline crontab on a remote machine. By far the simplest solution was to pipe the content to the remote crontab command through ssh, like this:
echo "$CRON_CONTENTS" | ssh username#server crontab

mailo seemed almost right, but the command would be the second argument to the ssh command, like this:
ssh username@server 'echo "* * * * * /path/to/script/" >> /etc/crontab'
Or if your system doesn't automatically load /etc/crontab you should be able to pipe to the crontab command like this:
ssh username@server 'echo "* * * * * myscript" | /usr/bin/crontab'

Say you want to copy $local to $remote on $host and add an hourly job there to run at 14 minutes past every hour, using a single SSH session:
ssh "$host" "cat >'$remote' &&
chmod +x '$remote' &&
( crontab -l;
echo '14 * * * * $remote' ) | crontab" <"$local"
This could obviously be much more robust with proper error checking etc, but hopefully it should at least get you started.
The two keys here are that the ssh command accepts an arbitrarily complex shell script as the remote command, and gets its standard input from the local host.
(With double quotes around the script, all variables will be interpolated on the local host; so the command executed on the remote host will be something like cat >'/path/to/remote' && chmod +x '/path/to/remote' && ... With the single quotes, you could have whitespace in the file name, but I didn't put them in the crontab entry because it's so weird. If you need single quotes there as well, I believe it should work.)

You meant something like
ssh username@username.server.org && echo "* * * * * /path/to/script/" >> /etc/crontab
?

Related

Automated Bash Upload Script with Crontab Raspberry Pi Not Running

I'm attempting to have my Raspberry Pi use rsync to upload files to an SFTP server sporadically throughout the day. To do so, I created a bash script and installed a crontab to run it every couple of hours during the day. If I run the bash script, it works perfectly, but it never seems to run using crontab.
I did the following:
"sudo nano upload.sh"
Create the following bash script:
#!/bin/bash
sshpass -p "password" rsync -avh -e ssh /local/directory host.com:/remote/directory
"sudo chmod +x upload.sh"
Test running it with "./upload.sh"
Now, I have tried all the following ways to add it to crontab ("sudo crontab -e")
30 8,10,12,14,16 * * * ./upload.sh
30 8,10,12,14,16 * * * /home/picam/upload.sh
30 8,10,12,14,16 * * * bash /home/picam/upload.sh
None of these work based on the fact that new files are not uploaded. I have another bash script running using method 2 above without issue. I would appreciate any insight into what might be going wrong. I have done this on eight separate Raspberry Pi 3B that are all taking photos throughout the day. The crontab upload works on none of them.
UPDATE:
Upon logging the crontab job, I found the following error:
Host key verification failed.
rsync error: unexplained error (code 255) at rsync.c(703) [sender=3.2.3]
This error also occurred if I tried running my bash script without first connecting to the server via scp and accepting the certificate. How to get around this when calling rsync from crontab?
Check if the script is working properly at all (paste it in a shell).
Check if your crond.service is working properly - systemctl status crond.service.
Output should be "active (running)".
Then you can try adding a simple test job to cron: * * * * * echo "test" >> /path/which/you/want/file.txt
and check if this job works properly.
Thanks to the logging recommendation from Gordon Davisson in the comments, I was able to identify the problem.
The log showed the error mentioned in the question update above: rsync would choke on host key verification.
My solution: tell rsync not to check host key certificates. I simply changed the upload.sh bash file to the following:
#!/bin/bash
sshpass -p "password" rsync -avh -e "ssh -o StrictHostKeyChecking=no" /local/directory host.com:/remote/directory
Working perfectly now -- hope this helps someone.

Execute local script remotely and direct output back to local file

I have a shell script (script.sh) on a local linux machine that I want to run on a few remote servers. The script should take a local txt file (output.txt) as an argument and write to it from the remote server.
script.sh:
#!/bin/sh
file="$1"
echo "hello from remote server" >> $file
I have tried the following with no success:
ssh user#host "bash -s" < script.sh /path/to/output.txt
So if I'm reading this correctly...
script.sh is stored on the local machine, but you'd like to execute it on remote machines,
output.txt should get the output of the remotely executed script, but should live on the local machine.
As you've written things in your question, you're providing the name of the local output file to a script that won't be run locally. While $1 may be populated with a value, the file it references is nowhere to be seen from the perspective of where the script is running.
In order to get a script to run on a remote machine, you have to get it to the remote machine. The standard way to do this would be to copy the script there:
$ scp script.sh user@remotehost:.
$ ssh user@remotehost sh ./script.sh > output.txt
Though, depending on the complexity of the script, you might be able to get away with embedding it:
$ ssh user@remotehost sh -c "$(cat script.sh)" > output.txt
The advantage of this is of course that it doesn't require disk space to be used on the remote machine. But it may be trickier to debug, and the script may function a little differently if it's run in-line like this rather than from a file. (For example, positional options will be different.)
If you REALLY want to provide the local output file as an option to the script you're running remotely, then you need to include a remote path to get to that script. For example:
script.sh:
#!/bin/sh
outputhost="${1%:*}" # trim off path, leaving host
outputpath="${1#*:}" # trim off host, leaving path
echo "Hello from $(hostname)." | ssh "$outputhost" "cat >> '$outputpath'"
Then after copying the script over, call the script with:
$ ssh user@remotehost sh ./script.sh localhostname:/path/to/output.txt
That way, script.sh running on the remote host will send its output independently, rather than through your existing SSH connection. Of course, you'll want to set up SSH keys and such to facilitate this extra connection.
You can achieve it as below.
For saving output locally on Unix/Linux:
ssh remote_host "bash -s" < script.sh > /tmp/output.txt
If the first line of your script file is #!/bin/bash, you don't need to use bash -s on the command line; just give the full path to the script on the remote host:
ssh remote_host "/tmp/scriptlocation/script.sh" > /tmp/output.txt
For testing, execute any Unix command first:
ssh remote_host "/usr/bin/ls -l" > /tmp/output.txt
For saving output locally on Windows:
ssh remote_host "script.sh" > .\output.txt
Better yet, use plink.exe. Example:
plink remote_host "script.sh" > log.txt

Why ssh fails from crontab but succeeds when executed from a command line?

I have a bash script that does ssh to a remote machine and executes a command there, like:
ssh -nxv user@remotehost echo "hello world"
When I execute the command from a command line it works fine, but it fails when it is being executed as part of crontab (errorcode=255 - cannot establish SSH connection). Details:
...
Waiting for server public key.
Received server public key and host key.
Host 'remotehost' is known and matches the XXX host key.
...
Remote: Your host key cannot be verified: unknown or invalid host key.
Server refused our host key.
Trying XXX authentication with key '...'
Server refused our key.
...
When executing locally I'm acting as root, and crontab runs as root as well.
Executing 'id' from crontab and command line gives exactly the same result:
$ id
> uid=0(root) gid=0(root) groups=0(root),...
I do ssh from some local machine to the machine running crond. I have ssh key and credentials to ssh to crond machine and any other machine that the scripts connects to.
PS. Please do not ask/complain/comment that executing anything as root is bad/wrong/etc - it is not the purpose of this question.
keychain
solves this in a painless way. It's in the repos for Debian/Ubuntu:
sudo apt-get install keychain
and perhaps for many other distros (it looks like it originated from Gentoo).
This program will start an ssh-agent if none is running, and provide shell scripts that can be sourced and connect the current shell to this particular ssh-agent.
For bash, with a private key named id_rsa, add the following to your .profile:
keychain --nogui id_rsa
This will start an ssh-agent and add the id_rsa key on the first login after reboot. If the key is passphrase-protected, it will also ask for the passphrase. No need to use unprotected keys anymore! For subsequent logins, it will recognize the agent and not ask for a passphrase again.
Also, add the following as a last line of your .bashrc:
. ~/.keychain/$HOSTNAME-sh
This will let the shell know where to reach the SSH agent managed by keychain. Make sure that .bashrc is sourced from .profile.
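If it is not already there, the usual line for that in ~/.profile looks something like this (a sketch; adjust to your setup):
[ -f ~/.bashrc ] && . ~/.bashrc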
However, it seems that cron jobs still don't see this. As a remedy, include the line above in the crontab, just before your actual command:
* * * * * . ~/.keychain/$HOSTNAME-sh; your-actual-command
I am guessing that normally when you ssh from your local machine to the machine running crond, your private key is loaded in ssh-agent and forwarded over the connection. So when you execute the command from the command line, it finds your private key in ssh-agent and uses it to log in to the remote machine.
When crond executes the command, it does not have access to ssh-agent, so cannot use your private key.
You will have to create a new private key for root on the machine running crond, and copy the public part of it to the appropriate authorized_keys file on the remote machine that you want crond to log in to.
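A minimal sketch of that setup, run as root on the machine running crond (the remote host name is a placeholder; the empty passphrase is deliberate so cron can use the key unattended):
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""    # new key with no passphrase
ssh-copy-id -i /root/.ssh/id_ed25519.pub user@remotehost    # append the public key to authorized_keys on the remote machine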
Don't expose your SSH keys without passphrase. Use ssh-cron instead, which allows you to schedule tasks using SSH agents.
So I had a similar problem. I came here and saw various answers but with some experimentation here is how I got it work with sshkeys with passphrase, ssh-agent and cron.
First off, my ssh setup uses the following script in my bash init script.
# JFD Added this for ssh
SSH_ENV=$HOME/.ssh/environment
# start the ssh-agent
function start_agent {
    echo "Initializing new SSH agent..."
    # spawn ssh-agent
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    echo succeeded
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add
}

if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
        start_agent;
    }
else
    start_agent;
fi
When I login, I enter my passphrase once and then from then on it will use ssh-agent to authenticate me automatically.
The ssh-agent details are kept in .ssh/environment. Here is what that script will look like:
SSH_AUTH_SOCK=/tmp/ssh-v3Tbd2Hjw3n9/agent.2089; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2091; export SSH_AGENT_PID;
#echo Agent pid 2091;
Regarding cron, you can setup a job as a regular user in various ways.
If you run crontab -e as the root user it will set up a root user cron. If you run crontab -u davis -e it will add a cron job as userid davis. Likewise, if you run as user davis and do crontab -e it will create a cron job which runs as userid davis. This can be verified with the following entry:
30 * * * * /usr/bin/whoami
This will mail the result of whoami every 30 minutes to user davis. (I did crontab -e as user davis.)
If you try to see what keys are used as user davis, do this:
36 * * * * /usr/bin/ssh-add -l
It will fail, the log sent by mail will say
To: davis@xxxx.net
Subject: Cron <davis@hostyyy> /usr/bin/ssh-add -l
Could not open a connection to your authentication agent.
The solution is to source the env script for ssh-agent above. Here is the resulting cron entry:
55 10 * * * . /home/davis/.ssh/environment; /home/davis/bin/domythingwhichusesgit.sh
This will run the script at 10:55. Notice the leading . in the entry: it sources the ssh-agent environment file, just like the bash init script above does.
Yesterday I had a similar problem...
I have a cron job on one server which starts some action on another server using ssh... The problem was user permissions and keys...
In crontab I had
* * * * * php /path/to/script/doSomeJob.php
and it simply didn't work (it didn't have permissions).
I tried to run cron as the specific user that is connected to the other server:
* * * * * user php /path/to/script/doSomeJob.php
But with no effect.
Finally, I navigated to the script directory and then executed the php file, and it worked:
* * * * * cd /path/to/script/; php doSomeJob.php

How can I ssh directly to a particular directory?

I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right on the directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t then no prompt will appear.
If you don't add ; bash then the connection will get closed and return control to your local machine.
If you don't add bash --login then it will not use your configs because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
RequestTTY yes
RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands by && will make the next command run only when the previous one was successful (as opposed to using ;, which executes commands sequentially). This is particularly useful when needing to cd to a directory performing the command.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the rm command that follows.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command in a remote host, inside a specific directory from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so that no pseudo-terminal has to be allocated) and by passing the option -T to explicitly tell the ssh command that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH.
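As a sketch (the path is a placeholder), with CDPATH exported in the remote shell's startup file, a short cd after login is enough:
# in the remote ~/.bashrc
export CDPATH=.:/some/directory/somewhere/named
# then, after ssh somehost, from anywhere:
cd Foo    # cd searches CDPATH and lands in /some/directory/somewhere/named/Foo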
Going one step further with the -t idea, I keep a set of scripts calling the one below to go to specific places in my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I write here as may be useful for some people. In my solution you have to enter into the directory once and then every new ssh session goes to the same dir (after the first logout).
How to ssh to the same directory you have been in your last login.
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
    cd $(cat ~/.bash_lastpwd)
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, e.g. by using the SSH_CLIENT variable to decide whether to go into that directory or not, so you can differentiate between local logins and ssh, or even between different ssh clients.
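A possible sketch of that tweak in the remote ~/.bashrc: only jump back to the saved directory for SSH sessions (SSH_CLIENT is set by sshd, not for local logins):
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi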
Another way of going directly to a directory after logging in is to create an alias. When you log in to your system, just type that alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
After you log in to your system, type that alias (it works from any part of the system).
If this alias is not in .bashrc it will only work for the current session, so you can also add it to .bashrc to use it in the future.
$ myfolder => takes you to that folder
I know this has been answered ages ago but I found the question while trying to incorporate an ssh login in a bash script and once logged in run a few commands and log back out and continue with the bash script. The simplest way I found which hasnt been mentioned elsewhere because it is so trivial is to do this.
#!/bin/bash
sshpass -p "password" ssh user#server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case if you don't know this, you can use this to connect by specifying both user and host
ssh -t <user>@<Host domain / IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also append further commands to be executed on every login inside the double quotes, with a ; before each command.
Unfortunately, the suggested solution (of @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip of @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists).
SSH itself provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (this is - by default - your shell), I'd start there.
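Concretely, that means replacing the default remote command (your shell) with one that changes directory first and then hands control to a shell, as other answers here show; a minimal sketch:
ssh -t somehost 'cd /some/directory/somewhere/named/Foo && exec "$SHELL" -l'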
Simply modify your home directory with the command:
usermod -d /newhome username

How to use SSH to run a local shell script on a remote machine?

I have to run a local shell script (windows/Linux) on a remote machine.
I have SSH configured on both machine A and B. My script is on machine A which will run some of my code on a remote machine, machine B.
The local and remote computers can be either Windows or Unix based system.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (note the ' escaped heredoc)
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that the heredoc just sends stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user#host> ssh user2#host2 "echo \$HOME"
prints out /home/user2
while
user#host> ssh user2#host2 "echo $HOME"
prints out /home/user
Another example:
user#host> ssh user2#host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works. ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case echo is our command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values. Everything after user@host is sent as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values, thanks to local command-line evaluation, which defines those environment variables on the remote server; it then executes the bash -s command using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the user's ~/.ssh directory on hostB. That will allow passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be a ssh server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a TCL extension known as Expect, it is designed precisely for this sort of situation. I've also provided a link to a script for logging-in/interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd /path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running anything on the remote host, because the environment variables on the target and source hosts may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the backticks in the command will cause errors.
The command below will solve this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and is meant to be embedded inside your script and you are running under bash shell and also bash shell is available on the remote side, you may use declare to transfer local context to remote. Define variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then inside a here document read by bash -s you can use declare -p to transfer the variable values and use declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and functions are properly transferred. You may just write the script locally, usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short scripts and is safe - should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user#server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote linux machine using plink or ssh.
It will work if the script has multiple lines on linux.
However, if you are trying to run a batch script located on a local Linux/Windows machine, your remote machine is Windows, and the script consists of multiple lines, then
plink root@MachineB -m local_script.bat
won't work.
Only the first line of the script will be executed. This is probably a
limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines together using the "&&" separator as follows in your
local_script.bat file:
https://stackoverflow.com/a/8055390/4752883:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by @JasonR.Coombs (https://stackoverflow.com/a/2732991/4752883) with:
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch script which encapsulates the plink command as well, as pointed out here by @Martin: https://stackoverflow.com/a/32196999/4752883
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This expect script does ssh into a target remote machine and runs some command on the remote machine. Do not forget to install expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
If it's one script, the solutions above are fine.
I would set up Ansible to do the job. It works in the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
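For instance, a one-off ad-hoc run with Ansible's script module looks roughly like this (the inventory file and script path are placeholders); the module copies the local script to each host over SSH and runs it there:
ansible all -i inventory.ini -m script -a "./local_script.sh"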
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$@"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr. To write an inline command, use printf. It is useful for simple commands, and it supports multiline commands using the escape character '\n'.
Example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Don't forget to add bash -s.
I created a solution that works better for me by combining the use of a heredoc from Yarek T's answer with the piped cat method from cglotr's answer along with some other tricks for non-interactive login (using sshpass), using variables from the local and remote host in the script, and enabling sudo commands. The code is longer just because it includes some additional tricks that are likely desired, but the original questioner didn't ask for them.
The problem I have with Yarek's answer is that all the warnings and commands in the heredoc print to the screen. The problem I have with cglotr's answer is that it requires a script file and a complex command with additional interaction to execute the script. With my solution, I write a script that does everything by simply calling the script with the remote host IP address as the first argument, like this:
./MYSCRIPT REMOTE_IP_ADDRESS
The script to be run on the remote host is saved to a variable within the script on the local host using a heredoc so that you don't need to do any quote escaping. Then, the variable containing the script is echo piped to sshpass. Be sure to indent the commands with tabs and not spaces (you'll get spaces instead of tabs when you copy the code). Here is an example of the remote script within the local script.
#!/bin/bash
# Input argument 1 should be the target host IP address (required)
RX_IP="/(\b25[0-5]|\b2[0-4][0-9]|\b[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}/"
IS_IP=$(echo $1 | sed -nr "${RX_IP}p" | wc -l)
if (( $IS_IP )); then
    USERNAME=remoteuser
    HOSTNAME=$1
    # Export the SSH password as an environment variable for sshpass and sudo.
    # The leading space before the command prevents saving the command to history.
     export SSHPASS=mypassword;
    read -r -d '' SCRIPT <<-EOS
# Enable sudo commands with the following command.
# The leading space before echo prevents saving the command to history.
 echo $SSHPASS | sudo -Sv
# Do stuff here. Escape variables that are to be accessed on the remote host.
# For example, escape the print variable in an awk command:
# This command lists all USB block device partitions.
ls -l /dev /dev/mapper | awk '/^b/ && /sd[a-z][1-9]/ {print \$10}'
exit
EOS
    echo "$SCRIPT" | sshpass -e ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${USERNAME}@${HOSTNAME} &>/dev/null
    echo 'DONE'
else
    echo "Missing IP address of target host."
    echo "Usage: ./SCRIPT_NAME IP_ADDRESS"
fi
You need to install sshpass on the local host like this (for Debian based distros).
sudo apt install sshpass
There is another approach: you can copy your script to the remote host with the scp command and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.
