How to pass a variable to a remote bash script - bash

I have a local bash script which is used to invoke a bash script on a remote server and get some reports from that server.
The way I currently call this in local_script.sh is:
ssh remoteuser@ip "/bin/bash remote_script.sh"
Now I want to set a date variable in local_script.sh, and that variable needs to be available in remote_script.sh as well.
Please give me some ideas.
EDIT:
Please see my test script:
[user@localserver]$ ssh remoteuser@ip "/bin/bash remote_script.sh $test_var"
And my remote script:
[user@remoteserver]$ cat remote_script.sh
#!/bin/bash
echo $test_var > test_var.log
But the test_var.log file on the remote server is empty after running the script.

The remote server doesn't know your local variables; you can only pass the value of the variable from local to remote as an extra argument on the ssh command line:
ssh remoteuser@ip "/bin/bash remote_script.sh $variable"
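For example, a minimal sketch (an assumption on my part: the remote script reads the value from its first positional parameter, $1, instead of a variable named test_var):
# local_script.sh (sketch)
test_var=$(date +%Y-%m-%d)   # the date value to hand over
ssh remoteuser@ip "/bin/bash remote_script.sh $test_var"
# remote_script.sh on the remote server (sketch)
#!/bin/bash
echo "$1" > test_var.log   # $1 holds the value passed on the command line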

You have to add the variable to the environment of the executed command. That can be done with the var=value cmd syntax.
But since the line you pass to ssh will be evaluated on the remote server, you must ensure the variable is in a format that is reusable as shell input. Two ways come to mind depending on your version of bash:
With bash 4.4 or newer, you can use the Q operator in ${parameter@operator}:
local script:
foo="abc'def \"123\" *"
ssh remoteuse#ip "foo=${foo#Q} /bin/bash remote.sh"
remote script:
printf '<%s>\n' "$foo"
output:
$ ./local_script.sh
<abc'def "123" *>
If you don't have bash 4.4 or newer, you can use the %q directive to printf:
ssh remoteuse#ip "foo=$(printf '%q' "$foo") /bin/bash remote.sh"

Related

Bash :: SU command removes Variables from SCP Command?

I have a Bash (ver 4.4.20(1)) script running on Ubuntu (ver 18.04.6 LTS) that generates an SCP error. Yet, when I run the offending command on the command line, the same line runs fine.
The script is designed to SCP a file from a remote machine and copy it to /tmp on the local machine. One caveat is that the script must be run as root (yes, I know that's bad, this is a proof-of-concept thing), but root can't do passwordless SCP in my environment. User me can do passwordless SCP, so when root runs the script, it must "borrow" me's public SSH key.
Here's my script, slightly abridged for SO:
#!/bin/bash
writeCmd() { printf '%q ' "$@"; printf '\n'; }
printf -v date '%(%Y%m%d)T' -1
user=me
host=10.10.10.100
file=myfile
target_dir=/path/to/dir/$date
# print command to screen so I can see what is being submitted to OS:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
Output is:
su - me -c scp-Cme@10.10.10.100://.txt/tmp/.
It looks like the ' ' characters are not being printed, but for the moment, I'll assume that is a display thing and not the root of the problem. What's more serious is that I don't see my variables in the actual SCP command.
What gives? Why would the variables be ignored? Does the su part of the command interfere somehow? Thank you.
(NOTE: This post has been re-edited from its earlier form, if you're wondering why the below comments seem off-topic.)
When you run:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
you'll see that its output is (something equivalent to -- may change version-to-version):
su - me -c scp\ -C\ me@\$host:/\$target_dir/\$file.txt\ /tmp/.
Importantly, none of the variables have been substituted yet (and they're emitted escaped to show that they won't be substituted until after su runs).
This is important, because only variables that have been exported -- becoming environment variables instead of shell variables -- survive a process boundary, such as that caused by the shell starting the external su command, or the one caused by su starting a new and separate shell interpreter as the target user account. Consequently, the new shell started by su doesn't have access to the variables, so it substitutes them with empty values.
Sometimes, you can solve this by exporting your variables: export host target_dir file, and if su passes the environment through that'll suffice. However, that's a pretty big "if": there are compelling security reasons not to pass arbitrary environment variables across a privilege boundary.
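A minimal sketch of that export variant (reusing the variables defined in the script above; whether the values actually survive depends on how su and PAM are configured on your system):
export host target_dir file
# only works if su passes the exported environment through to the new shell
su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'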
The safer way to do this is to build a correctly-escaped command with the variables already substituted:
#!/usr/bin/env bash
# ^^^^- needs to be bash, not sh, to work reliably
cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
su - me -c "$cmd_v"
Using printf %q is protection against shell injection attacks -- ensuring that a target_dir named /tmp/evil/$(rm -rf ~) doesn't delete your home directory.
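If you want to inspect the exact string handed to su before running it, a small sketch reusing the cmd_v built above:
echo su - me -c "$cmd_v"   # prints the fully escaped command line for review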

Append to a remote environment variable for a command started via ssh on RO filesystem

I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on SuperUser: the accepted answer wants me to create an environment file which gets interpreted by ssh, and another question can be solved by creating and copying a script file which gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?
How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (re comments to prove this should do what it's supposed to given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
WFM here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print os.environ["TEST"]'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print os.environ["TEST"]'\'
foo:bar
Note the:
single quotes around the entire command, to avoid expanding it locally
embedded single quotes are thus escaped in the signature '\'' pattern (another way is '"'"')
double quotes in assignment (only required if the value has whitespace, but it's good practice to not depend on that, especially if the value is outside your control)
avoiding $VAR in the command itself: with e.g. echo "$TEST", the shell would expand it to the old value before the temporary assignment takes effect
a convenient way around this is to make var replacement a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar

Source environment variables and execute bash before running local script on remote machine [duplicate]

I'm trying to execute a local script on a remote host over an ssh connection. I've read a document about its syntax. But my issue is that, before running the script, I need to execute bash and source some environment variables.
This looks appropriate to me, but it has no source command:
ssh [user]@[server] 'bash -s' < [local_script]
I've tried such a thing with a heredoc (EOF), but it didn't work for me either:
#!/bin/bash
/usr/bin/ssh "$user#$$host" <<EOF
bash -s
source /dir/to/profile/.profile
source /dir/to/env/set/env.sh
/path/to/script/script.sh stop
EOF
Do you have an idea for this type of implementation of remote commands? I have to source the profile before the environment settings, otherwise it gives an exception. But the main problem is about source.
Maybe it is an easy question, but I don't have any ideas. Thank you in advance for all your answers.
eval can accomplish this for you:
eval $(cat /path/to/environment) ./script.sh
You can source multiple files this way too, if you know their paths:
eval $(cat /path/to/environment1 /path/to/environment2) ./script.sh
Or iterate over a directory:
eval $(cat $(find /path/to/environments -type f)) ./script.sh
Stick SSH in front of it if you're doing this remotely to solve your specific problem:
# note the single quotes, otherwise we'd source our local environment
ssh user@host 'eval $(cat /path/to/environment) ./remote_script.sh'
# If it's a local environment file you want to source, do the same
# command with double quotes so the substitution happens locally:
ssh user@host "eval $(cat /path/to/environment)" ./remote_script.sh
If you want to source a remote environment into your own, then use eval
locally like so:
eval "$(ssh user@host cat /path/to/environment)" ./local_script.sh
This allows you to source an external file, setting its environment variables in the same forked instance that calls your script (making them available).
Consider a script file that looks like this:
#!/bin/sh
echo "$VAR1"
echo "$VAR2"
test_function
Now consider your environment file looks like this:
# Environment Variables
VAR1=foo
VAR2=bar
test_function()
{
echo "hello world"
}
You'd see this output if you use the eval example:
foo
bar
hello world
Alternatively, if you just open up the script you wrote, you can source
these environment variables directly from within it, and then you can just
call the script normally without any tricks:
#!/bin/sh
# Source our environment by starting with a period and then the full
# path to the environment file. You can also use the 'source' keyword
# here instead of the period (.).
. /path/to/environment
echo "$VAR1"
echo "$VAR2"
test_function
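If the script itself lives on your local machine while the environment file lives on the remote host, the same pattern still works when the script is streamed to a remote bash; a sketch (user, host and script name are placeholders):
ssh user@host 'bash -s' < local_script.sh   # the ". /path/to/environment" line is resolved on the remote side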
I know this is old, but I just wanted to add that it can be done without an extra file: use '\' to keep the variable and command substitution from being expanded locally, so they are evaluated on the remote side, i.e.:
ssh me@somehost "RMTENV=\$(ls /etc/profile) && source \$RMTENV"
I use this to execute remote java commands and need the ENV to find java.
I fixed the problem by writing another template script that sources the environment variables and runs the script:
PROFILE=/dir/to/profile/.profile
source $PROFILE
cd /dir/to/script
/bin/bash script $1
Note that when a file is read with the source command, its #!/bin/bash line has no effect; the file is executed by the current (bash) shell.
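For reference, a sketch of how a wrapper like the one above might be invoked from the local side (assuming it is saved on the remote host as /dir/to/script/wrapper.sh):
ssh user@host '/bin/bash /dir/to/script/wrapper.sh stop'   # "stop" arrives in the wrapper as $1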

call shell-sourced function remotely by ssh

I have a function that is sourced through the .bashrc file on remote host A.
If I use "which" on remote host A, I get the function body as output.
I need to run it through ssh remotely from another host, B.
Currently, all my attempts end with a "command not found" error.
I already tried passing
ssh A "source /home/user/.bashrc && function "
to ssh; this did not help.
I also tried forcing ssh to assign a pseudo-tty with the -t option. The SHELL on both hosts is bash.
Running ssh localhost on host A still keeps the status function available.
Output:
[user@hostA ~]$ which status
status is a function
status ()
{
dos -s $*
}
[user@hostB ~]$ ssh hostA " source /home/user/deploy/bin/_bashrc && status all "
ls: : No such file or directory
bash: status: command not found
Basically, you can't. To do that you need to copy the sourced file onto the remote host and source it there. Note that your file may be sourcing some other files as well… This is almost like running a local program on the remote host.
The trick is to get the remote end to properly load your file containing the function into the shell environment.
I found with bash that the following works...
Put your function into .bashrc on the remote:
foo_func()
{
echo Hello World
}
Then on the local side:
ssh user@remote bash -l -c foo_func
The bash -l instructs bash to run as a login shell (sourcing startup files) and then the -c tells the shell to execute the string foo_func.
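Applied to the earlier status example, that would look something like this (a sketch; it assumes status is defined in a file that the remote login shell sources):
ssh hostA 'bash -l -c "status all"'   # -l makes bash read the login startup files that define status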

How to force ssh to execute bash instead of the user default on the remote machine?

I want to execute a bash script over ssh, but when I try this, it's run with ksh, which is the user's default shell.
I can't change that default.
So, how can I trick ssh to execute my script with bash instead of the default shell?
Make this the first line of your script:
#!/usr/bin/env bash
Edit: As per this, the utility of /usr/bin/env is dubious. So, you probably want:
#!/bin/bash
Replace /bin/bash with the actual path of bash executable.
You can call your script explicitly with bash:
ssh <ssh-opts> bash <scriptname>
This way ksh is still executed at login, but inside ksh you start a bash that executes your script.
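Alternatively, if the script lives on your local machine, you can stream it to a bash started on the remote side; a sketch:
ssh <ssh-opts> user@host 'bash -s' < local_script.sh   # the login ksh starts bash, which reads the script from stdin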
