Append to a remote environment variable for a command started via ssh on RO filesystem - bash

I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on SuperUser whose accepted answer wants me to create an environment file that gets interpreted by ssh, and another question that can be solved by creating and copying a script file which gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?

How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (re comments to prove this should do what it's supposed to given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
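The full round trip can be exercised without a remote host; in this sketch `env ... sh -c` stands in for the ssh session, with a different PYTHONPATH on each "side" (`/my/special/folder` and the other paths are hypothetical):

```shell
# Local value, as it would exist on your workstation:
export PYTHONPATH=/local/value
# "Remote" shell with its own PYTHONPATH; the single quotes around
# the command string mean the expansion happens there, not here:
env PYTHONPATH=/remote/value sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/my/special/folder printenv PYTHONPATH'
# prints: /remote/value:/my/special/folder
```

Note that it is the remote value, not the local one, that gets appended to, which is exactly the behavior the question asks for.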

WFM here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print os.environ["TEST"]'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print os.environ["TEST"]'\'
foo:bar
Note the:
single quotes around the entire command, to avoid expanding it locally
embedded single quotes are thus escaped with the signature '\'' pattern (another way is '"'"')
double quotes in assignment (only required if the value has whitespace, but it's good practice to not depend on that, especially if the value is outside your control)
avoiding $VAR in the command itself: if I typed e.g. echo "$TEST", the remote shell would expand it before the same-line variable assignment takes effect
a convenient way around this is to make var replacement a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar
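The first bullet, local versus remote expansion, can be checked without a remote host. In this sketch `env TEST=remote sh -c` stands in for the ssh session, with TEST set differently on each side:

```shell
TEST=local
# Double quotes: the LOCAL shell expands $TEST before the command
# string ever reaches the "remote" shell:
env TEST=remote sh -c "echo $TEST"   # prints: local
# Single quotes: the string crosses over intact and the "remote"
# shell does the expansion:
env TEST=remote sh -c 'echo $TEST'   # prints: remote
```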

Related

Bash :: SU command removes Variables from SCP Command?

I have a Bash (ver 4.4.20(1)) script running on Ubuntu (ver 18.04.6 LTS) that generates an SCP error. Yet, when I run the offending command on the command line, the same line runs fine.
The script is designed to SCP a file from a remote machine and copy it to /tmp on the local machine. One caveat is that the script must be run as root (yes, I know that's bad, this is a proof-of-concept thing), but root can't do passwordless SCP in my environment. User me can do passwordless SCP, so when root runs the script, it must "borrow" me's public SSH key.
Here's my script, slightly abridged for SO:
#!/bin/bash
writeCmd() { printf '%q ' "$@"; printf '\n'; }
printf -v date '%(%Y%m%d)T' -1
user=me
host=10.10.10.100
file=myfile
target_dir=/path/to/dir/$date
# print command to screen so I can see what is being submitted to OS:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
Output is:
su - me -c scp-Cme@10.10.10.100://.txt/tmp/.
It looks like the space characters are not being printed, but for the moment, I'll assume that is a display thing and not the root of the problem. What's more serious is that I don't see my variables in the actual SCP command.
What gives? Why would the variables be ignored? Does the su part of the command interfere somehow? Thank you.
When you run:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
you'll see that its output is (something equivalent to -- may change version-to-version):
su - me -c scp\ -C\ me@\$host:/\$target_dir/\$file.txt\ /tmp/.
Importantly, none of the variables have been substituted yet (and they're emitted escaped to show that they won't be substituted until after su runs).
This is important, because only variables that have been exported -- becoming environment variables instead of shell variables -- survive a process boundary, such as that caused by the shell starting the external su command, or the one caused by su starting a new and separate shell interpreter as the target user account. Consequently, the new shell started by su doesn't have access to the variables, so it substitutes them with empty values.
Sometimes, you can solve this by exporting your variables: export host target_dir file, and if su passes the environment through that'll suffice. However, that's a pretty big "if": there are compelling security reasons not to pass arbitrary environment variables across a privilege boundary.
The safer way to do this is to build a correctly-escaped command with the variables already substituted:
#!/usr/bin/env bash
# ^^^^- needs to be bash, not sh, to work reliably
cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
su - me -c "$cmd_v"
Using printf %q is protection against shell injection attacks -- ensuring that a target_dir named /tmp/evil/$(rm -rf ~) doesn't delete your home directory.
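A minimal sketch of what %q buys you, using the hostile directory name from above:

```shell
# A hypothetical hostile value containing a command substitution:
target_dir='/tmp/evil/$(rm -rf ~)'
# printf -v and %q are bash features:
printf -v quoted '%q' "$target_dir"
# When a second shell re-parses the escaped string, the $(...)
# arrives as literal text instead of executing:
bash -c "echo $quoted"
# prints: /tmp/evil/$(rm -rf ~)
```

Without the %q step, the inner shell would have run the `rm -rf ~` during its own expansion pass.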

How to properly do GIT pull in SH?

I need to create a script which will do a git pull. Currently my code is:
#!/bin/sh
# -*- coding: utf-8 -*-
cd "/var/www/project"
GIT_SSH_COMMAND='ssh -i /var/www/deploy/access-key -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
git fetch origin
git reset --hard origin/dev
The thing is that each time I get:
Could not create directory '/var/www/.ssh'. Failed to add the RSA host
key for IP address '104.192.143.1' to the list of known hosts
(/var/www/.ssh/known_hosts). git#bitbucket.org: Permission denied
(publickey). fatal: Could not read from remote repository.
Under my normal user the key works fine. Is it possible to somehow specify the known_hosts file of an existing system user?
The line reading:
GIT_SSH_COMMAND='ssh -i ...'
is intended to provide an ssh key and several ssh options to the ssh command when Git—or more specifically, git fetch—uses ssh to call up another Git at an ssh-based URL.
This line is defective (or another line is missing), because as written, it sets the variable without also exporting it into the environment for git fetch. If the variable already exists in the environment, this particular defect is not a problem, since already-exported variables continue to be exported; but, if as is the more typical case, the variable does not exist yet, this just creates the variable locally.
There are two different ways to fix it: either put the variable-setting in front of the command itself, all on one logical line, as in:
GIT_SSH_COMMAND='ssh ...' git fetch
Or, add an export command, either on the line that sets the variable, or shortly afterward:
export GIT_SSH_COMMAND='...'
or:
GIT_SSH_COMMAND='...'
export GIT_SSH_COMMAND
Note that setting the variable on the same line as the command means to set it in the environment of that particular command, but not any longer than that. Setting it with an explicit export means to set it now and keep it set that way until it is changed, or the shell exits, whichever occurs first:
$ USER=hello sh -c 'echo $USER'
hello
$ echo $USER
torek
$ export USER=hello
$ sh -c 'echo $USER'
hello
$ echo $USER
hello

ssh and chroot followed by cd in shell

How to execute a cd command after chroot to a remote node in a shell script?
For ex:
I need this.
ssh remote-node "chroot-path cd command here; extra commands"
Without chroot it works fine. If I put the command list in another shell script and execute that shell script after chroot, it seems to run okay.
But chroot seems to break cd?
Use printf %q to have your local shell (which must be bash) produce quoting that is guaranteed correct, and bash -c to explicitly invoke a remote shell compatible with that quoting under your chroot (%q can generate bash-only quoting when the input string contains special characters).
cmd_str='cd /to/place; extra commands'
remote_command=( bash -c "$cmd_str" )
printf -v remote_command_str '%q ' "${remote_command[#]}"
ssh remote-node "chroot /path/here $remote_command_str"
The bash -c is necessary because cd is a shell construct, and chroot directly exec's its arguments (with no shell) by default.
The printf %q and correct (single-quote) quoting for cmd_str ensures that the command string is executed by the final shell (the bash -c invoked under the chroot), not your local shell, and not by the remote pre-chroot shell.
Assuming by chroot-path you mean chroot /some/root/path.
chroot only takes a single command, and cd isn't a command; it is a shell built-in, so that won't work.
Additionally, only cd command here is being run (or attempted) under the chroot setup. Everything after the ; is running in the main shell.
A script is the easiest way to do what you want.
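chroot needs root, but the underlying behavior is easy to reproduce with env, which, like chroot, exec's its argument vector directly without interposing a shell (a sketch, not the actual chroot setup):

```shell
# A shell builtin such as cd can't be env's (or chroot's) command,
# because there is no shell to interpret it:
env cd /tmp 2>/dev/null || echo "cd: not an executable"
# Wrapping the command list in an explicit shell makes it work, and
# keeps everything after the ; inside that shell too:
env sh -c 'cd /tmp; pwd'
# prints: /tmp
```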

Source environment variables and execute bash before running local script on remote machine [duplicate]

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 6 years ago.
I'm trying to execute a local script on a remote machine over an ssh connection. I've read a document about the syntax of it. But my issue is that, before running the script, I need to execute bash and source environment variables.
This looks appropriate for me, but it has no source command:
ssh [user]#[server] 'bash -s' < [local_script]
I've tried such a thing with a here-document (EOF), but it didn't work for me either:
#!/bin/bash
/usr/bin/ssh "$user@$host" <<EOF
bash -s
source /dir/to/profile/.profile
source /dir/to/env/set/env.sh
/path/to/script/script.sh stop
EOF
Do you have an idea for this type of implementation of remote commands? I have to source the profile before the environment settings, otherwise it gives an exception. But the main problem is about source.
Maybe it is an easy question, but I don't have any ideas. Thank you in advance for all your answers.
eval can accomplish this for you:
eval $(cat /path/to/environment) ./script.sh
You can source multiple files this way too, if you know their paths:
eval $(cat /path/to/environment1 /path/to/environment2) ./script.sh
Or iterate over a directory:
eval $(cat $(find /path/to/environments -type f)) ./script.sh
Stick SSH in front of it if you're doing this remotely to solve your specific problem:
# note the quotes otherwise we'll source our local environment
ssh user@host "'eval $(cat /path/to/environment)' ./remote_script.sh"
# If it's a local environment you want to source, then do the same
# command without the quotes:
ssh user@host "eval $(cat /path/to/environment)" ./remote_script.sh
If you want to source a remote environment into your own, then use eval
locally, like so:
eval "$(ssh user@host cat /path/to/environment)" ./local_script.sh
This allows you to source an external file, setting its environment variables in the same forked instance that calls your script (making them available to it).
Consider a script file that looks like this:
#!/bin/sh
echo "$VAR1"
echo "$VAR2"
test_function
Now consider your environment file looks like this:
# Environment Variables
VAR1=foo
VAR2=bar
test_function()
{
echo "hello world"
}
You'd see the output if you use the eval example:
foo
bar
hello world
Alternatively, if you just open up your script you wrote, you can source
these environment variables directly from within it and then you can just
call the script normally without any tricks:
#!/bin/sh
# Source our environment by starting with a period and then following
# through with the full path to the environment file. You can also use
# the 'source' keyword here instead of the period (.).
. /path/to/environment
echo "$VAR1"
echo "$VAR2"
test_function
I know it is old, but I just wanted to add that it can be done without an extra file - use \ to escape the $ so that variables and command substitutions are evaluated remotely, i.e.:
ssh me@somehost "RMTENV=\$(ls /etc/profile) && source \$RMTENV"
I use this to execute remote java commands and need the ENV to find java.
I fixed the problem by writing another template script that sources the environment variables and runs the script:
PROFILE=/dir/to/profile/.profile
source $PROFILE
cd /dir/to/script
/bin/bash script $1
Note that source is a bash feature: with a #!/bin/sh shebang the source command may not work, so use the portable . instead.
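The template-script approach can be sketched end-to-end with throwaway files (hypothetical names, created in a temp directory):

```shell
dir=$(mktemp -d)
# The environment file: a variable plus a function, as in the
# examples above.
cat > "$dir/environment" <<'EOF'
VAR1=foo
test_function() { echo "hello world"; }
EOF
# The script sources it with . (portable) before using anything:
cat > "$dir/script.sh" <<EOF
#!/bin/sh
. "$dir/environment"
echo "\$VAR1"
test_function
EOF
chmod +x "$dir/script.sh"
"$dir/script.sh"
# prints:
# foo
# hello world
rm -rf "$dir"
```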

Problems using scala to remotely issue commands via ssh

I have a problem with scala when I want to create a directory remotely via ssh.
ssh commands via scala, such as date or ls, work fine.
However, when I run e.g
"ssh user#Main.local 'mkdir Desktop/test'".!
I get: bash: mkdir Desktop/test: No such file or directory
res7: Int = 127
When I copy-paste the command into my shell it executes without any problems.
Does anybody know what is going on??
EDIT:
I found this post : sbt (Scala) via SSH results in command not found, but works if I do it myself
However, the only thing I could take away from it is to use the full path for the directory to be created. However, it still does not work :(
Thanks!
ssh doesn't require that you pass the entire command line you want to run as a single argument. You're allowed to pass it multiple arguments, one for the command you want to run, and more for any arguments you want to pass that command.
So, this should work just fine, without the single quotes:
"ssh user#Main.local mkdir Desktop/test"
This shows how to get the same error message in an ordinary bash shell, without involving ssh or Scala:
bash-3.2$ ls -d Desktop
Desktop
bash-3.2$ 'mkdir Desktop/test'
bash: mkdir Desktop/test: No such file or directory
bash-3.2$ mkdir Desktop/test
bash-3.2$
For your amusement, note also:
bash-3.2$ mkdir 'mkdir Desktop'
bash-3.2$ echo echo foo > 'mkdir Desktop'/test
bash-3.2$ chmod +x 'mkdir Desktop'/test
bash-3.2$ 'mkdir Desktop/test'
foo
UPDATE:
Note that both of these work too:
Process(Seq("ssh", "user#Main.local", "mkdir Desktop/test")).!
Process(Seq("ssh", "user#Main.local", "mkdir", "Desktop/test")).!
Using the form of Process.apply that takes a Seq removes one level of ambiguity about where the boundaries between the arguments lie. But note that once the command reaches the remote host, it will be processed by the remote shell which will make its own decision about where to put the argument breaks. So for example if you wanted to make a directory with a space in the name, this works locally:
Process(Seq("mkdir", "foo bar")).!
but if you try the same thing remotely:
Process(Seq("ssh", "user#Main.local", "mkdir", "foo bar")).!
You'll get two directories named foo and bar, since the remote shell inserts an argument break.
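The double round of parsing is reproducible without ssh: letting `sh -c` play the remote shell shows the same splitting (throwaway directories in a temp dir):

```shell
dir=$(mktemp -d)
cd "$dir"
# The "remote" shell word-splits the string it receives, just as the
# shell on the ssh server does with the joined argument vector:
sh -c 'mkdir foo bar'        # two directories: foo and bar
# Quotes placed INSIDE the transmitted string survive the second
# round of parsing:
sh -c "mkdir 'foo bar'"      # one directory named "foo bar"
ls
```

So to create a remote directory with a space in the name, the quoting has to be embedded in the string that ssh transmits, not just in the local argument list.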
