ssh and chroot followed by cd in shell

How to execute a cd command after chroot to a remote node in a shell script?
For ex:
I need this.
ssh remote-node "chroot-path cd command here; extra commands"
Without chroot it works fine. If I put the command list in another shell script and execute that script after chroot, it seems to run okay.
But chroot seems to break cd?

Use printf %q to have your local shell (which must be bash) generate quoting that is guaranteed correct, and bash -c to explicitly invoke a remote shell compatible with that quoting under your chroot (%q can generate bash-only quoting when the input contains special characters).
cmd_str='cd /to/place; extra commands'
remote_command=( bash -c "$cmd_str" )
printf -v remote_command_str '%q ' "${remote_command[@]}"
ssh remote-node "chroot /path/here $remote_command_str"
The bash -c is necessary because cd is a shell construct, and chroot directly exec's its arguments (with no shell) by default.
The printf %q and the correct (single-quote) quoting of cmd_str ensure that the command string is executed by the final shell (the bash -c invoked under the chroot), not by your local shell and not by the remote pre-chroot shell.
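To see what actually crosses the wire, you can inspect the intermediate string locally (a sketch; the exact escaping %q emits may vary between bash versions):
cmd_str='cd /to/place; extra commands'
remote_command=( bash -c "$cmd_str" )
printf '%q ' "${remote_command[@]}"; echo
# prints something like: bash -c cd\ /to/place\;\ extra\ commands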

Assuming by chroot-path you mean chroot /some/root/path.
chroot only takes a single command, and cd isn't a command: it's a shell built-in, so that won't work.
Additionally, only cd command here is being run (or attempted) under the chroot; everything after the ; runs in the main shell.
A script is the easiest way to do what you want.
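For example (a sketch with hypothetical paths; runme.sh is a made-up name): place the script inside the chroot on the remote node and pass it as chroot's command, so everything in it runs after the root switch:
cat > /some/root/path/runme.sh <<'EOF'
#!/bin/sh
cd /to/place || exit 1
pwd    # runs inside the chroot
EOF
chmod +x /some/root/path/runme.sh
ssh remote-node "chroot /some/root/path /runme.sh"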

Related

Bash :: SU command removes Variables from SCP Command?

I have a Bash (ver 4.4.20(1)) script running on Ubuntu (ver 18.04.6 LTS) that generates an SCP error. Yet, when I run the offending command on the command line, the same line runs fine.
The script is designed to SCP a file from a remote machine and copy it to /tmp on the local machine. One caveat is that the script must be run as root (yes, I know that's bad, this is a proof-of-concept thing), but root can't do passwordless SCP in my environment. User me can do passwordless SCP, so when root runs the script, it must "borrow" me's public SSH key.
Here's my script, slightly abridged for SO:
#!/bin/bash
writeCmd() { printf '%q ' "$@"; printf '\n'; }
printf -v date '%(%Y%m%d)T' -1
user=me
host=10.10.10.100
file=myfile
target_dir=/path/to/dir/$date
# print command to screen so I can see what is being submitted to OS:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
Output is:
su - me -c scp-Cme@10.10.10.100://.txt/tmp/.
It looks like the ' ' characters are not being printed, but for the moment I'll assume that is a display thing and not the root of the problem. What's more serious is that I don't see my variables in the actual SCP command.
What gives? Why would the variables be ignored? Does the su part of the command interfere somehow? Thank you.
(NOTE: This post has been re-edited from its earlier form, if you're wondering why the comments below seem off-topic.)
When you run:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
you'll see that its output is (something equivalent to -- may change version-to-version):
su - me -c scp\ -C\ me@\$host:/\$target_dir/\$file.txt\ /tmp/.
Importantly, none of the variables have been substituted yet (and they're emitted escaped to show that they won't be substituted until after su runs).
This is important, because only variables that have been exported -- becoming environment variables instead of shell variables -- survive a process boundary, such as that caused by the shell starting the external su command, or the one caused by su starting a new and separate shell interpreter as the target user account. Consequently, the new shell started by su doesn't have access to the variables, so it substitutes them with empty values.
Sometimes, you can solve this by exporting your variables: export host target_dir file, and if su passes the environment through that'll suffice. However, that's a pretty big "if": there are compelling security reasons not to pass arbitrary environment variables across a privilege boundary.
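A quick local demonstration of that process boundary (no su required; host is the variable from the script):
host=10.10.10.100
bash -c 'echo "child sees: $host"'   # prints an empty value: shell variable only
export host
bash -c 'echo "child sees: $host"'   # prints 10.10.10.100: now an environment variable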
The safer way to do this is to build a correctly-escaped command with the variables already substituted:
#!/usr/bin/env bash
# ^^^^- needs to be bash, not sh, to work reliably
cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
su - me -c "$cmd_v"
Using printf %q is protection against shell injection attacks -- ensuring that a target_dir named /tmp/evil/$(rm -rf ~) doesn't delete your home directory.
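A quick way to convince yourself (a sketch; the hostile value is made up, and %q's exact output format varies by bash version):
target_dir='/tmp/evil/$(rm -rf ~)'
cmd=( scp -C "me@10.10.10.100:/$target_dir/file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
printf '%s\n' "$cmd_v"
# the $( ... ) arrives escaped, so the shell run by su treats it as literal text, not a command substitution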

Shell script that does chroot and executes commands in the chroot

If in shell script I write
chroot /home/mayank/chroot/codebase
cd SBC
when I run this shell script, it does go into the chroot but does not execute the command cd SBC;
when I exit the chroot, it then executes cd SBC.
How can I achieve something that does chroot and executes commands in the chroot through a shell script?
When you run chroot without telling it what to do, it will try to start a chrooted interactive shell session. So your script "pauses" at that point, and when you are done with that interactive shell session, it continues outside the chroot again.
One of the quick and dirty options would be to abuse a here-document, like this:
chroot /home/mayank/chroot/codebase /bin/bash <<"EOT"
cd /tmp/so
ls -l
echo $$
EOT
Which takes all lines up to EOT and feeds them into the bash started through chroot. The double quotes around "EOT" ensure bash passes the content through without trying to expand variables and such; hence that echo $$ should print the PID of the inner chrooted bash.
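For contrast, with an unquoted delimiter the outer shell expands everything before chroot even starts, which is usually not what you want here:
chroot /home/mayank/chroot/codebase /bin/bash <<EOT
echo $$
EOT
# here $$ is replaced with the outer script's PID before the chrooted bash ever sees the line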
I found a solution:
chroot /work3/tmp_GU/$build_env/sbcbuild/chroot ./test.sh
Giving a script after the chroot path works fine for me. test.sh is present in the chroot folder, and all commands in test.sh are executed inside the chroot. So basically, give a command after chroot.
man chroot
chroot [OPTION] NEWROOT [COMMAND [ARG]...]
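So a one-liner matching that synopsis could look like this (a sketch, assuming SBC sits directly under the new root, since GNU chroot changes to the new / before running the command):
chroot /home/mayank/chroot/codebase /bin/bash -c 'cd SBC && ls -l'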

Append to a remote environment variable for a command started via ssh on RO filesystem

I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on SuperUser whose accepted answer wants me to create an environment file which gets interpreted by ssh, and another question which can be solved by creating and copying a script file that gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?
How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (re comments to prove this should do what it's supposed to given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
WFM here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print os.environ["TEST"]'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print os.environ["TEST"]'\'
foo:bar
Note the:
single quotes around the entire command, to avoid expanding it locally
embedded single quotes are thus escaped in the signature '\'' pattern (another way is '"'"')
double quotes in assignment (only required if the value has whitespace, but it's good practice to not depend on that, especially if the value is outside your control)
avoiding $VAR in the command itself: if I typed e.g. TEST="$TEST:bar" echo "$TEST", the $TEST in the command would be expanded by the shell before the temporary assignment takes effect
a convenient way around this is to make the variable replacement a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar
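Applied to the original PYTHONPATH question, the same pattern would look like this (a sketch; it assumes PYTHONPATH is actually exported by a file the remote shell reads for non-interactive sessions):
ssh -t <machine> 'PYTHONPATH="$PYTHONPATH:/my/special/folder" python <script>'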

Shell script used with tmux fails on zsh

I have following script:
#!/usr/bin/env bash
# set -xv
tmux new-window -n 'foo' 'source "$HOME/.rvm/scripts/rvm"; sleep 123' \;
On one machine it works perfectly, on the second I got an error:
sh: 1: source: not found
Of course, running the command from the shell works perfectly.
What is wrong? The machines have similar dot files...
source is not a POSIX command. Use . instead. The machine that fails is probably using dash as the system shell, not bash. The fact that tmux is executed from a bash script does not mean bash is used to execute the command given to new-window. tmux will use the system shell /bin/sh, so the command should not rely on non-POSIX features like the source synonym for ..
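So the script from the question should work on both machines once source is swapped for . (otherwise unchanged):
tmux new-window -n 'foo' '. "$HOME/.rvm/scripts/rvm"; sleep 123' \;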

How to force ssh to execute bash instead of the user default on the remote machine?

I want to execute a bash script with ssh, but when I try this it uses ksh, which is the user's default shell.
I can't change that default.
So, how can I trick ssh to execute my script with bash instead of the default shell?
Make this the first line of your script:
#!/usr/bin/env bash
Edit: As per this, the utility of /usr/bin/env is dubious. So, you probably want:
#!/bin/bash
Replace /bin/bash with the actual path of the bash executable.
You can call your script explicitly with bash:
ssh <ssh-opts> bash <scriptname>
This way, ksh is still executed at login, but inside ksh you start a bash that executes your script.
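If the script only exists locally, a common variant is to feed it to a remote bash over stdin (a sketch; this assumes the script itself does not read from stdin):
ssh <ssh-opts> bash -s < scriptname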
