I'm trying to use GNU Parallel to run a script with several parameters on a remote machine.
This looks somewhat like:
parallel --onall -S remote-machine /shared/location/script.sh ::: param_a param_b
/shared/location/script.sh uses git, so I get this error:
git: command not found
If I log in to remote-machine manually and run /shared/location/script.sh with param_a or param_b, everything works fine. So I checked the $PATH variable and found that if I run something on remote-machine using GNU Parallel, the path looks like PATH=/usr/bin:/bin:/usr/sbin:/sbin. If I run it directly on the machine, it also includes /local/bin/git.
Why is it that way, and how can I overcome it?
Thanks in advance
GNU Parallel uses ssh for remote execution, so the $PATH is the same as you would see when you run a non-interactive ssh session:
ssh server echo '$PATH'
parallel -S server --onall {} '$PATH' ::: echo
The reason you see a different $PATH when you log in is that interactive sessions may set another $PATH.
You can force parallel to copy an environment variable using --env:
parallel --env PATH -S server --onall {} '$PATH' ::: echo
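Applied to the question's original command, that would be something like:
parallel --env PATH --onall -S remote-machine /shared/location/script.sh ::: param_a param_b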
I have a Bash (ver 4.4.20(1)) script running on Ubuntu (ver 18.04.6 LTS) that generates an SCP error. Yet, when I run the offending command on the command line, the same line runs fine.
The script is designed to SCP a file from a remote machine and copy it to /tmp on the local machine. One caveat is that the script must be run as root (yes, I know that's bad, this is a proof-of-concept thing), but root can't do passwordless SCP in my environment. User me can do passwordless SCP, so when root runs the script, it must "borrow" me's public SSH key.
Here's my script, slightly abridged for SO:
#!/bin/bash
writeCmd() { printf '%q ' "$@"; printf '\n'; }
printf -v date '%(%Y%m%d)T' -1
user=me
host=10.10.10.100
file=myfile
target_dir=/path/to/dir/$date
# print command to screen so I can see what is being submitted to OS:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
Output is:
su - me -c scp-Cme@10.10.10.100://.txt/tmp/.
It looks like the ' ' characters are not being printed, but for the moment, I'll assume that is a display issue and not the root of the problem. What's more serious is that I don't see my variables in the actual SCP command.
What gives? Why would the variables be ignored? Does the su part of the command interfere somehow? Thank you.
When you run:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
you'll see that its output is something equivalent to the following (the exact form may change from version to version):
su - me -c scp\ -C\ me@\$host:/\$target_dir/\$file.txt\ /tmp/.
Importantly, none of the variables have been substituted yet (and they're emitted escaped to show that they won't be substituted until after su runs).
This is important, because only variables that have been exported -- becoming environment variables instead of shell variables -- survive a process boundary, such as that caused by the shell starting the external su command, or the one caused by su starting a new and separate shell interpreter as the target user account. Consequently, the new shell started by su doesn't have access to the variables, so it substitutes them with empty values.
Sometimes, you can solve this by exporting your variables: export host target_dir file, and if su passes the environment through that'll suffice. However, that's a pretty big "if": there are compelling security reasons not to pass arbitrary environment variables across a privilege boundary.
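A quick way to see that boundary in action, using bash -c to stand in for the new shell that su starts:
$ host=10.10.10.100; bash -c 'echo "host=$host"'
host=
$ export host; bash -c 'echo "host=$host"'
host=10.10.10.100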
The safer way to do this is to build a correctly-escaped command with the variables already substituted:
#!/usr/bin/env bash
# ^^^^- needs to be bash, not sh, to work reliably
cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
su - me -c "$cmd_v"
Using printf %q is protection against shell injection attacks -- ensuring that a target_dir named /tmp/evil/$(rm -rf ~) doesn't delete your home directory.
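For illustration, here is the same construct with hypothetical values filled in; note that everything is already expanded and escaped before su ever runs:
$ host=10.10.10.100 target_dir=/path/to/dir/20230101 file=myfile
$ cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
$ printf -v cmd_v '%q ' "${cmd[@]}"
$ echo "$cmd_v"
scp -C me@10.10.10.100://path/to/dir/20230101/myfile.txt /tmp/.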
I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on SuperUser whose accepted answer wants me to create an environment file which gets interpreted by ssh, and another question which is solved by creating and copying a script file which gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?
How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (to show that this does what it's supposed to, given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
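Putting that together over ssh might look like this (the folder is a placeholder; the backslash stops the local shell from expanding $PYTHONPATH, so the remote /bin/sh does it instead):
ssh -t <machine> "/bin/sh -c '/usr/bin/env PYTHONPATH=\$PYTHONPATH:/my/special/folder python <script>'"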
Works for me here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print(os.environ["TEST"])'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print(os.environ["TEST"])'\'
foo:bar
Note the:
single quotes around the entire command, to avoid expanding it locally
embedded single quotes are thus escaped in the signature '\'' pattern (another way is '"'"')
double quotes in assignment (only required if the value has whitespace, but it's good practice to not depend on that, especially if the value is outside your control)
avoiding $VAR in the command: if I typed e.g. echo "$TEST", it would be expanded by the local shell before the remote assignment takes place
a convenient way around this is to make var replacement a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar
MATLAB runs on a host machine. Using the system call and Cygwin, I have to run some applications on a remote Linux system.
The problem is that, after calling the SSH command, the other commands are ignored. So
system('C:\cygwin64\bin\bash -l -c "ssh -t -t 10.0.0.127; cd /home/superuser/MAGIC_PATH"')
does not work
So I tried to change the directory sequentially after the SSH connection, but now the MATLAB script blocks and I have to type the commands manually, which is not the desired solution.
In MATLAB:
cygwin_path='C:\cygwin64\bin\bash';
binary_path='/home/superuser/MAGIC_PATH';
remote_IP='10.0.0.127';   % taken from the system call above
SSH_string=sprintf('%s -l -c "ssh -t -t %s &"',cygwin_path,remote_IP)
ChangeDIR_string=sprintf('%s -l -c "cd /home/superuser/"',cygwin_path)
So how can I change my code, or rather the system call, so that it automatically runs multiple commands and starts some applications as background jobs?
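One common approach (a sketch, untested; my_app and its log file are hypothetical names) is to hand ssh the whole remote command line as a single argument, so the directory change and the program start both happen on the remote side, and to background the job with nohup and redirected output so that ssh can return:
C:\cygwin64\bin\bash -l -c "ssh 10.0.0.127 'cd /home/superuser/MAGIC_PATH && nohup ./my_app > my_app.log 2>&1 &'"
This whole line can be passed to MATLAB's system() as before; the -t -t flags are dropped because a pseudo-terminal is not needed for background jobs.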
When I first ssh to the server and then run the command, it executes successfully:
root@chef:~# chef-solo -v
Chef: 11.10.0
But when I try to run it like this
ssh root@188.xxx.xxx.xxx -t -C "chef-solo -c /var/chef/solo.rb"
I receive an error:
bash: chef-solo: command not found
Why is this happening, and how can I solve this issue?
It is still a matter of $PATH and ssh, not chef-solo. Interactive and non-interactive sessions do not necessarily have the same value for the $PATH variable. The same ssh problem is described in another question here on Stack Overflow. You may also check the GNU bash manual for deeper insight into (non-)interactive and (non-)login shells. In short, the solution would be one of the following:
Run chef-solo using an absolute path. Here's how your command might look:
ssh root@188.xxx.xxx.xxx -t -C "/usr/local/ruby/bin/chef-solo -c /var/chef/solo.rb"
Tune the .bash configuration files to load the same $PATH variable for both interactive and non-interactive shells; see the sketch after the note below.
Note: To find out the absolute path, log in to the machine via ssh and run which chef-solo. (Don't know how experienced you are with Linux. Sorry if I'm underestimating your knowledge.)
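For the second option, a minimal sketch of what could go in ~/.bashrc on the remote machine (the ruby path is taken from the example above; this assumes a Debian-style bash build that sources ~/.bashrc for non-interactive ssh commands):
# Put the PATH export *before* the early return that stock .bashrc
# files use to skip setup in non-interactive shells:
export PATH="/usr/local/ruby/bin:$PATH"

case $- in      # typical guard found in stock .bashrc files
    *i*) ;;     # interactive: continue reading the rest of the file
    *) return;; # non-interactive: stop here
esac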
After designing a simple shell/bash based backup script on my Ubuntu machine and making it work, I've uploaded it to my Debian server, where it produces a number of errors when executed.
What can I do to turn on "error handling" on my Ubuntu machine to make it easier to debug?
ssh into the server
run the script by hand with either -v or -x or both
try to duplicate the user, group, and environment of the error run in your terminal window. If necessary, run the program with something like su -c 'sh -v script' otheruser
You might also want to pipe the result of the bad command, particularly if run by cron(8), into /bin/logger, perhaps something like:
sh -v -x badscript 2>&1 | /bin/logger -t badscript
and then go look at /var/log/messages.
Bash lets you turn on debugging selectively, or completely, with the set command.
The command set -x will turn on debugging anywhere in your script. Likewise, set +x will turn it off again. This is useful if you only want to see debug output from parts of your script.
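For instance, to trace only the suspicious part of a backup script (the paths here are hypothetical):
#!/bin/bash
echo "preparing backup"       # not traced

set -x                        # start echoing each command before it runs
tar czf /tmp/backup.tgz /etc  # the section under investigation
set +x                        # stop tracing

echo "done"                   # not traced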
Change your shebang line to include the trace option:
#!/bin/bash -x
You can also have Bash scan the file for errors without running it:
$ bash -n scriptname
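For example, a script with an unclosed if fails the check without being executed (the exact message varies between bash versions):
$ cat scriptname
if true; then echo ok
$ bash -n scriptname
scriptname: line 2: syntax error: unexpected end of file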