shell script behaves differently when called using paramiko vs interactive shell [duplicate] - bash

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I get
sh: sesu: not found
When I run a simple ls command it gives me the desired output. Only the sesu command is not working.
This is what my code looks like:
import paramiko

# connection details (placeholders)
host = host
username = username
password = password
port = port

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command('sesu test')
stdin.write('Password\n')  # send the password sesu asks for (note the newline)
stdin.flush()
outlines = stdout.readlines()
resp = ''.join(outlines)
print(resp)

The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced than in your regular interactive SSH session (in particular, for non-interactive sessions, .bash_profile is not sourced), and/or different branches in the scripts are taken based on the absence or presence of the TERM environment variable.
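To see the difference for yourself, you can compare the PATH that the two session types report. A minimal sketch, assuming a bash login shell on the server; user@host is a placeholder:
ssh user@host 'echo "$PATH"'
ssh user@host 'bash --login -c "echo \$PATH"'
The first command runs in the bare "exec" channel environment; the second forces a login shell, so the profile scripts get a chance to extend the PATH.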
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command, e.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can run which sesu in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
Try running the script explicitly via a login shell (use the --login switch with common *nix shells):
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. The syntax for that depends on the remote system and/or the shell. On common *nix systems, this works:
PATH="$PATH:/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin, stdout, stderr = ssh.exec_command('sesu test', get_pty=True)
Using a pseudo terminal to automate command execution can bring nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
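For example, the same pattern of setting the environment in the command itself can be applied to the library search path (a sketch; /path/to/libs is a placeholder):
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/libs" /bin/sesu test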
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch

Related

Bash Script can't access proper environment variables in GCP Instance

I have tried everything to execute a manually installed command in a bash script that normally executes fine in my user shell (yigit@instance-1). I'm starting to think that GCP instances can't access the proper env variables.
The command that I've installed is called Task Spooler and is executed as ts in the shell. I set up the tar package using its Makefile, which installs to the following paths (via make install):
ts is /usr/local/bin/ts
ts is /usr/bin/ts
ts is /bin/ts
So my shell script is as follows:
#!/bin/bash
echo $PATH
ts python3 somepyscript.py
Looking at the output of the PATH env variable, there doesn't seem to be any mismatch of environment variables that would prevent access to the command. However, the output I get is:
/home/yigit/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
./tm_pipeline.sh: line 10: ts: command not found
As I said, the command works fine in my user shell when I connect over SSH. I can't understand why this is happening in the VM instance... I know GCP offers start-up scripts for VMs in the settings, and containerization of applications via Cloud Build etc. Could this be interference from those, or is there something I can do..? Thanks for any help in advance.
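A few diagnostic lines added to the failing script can help narrow this down. A sketch, assuming the install paths listed above:
#!/bin/bash
echo "$PATH"                                  # what the script itself sees
command -v ts || echo "ts not resolvable"     # how bash looks the command up
ls -l /usr/local/bin/ts /usr/bin/ts /bin/ts   # do the files exist and are they executable?
id                                            # which user is actually running the script?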

ssh does not find /usr/local/bin path

If I log in to my remote Mac via ssh -p22 jenkins@192.168.2.220 and type docker, it finds the executable, because it also finds the path /usr/local/bin if I check with echo $PATH. But if I do the same in a heredoc inside a file setup-mac.sh like
#!/bin/bash
ssh jenkins@192.168.2.220 '/bin/bash -s' << 'EOF'
echo $PATH
bash run-docker.sh
EOF
which I execute via bash setup-mac.sh, it does not find /usr/local/bin in PATH and consequently does not run docker, because the command is unknown.
On the remote Mac, there is a file run-docker.sh which is a bash file that calls docker commands, and it works if called locally.
To solve this issue, I've enabled PermitUserEnvironment in sshd_config on the Mac, but this did not work (though I only restarted the ssh service and not the whole machine). Meanwhile I've changed all docker commands in the remote run-docker.sh script to a variable ${DOCKER}, which I initialize at the beginning of the script to DOCKER=/usr/local/bin/docker, but this is only a workaround.
I guess that the problem is occurring because /usr/local/bin is being added to the PATH by the 'jenkins' user's personal initialization file (~/.bashrc). That is run only by interactive shells, and the shell run by ssh ... '/bin/bash -s' ... is not interactive.
You could try forcing the shell to be interactive by invoking it with /bin/bash -i -s, but that is likely to cause other problems. (The shell may try and fail to set up job control. The value of PS1 may appear in outputs. ...)
In general, you can't rely on the PATH being set correctly for programs. See Setting the PATH in Scripts - Scripting OS X for a thorough analysis of the problem. Correct way to use Linux commands in bash script is also relevant, but doesn't have much information.
A simple and reliable way to fix the problem permanently is to set the required PATH explicitly at the start of run-docker.sh. For example:
export PATH=/bin:/usr/bin:/usr/local/bin
You may need to add other directories to the path if run-docker.sh runs programs that are in other locations.
Another solution to the problem is to use full paths to commands in code. However, that makes the code more difficult to read, more difficult to maintain, and more difficult to test. Setting a safe PATH is usually a better option.
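For example, applying the explicit PATH inside the original heredoc could look like this (a sketch; the quoted 'EOF' keeps everything from being expanded locally):
ssh jenkins@192.168.2.220 '/bin/bash -s' << 'EOF'
export PATH=/bin:/usr/bin:/usr/local/bin
echo "$PATH"
bash run-docker.sh
EOF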

SSH heredoc to run Perl script on another server can't find right paths

I have a Perl program on server_B which uses Perl DBI and 5.010 and runs fine from the server_B terminal. I run it from a shell script which first prepares some arguments and then passes them to the Perl program, all works fine.
I need to run a shell script on server_A that will execute that script on server_B. This is because the Perl program creates several files that I want to SFTP back over to server_A. This is the script I'm running on server_A:
ssh server_B <<- EOF
perl/update.sh
EOF
There is some strange behavior which I'm trying to understand:
The script (update.sh) on server_B runs mysql, which is not installed on server_A (which is why I have to do this whole thing). If I run update.sh on server_B as-is, I can call mysql just like that. But when I run the above script (on server_A) to ssh into server_B and run update.sh there, it doesn't recognize mysql unless I change the file (on server_B) to call the full path /opt/mysql/client/bin/mysql (even though that file is already on server_B, with mysql installed). Does this mean server_B is picking up the PATH variable from server_A instead of using my PATH variable from server_B? Is it trying to run my programs from server_A on the files on server_B? How and why??
If I make the change above it executes the script, but when it hits Perl it says
Perl v5.10.0 required--this is only v5.8.8
Again, 5.10 works fine on server_B but the version of Perl on server_A is 5.8.8.
So I got rid of use 5.010; because it actually wasn't necessary, but then I have a similar problem with my modules (DBI and DBD::mysql). I get:
Can't locate DBI.pm in @INC (@INC contains.. [my Perl PATH from server_A])
at perlfile.pl line 4
I was expecting the ssh heredoc call to update.sh (from server_A) to run exactly as update.sh does if I call it on server_B, but instead it seems like it's trying to use my programs from server_A on server_B, which I find weird. Can anyone help me understand why it's happening? I feel like I'm misunderstanding something fundamental about how ssh works.
server_A is AIX with ksh
server_B is AIX with bash
Edit - since some of you voted to the effect that I haven't done my research, here's what else I've tried. I didn't mention these because I don't understand them fully; they are just guesses based on other SO posts and hunches. It'd be disingenuous if I gave the impression I knew what I was talking about.
If this is a duplicate, which question should I be looking at? If this is a "just read the manual situation", which one? What should I look for?
Read man ssh looking for clues related to environment variables, didn't find anything I understood
Tried running with -t
Tried running with -t -t
Logged in remotely with ssh and ran it manually - this DOES work
Sourced my .bash_profile in the update script
Tried to re-assign PATH as the remote server's PATH when ssh
Tried using a different delimiter for the heredoc
Tried < instead of <<
Tried without the "-"
Edit 2 - with Saigo's help below, I determined that in an interactive ssh session echo $PATH gives me the target server's $PATH, but in a shell script it doesn't. That led me to this:
https://serverfault.com/questions/643333/different-bash-path-variables-when-using-ssh-script-vs-interactive-ssh
where I found out that scripted ssh doesn't call .bashrc, but interactive ssh does. So it looks like I was on the right track trying to source .bash_profile inside the scripted SSH heredoc; I just need .bashrc, not .bash_profile. However, I don't have a .bashrc on the target server. I do have .profile, but when I source that, I get an error stating it's for interactive bash sessions only. So now I'm just trying to find whichever file contains my $PATH variable, because it's apparently not .bashrc, as there isn't one there.
Edit 3 - I tried hard-coding the PATH variable into a file and sourcing that, and even then, when I echo $PATH I get the origin server's PATH variable. The file is being read in correctly; I also assigned another test variable in it and echoed that as part of the script. I tried sourcing /etc/profile as well, with no luck.
I found a solution that works perfectly. I wasn't able to get it to work with ~/.bashrc, ~/.bash_profile, or ~/.ssh/rc, and I'm still not sure why it doesn't pick up my environment variables even when sourcing these.
Since it works when I manually ssh in and then run the commands one by one, I used these arguments to run ssh as a forced interactive login shell:
ssh server_B bash --login -i "~/perl/update.sh"
See these for more:
https://superuser.com/questions/564926/profile-is-not-loaded-when-using-ssh-ubuntu
https://unix.stackexchange.com/questions/46143/why-bash-unable-to-find-command-even-if-path-is-specified-properly
Hope this is useful for someone in the future. Thank you for your assistance Saigo.
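A side note on Edit 3 above: with an unquoted heredoc delimiter (<<- EOF), the shell on server_A expands $PATH before the text is ever sent to server_B, which would produce exactly the symptom described. Quoting the delimiter prevents the local expansion (a sketch):
ssh server_B << 'EOF'
echo "$PATH"
EOF
Here the variable reaches server_B literally and is expanded there.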

Source a script remotely via ssh

I want to run a remote program via ssh which requires a certain environment, so before executing the program I source a specific file that builds up the environment. If I'm logged onto the machine directly this is no problem, but when I execute the command via ssh
#!/bin/bash
foo=`ssh user@host "source ~/script.sh; ~/run/program"`
I get an error that indicates that the script was not sourced correctly. Do you know what I have to do in order to get the script sourced and the program executed in the same session?
EDIT:
I'm exporting LD_LIBRARY_PATH in the script, and the executable is complaining that it cannot find a shared object file. The default shell is bash. 'Session' is definitely not the right wording; I meant 'terminal environment'.
This may not be the cleanest way, but if you invoke bash with the interactive option (-i) and send commands through the standard input, it should work.
In particular,
foo=`ssh user@host bash -i <<EOF
source ~/script.sh
~/run/program
EOF`
It would be much easier if you have a script program_in_env.sh that does exactly the two steps you want:
#!/bin/bash
source ~/script.sh
~/run/program
Then you would just need to call ssh user@host program_in_env.sh.
Good luck.
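Putting that together, the original command substitution reduces to this (a sketch; it assumes program_in_env.sh is in the remote home directory and marked executable):
foo=$(ssh user@host '~/program_in_env.sh')
echo "$foo"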
Thank you for all your time and help. I found the issue. The basic idea of how to execute the remote program was right from the beginning. When testing my case locally on the machine, the current working directory was different. For some reason the cwd is important when sourcing this bash script.

Plink does not source bashrc or bash_profile on connect

I am trying to use plink as an ssh alternative on Windows, but I am finding that when plink connects to a remote Linux machine, it does not source .bash_profile or .bashrc.
Is there a different dot file I should create? Or is there another option?
For example, my bashrc file adds a directory to my path. This directory contains extra programs that I want to use, one being python.
This will not work:
plink host python
Where as this will:
plink host "source .bashrc;python"
When I use plink without a command parameter, it sources .bash_profile and everything works fine, but it appears that by merely sending a command, plink will not source either file.
Is there a workaround?
The accepted answer helped me solve the same problem using plink.
Here is a bit more detail that may help people in future:
When an ssh connection is made to run a single command using plink, bash is not invoked as an "interactive login shell", so it doesn't run /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile (see the bash manual pages).
For my purposes, I needed ~/.profile to run prior to the command passed in the plink command line.
A forced command can be added to the authorized_keys file for that key (see the sshd manual pages). A forced command (e.g. to run ~/.profile) stops sshd from running the command specified by plink, so to get it to do both, the forced command should execute a script that runs .profile and then the original plink command. The latter is stored in the environment variable $SSH_ORIGINAL_COMMAND, so your script can do
source .profile
$SSH_ORIGINAL_COMMAND
and you specify the script in the ~/.ssh/authorized_keys file as follows, before the key, on the same line:
command="source forced_command.script" ssh-rsa A3AABzaC1yc...
If you simply connect to a remote host via ssh or plink, it will start the login account's default shell. If that shell is bash, bash will automatically source .bash_profile.
If you connect to a remote host via ssh or plink asking for a command to be executed, ssh will try to execute just that command.
What you want to achieve can be done by using a ForcedCommand. See also here:
https://serverfault.com/questions/162018/force-ssh-to-use-a-specific-shell/166129#166129 and
http://oreilly.com/catalog/sshtdg/chapter/ch08.html
Set the forced command to be a script that does 2 things (much like the sketch above):
source the .bash_profile
run the original command (env vars $SSH_ORIGINAL_COMMAND or $SSH2_ORIGINAL_COMMAND)
