Bash Script can't access proper environment variables in GCP Instance - bash

I have tried everything to get a manually installed command to execute from a bash script; the command runs fine in my user shell (yigit@instance-1). I'm starting to think that GCP instances can't access the proper environment variables.
The command I installed is called Task Spooler and is executed as ts in the shell. I built it from the tar package using its Makefile, and make install placed it in the following paths:
ts is /usr/local/bin/ts
ts is /usr/bin/ts
ts is /bin/ts
So my shell script is as follows:
#!/bin/bash
echo $PATH
ts python3 somepyscript.py
Looking at the PATH output, there doesn't seem to be any mismatch in the environment variables that would keep the script from finding the command. However, the output I get is:
/home/yigit/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
./tm_pipeline.sh: line 10: ts: command not found
As I said, the command works fine in the user shell I connect to over SSH. I can't understand why this is happening in the VM instance... I know GCP offers start-up scripts for VMs in the settings, and supports containerizing applications via Cloud Build etc. Could this be interference from those, or is there something else I can do? Thanks for any help in advance.
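Not an answer from the thread, but a quick way to narrow this down is to have the script itself report which ts (if any) it can resolve, and to bypass PATH lookup entirely; the absolute path below assumes the /usr/local/bin/ts location listed above:
#!/bin/bash
# show what this non-interactive shell resolves "ts" to, if anything
command -v ts || echo "ts not found on PATH: $PATH"
# confirm the binary really exists and is executable
ls -l /usr/local/bin/ts
# bypass PATH lookup entirely (path taken from the install locations above)
/usr/local/bin/ts python3 somepyscript.py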

Related

shell script behaves differently when called using paramiko vs interactive shell [duplicate]

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I get
sh: sesu: not found
When I run a simple ls command it gives the desired output. Only the sesu command is not working.
This is how my code looks:
import paramiko
host = host
username = username
password = password
port = port
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin,stdout,stderr=ssh.exec_command('sesu test')
stdin.write('Password')
stdin.flush()
outlines=stdout.readlines()
resp=''.join(outlines)
print(resp)
SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (might be) sourced than in your regular interactive SSH session (in particular, for non-interactive sessions, .bash_profile is not sourced), and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
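As a quick way to see this difference from a terminal (a sketch, not part of the original answer; substitute your own user and host), compare the PATH that a plain remote command gets with the PATH of a login shell:
# PATH seen by a non-interactive, non-login remote command (like an "exec" channel)
ssh user@host 'echo $PATH'
# PATH after the login startup files (/etc/profile, ~/.bash_profile) have been sourced
ssh user@host 'bash --login -c "echo \$PATH"'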
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command. E.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can use the which sesu command in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
Try running the script explicitly via login shell (use --login switch with common *nix shells):
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. The syntax for that depends on the remote system and/or the shell. On common *nix systems, this works:
PATH="$PATH:/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using the pseudo terminal to automate a command execution can bring you nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch

pycharm: run python program after executing bash script

My Python program expects a few environment variables to be populated first (API keys etc.).
I have a bash script that fetches the temporary API keys and sets them using export, and I'm able to call the script successfully.
If I use PyCharm's run option, the environment variables are understandably missing.
Other than setting the environment variables manually in the PyCharm run configuration, can I ask it to run the bash script and then execute my Python file in the same bash context?
I have a script populate_env available in my PATH.
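One common pattern (a sketch rather than an answer from the thread; my_program.py is a placeholder for the actual file) is a small wrapper that sources the variables and then starts Python from that same shell, so the interpreter inherits them:
#!/bin/bash
# populate_env must be *sourced*, not executed, so its exports land in this shell
source populate_env
# exec replaces the shell with Python, which inherits the exported variables
exec python my_program.py
The wrapper can then be run instead of invoking the Python file directly.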

SSH heredoc to run Perl script on another server can't find right paths

I have a Perl program on server_B which uses Perl DBI and 5.010 and runs fine from the server_B terminal. I run it from a shell script which first prepares some arguments and then passes them to the Perl program; all works fine.
I need to run a shell script on server_A that will execute that script on server_B. This is because the Perl program creates several files that I want to SFTP back over to server_A. This is the script I'm running on server_A:
ssh server_B <<- EOF
perl/update.sh
EOF
There is some strange behavior which I'm trying to understand:
The script (update.sh) on server_B runs mysql, which is not installed on server_A (which is why I have to do this whole thing). If I run update.sh directly on server_B, I can call mysql just like that. But when I run the above script (on server_A) to ssh into server_B and run that script, it doesn't recognize mysql unless I change the file (on server_B) to call the full path /opt/mysql/client/bin/mysql (even though that file is already on server_B, with mysql installed). Does this mean server_B is picking up the PATH variable from server_A instead of using my PATH variable from server_B? Is it trying to run my programs from server_A on the files on server_B? How and why?
If I make the change above it executes the script, but when it hits Perl it says
Perl v5.10.0 required - this is only v5.8.8
Again, 5.10 works fine on server_B but the version of Perl on server_A is 5.8.8.
So I got rid of use 5.010; because it actually wasn't necessary, but then I have a similar problem with my modules (DBI and DBD::mysql). I get:
Can't locate DBI.pm in @INC (@INC contains.. [my Perl PATH from server_A])
at perlfile.pl line 4
I was expecting the ssh heredoc call to update.sh (from server_A) to run exactly as update.sh does if I call it on server_B, but instead it seems like it's trying to use my programs from server_A on server_B, which I find weird. Can anyone help me understand why it's happening? I feel like I'm misunderstanding something fundamental about how ssh works.
server_A is AIX with ksh
server_B is AIX with bash
Edit - since some of you voted to the effect that I haven't done my research, here's what else I've tried. I didn't mention these because I don't understand them fully; they are just guesses based on other SO posts and hunches. It'd be disingenuous if I gave the impression I knew what I was talking about.
If this is a duplicate, which question should I be looking at? If this is a "just read the manual situation", which one? What should I look for?
Read man ssh looking for clues related to environment variables, didn't find anything I understood
Tried running with -t
Tried running with -t -t
Did log in remotely with ssh and run it manually - this DOES work
Sourced my .bash_profile in the update script
Tried to re-assign PATH as the remote server's PATH when ssh'ing
Tried using a different delimiter for the heredoc
Tried < instead of <<
Tried without the "-"
Edit 2 - with Saigo's help below, I determined that in an interactive ssh session, if I echo $PATH I do get the target server's $PATH, but in a shell script I don't. That led me to this:
https://serverfault.com/questions/643333/different-bash-path-variables-when-using-ssh-script-vs-interactive-ssh
where I found out that scripted ssh doesn't call .bashrc, but interactive ssh does. So it looks like I was on the right track trying to source .bash_profile inside the scripted SSH heredoc; I just need .bashrc, not .bash_profile. However, I don't have a .bashrc on the target server. I do have .profile, but when I source that, I get an error stating it's for interactive bash sessions only. So now I'm just trying to find whichever file actually contains my $PATH variable, because it's apparently not .bashrc, as there isn't one there.
Edit 3 - I tried hard-coding the PATH variable into a file and sourcing that, and even then, when I echo $PATH I get the origin server's PATH variable. The file is being read in correctly; I also assigned another test variable and echoed that as part of the script. I tried sourcing /etc/profile as well, with no luck.
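One detail worth knowing here (standard bash heredoc behaviour, not something raised in the thread): with an unquoted delimiter, variables inside the heredoc are expanded by the local shell on server_A before ssh ever sends the text, so echo $PATH in the heredoc always prints server_A's PATH regardless of what the remote side sources. Quoting the delimiter sends the body to server_B verbatim:
# unquoted delimiter: $PATH below is expanded locally, on server_A
ssh server_B << EOF
echo $PATH
EOF
# quoted delimiter: the body is sent as-is and $PATH is expanded on server_B
ssh server_B << 'EOF'
echo $PATH
EOF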
I found a solution that works perfectly. I wasn't able to get it to work with ~/.bashrc, ~/.bash_profile, or ~/.ssh/rc, and I'm still not sure why it isn't picking up my environment variables even when sourcing these.
Since it works when I manually ssh in and then run the commands one by one, I used these arguments to run ssh as a forced interactive login shell:
ssh server_B bash --login -i "~/perl/update.sh"
See these for more:
https://superuser.com/questions/564926/profile-is-not-loaded-when-using-ssh-ubuntu
https://unix.stackexchange.com/questions/46143/why-bash-unable-to-find-command-even-if-path-is-specified-properly
Hope this is useful for someone in the future. Thank you for your assistance Saigo.

Source a script remotely via ssh

I want to run a remote program via ssh which requires a certain environment. Thus, before executing the program, I source a specific file that builds up the environment. If I'm logged onto the machine directly this is no problem, but when I execute the command via ssh
#!/bin/bash
foo=`ssh user@host "source ~/script.sh; ~/run/program"`
I get an error that indicates that the script was not sourced correctly. Do you know what I have to do in order to get the script sourced and the program executed in the same session?
EDIT:
I'm exporting LD_LIBRARY_PATH in the script, and the executable is complaining that it cannot find a shared object file. The default shell is bash. 'Session' is definitely not the right wording; I meant 'terminal environment'.
This may not be the cleanest way, but if you invoke bash with the interactive option (-i) and send commands through the standard input, it should work.
In particular,
foo=`ssh user@host bash -i <<EOF
source ~/script.sh
~/run/program
EOF`
It would be much easier if you have a script program_in_env.sh that does exactly the two steps you want:
#!/bin/bash
source ~/script.sh
~/run/program
Then you would just need to call ssh user@host program_in_env.sh.
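For completeness, a usage sketch (assuming program_in_env.sh sits in the remote home directory; adjust the path as needed):
# on the remote host, make the wrapper executable once
chmod +x ~/program_in_env.sh
# from the calling machine, capture its output exactly as before
foo=$(ssh user@host '~/program_in_env.sh')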
Good luck.
Thank you for all your time and help. I found the issue. The basic idea of how to execute the remote program was right from the beginning. When I tested my case locally on the machine, the current working directory was different; for some reason the cwd matters when sourcing this bash script.
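If the sourced script really does depend on the working directory, one way to make the remote call independent of where ssh drops you (the cd target here is only a guess based on the paths above) is to change directory first:
# cd to the directory script.sh expects, then source it and run the program
foo=$(ssh user@host 'cd ~/run && source ~/script.sh && ~/run/program')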

Hudson slave using msysGit shell

I have a Hudson slave on a Windows machine and need to execute some shell commands on it. I have put all the commands in the "execute shell" portion of the project, and the first line reads:
#!C:\msysgit\msysgit\bin\sh.exe
However, when running the project I get errors saying command not found, specifically for git, cd, make, rm, and presumably more. I feel like this is a simple thing to fix but can't figure it out. The script works fine when using msysGit on the machine, but I am having trouble doing it through Hudson. Any help would be appreciated. I need to be using msysGit, not Cygwin.
You could start by making that script display:
the username
the path
the $HOME (important for ssh operation, and not always set correctly on Windows)
and then see what those variables reference in the context of a Hudson job (a minimal sketch follows below).
They might not reflect/inherit the values of your current session.
And it can depend on how you installed msysGit.
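A minimal debug step for the "execute shell" section, along the lines suggested above (plain sh, nothing Hudson-specific assumed; USERNAME is the standard Windows variable):
#!C:\msysgit\msysgit\bin\sh.exe
# print the user and the environment the Hudson job actually sees
echo "USERNAME: $USERNAME"
echo "PATH: $PATH"
echo "HOME: $HOME"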
The OP Zack Lalanne mentions in the comments that he just needed:
#!C:\msysgit\bin\sh.exe --login -i
which means the bash session will inherit his environment variables, making the job much more likely to run than without the user's environment.
