Cannot use source to execute Bash script on zsh - bash

I have a Mac, on which I have installed and configured zsh to be my default shell. It has worked very well with simple Bash scripts. Until today.
I was trying to set up my AWS CLI access with MFA and used a script to do so. Since I have multiple accounts, I use an aws_account_numbers file to store the account numbers, with 400 permissions on it. (No real reason why, just felt like it and it works.)
I then source this file in the script and get to the part where I need to provide my MFA code, and that works too. The last step, where I export the access key, secret key and session token, works while the script is executing, but once it finishes, echoing the variables shows blanks, because they were never set in the current shell.
I know the workaround is to do source aws_mfa_access default to run the script, but it doesn't work and I get a bad substitution error. I've tried various combinations of #!/usr/bin/env bash, #!/bin/bash and #!/usr/bin/env zsh, but to no avail. What's going on?
[aws_account_numbers]
default_account_number=<number>
sandbox_account_number=<number>
production_account_number=<number>
[aws_mfa_access]
#!/usr/bin/env bash
source aws_account_numbers
AWS_ENV="${1}"
AWS_ACCT_NBR="${AWS_ENV}"_account_number
#export your aws access key that matches with your account, username
export AWS_PROFILE="${AWS_ENV}"
#If there are existing environment variables set, this can cause issues so we unset them first
unset AWS_SESSION_TOKEN
unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
#Set the serial number of your MFA token
MFA="arn:aws:iam::${!AWS_ACCT_NBR}:mfa/codingincircles"
#Get the code from the MFA device
echo "Please enter the MFA code for the ${AWS_ENV} account: "
read -r code
#Get the credentials from AWS and store the response in a variable
creds="$( aws sts get-session-token --serial-number "${MFA}" --token-code "${code}" )"
#Parse the response into separate variables
AWS_ACCESS_KEY_ID="$( echo "${creds}" | jq -r .Credentials.AccessKeyId )"
AWS_SECRET_ACCESS_KEY="$( echo "${creds}" | jq -r .Credentials.SecretAccessKey )"
AWS_SESSION_TOKEN="$( echo "${creds}" | jq -r .Credentials.SessionToken )"
#Display the keys to the user for reference/confirm proper working
echo "${access_key}"
echo "${secret_key}"
echo "${session_token}"
#Set the appropriate environment variables
export AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
export AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
export AWS_SESSION_TOKEN="${AWS_SESSION_TOKEN}"
EDIT: Two things of note:
The parameter substitution is not an issue at all. It works just fine as is, though the other methods suggested in the comments and answer work too. (I used the jq -r tip and it works like a charm! Thank you!)
I removed the source command in my script (on line 3) and was then able to invoke my script as source aws_mfa_access default without any errors of any kind. The exported variables persisted and I am able to use the CLI with no problems.
So why does this not like me using source in my script? I've also edited the script to reflect some of the changes.

Indirect parameter expansion works differently in zsh than it does in bash; in zsh you would write
MFA="arn:aws:iam::${(P)AWS_ACCT_NBR}:mfa/codingincircles"
However, you can avoid indirect parameter expansion in the first place by using an associative array to store the account numbers.
typeset -A account_numbers
account_numbers[default]=...
account_numbers[sandbox]=...
account_numbers[production]=...
Then
MFA="arn:aws:iam::${account_numbers[$1]}:mfa/codingincircles"
Finally, source is not an external command that means "execute a bash script here". It's a shell built-in that executes a file in the current shell. If the current shell is bash, it will attempt to execute the contents of a file as a bash script; if it's zsh, as a zsh script; if it's dash, as a dash script.
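You can see the difference with a small demo (indirect.sh is just a hypothetical file name):
$ cat indirect.sh
var=PATH
echo "${!var}"
$ bash indirect.sh       # a child bash process interprets ${!var} and prints the value of PATH
$ source indirect.sh     # your interactive zsh interprets it instead and complains: bad substitution
Run directly, the shebang (or the explicit bash) decides which shell interprets the file; run with source, the interpreter is whatever shell you are already sitting in, which is why the same script fails with a bad substitution error under zsh.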

Related

Using Expect to fill a password in a bash script

I am relatively new to working in bash, and one of the biggest pains with the script I have to run is that I get prompted for passwords repeatedly.
Does Expect require a separate file from this script? It seems that way from the tutorials, but they look rather complex and confusing for a new user. Also, how do I tell my script to auto-fill any prompt that says Password:? This script also runs with 3 separate unique variables every time it is called. How do I make sure those are still gathered while the password is filled in automatically?
Any assistance is greatly appreciated.
#!/bin/bash
zero=`echo $2`
TMPIP=`python bin/dgip.py $zero`
IP=`echo $TMPIP`
folder1=`echo $zero | cut -c 1-6`
folder2=`echo $zero`
mkdir $folder1
cd $folder1
mkdir $folder2
cd $folder2
scp $1@`echo $IP`:$3 .
Embedding expect code in a shell script is not too difficult. We have to be careful to get the quoting correct. You'll do something like this:
#!/usr/bin/env bash
user=$1
zero=$2
files=$3
IP=$(python bin/dgip.py "$zero")
mkdir -p "${zero:0:6}/$zero"
cd "${zero:0:6}/$zero"
export user IP files
expect <<'END_EXPECT' # note the single quotes here!
set timeout -1
spawn scp $env(user)@$env(IP):$env(files) .
expect {assword:}
send "$env(my_password)\r"
expect eof
END_EXPECT
Before you run this, put your password into your shell's exported environment variables:
export my_password=abc123
bash script.sh joe zero bigfile1.tgz
bash script.sh joe zero bigfile2.tgz
...
Having said all that, public key authentication is much more secure. Use that, or get your sysadmins to enable it, if at all possible.
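If key-based authentication is an option, a minimal sketch with standard OpenSSH tools (the user and host names are placeholders):
ssh-keygen -t ed25519                      # generate a key pair; accept the default file location
ssh-copy-id joe@remote.example.com         # install the public key in the remote authorized_keys
scp joe@remote.example.com:bigfile1.tgz .  # later scp/ssh runs use the key, no password prompt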

How to run code stored in environment variable explicitly

I want to store a password that is used by another script as an environment variable. These variables are stored in a separate file that is sourced whenever a new terminal window is opened.
The thing is, that is insecure, so I decided to store it in the Apple Keychain and prompt the user for a password instead.
THE MAIN PROBLEM: I don't want the lookup to run when the variables are sourced (in a new terminal window), but explicitly: any time I reference the variable, e.g. echo "$NAME", I want the function that produces the value to run then, not when a new terminal window opens.
.bash_variables:
get_pw()
{
key=$1
security unlock-keychain
security find-generic-password -a ${USER} -s $key -w
}
export E_PASSWORD="$(get_pw E_PASSWORD)"
This file is being sourced in .bash_profile:
if [ -f ~/.bash_variables ]; then
. ~/.bash_variables
fi
I wouldn't recommend this approach in any environment that needs to be secure, but you can use the DEBUG trap.
function set_pw() {
if echo $BASH_COMMAND | grep '${*E_PASSWORD}*'; then
E_PASSWORD="$(get_pw E_PASSWORD)"
else
E_PASSWORD=""
fi
}
trap set_pw DEBUG
Explanation:
the DEBUG trap is executed before any command is executed
$BASH_COMMAND is the command that is about to be executed
${*E_PASSWORD}* checks whether $BASH_COMMAND tries to access the $E_PASSWORD variable (note that it doesn't take into account non-interpolated strings such as '$E_PASSWORD')
the else branch "deletes" the value when it is not used (security??)
NOTE: this is just an example! I'm not a security expert, so I can't even begin to understand the implications!
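Putting it together, ~/.bash_variables could look roughly like this (a sketch reusing get_pw from the question; -q is added to grep here so the match itself isn't printed before every command):
get_pw()
{
    key=$1
    security unlock-keychain
    security find-generic-password -a ${USER} -s $key -w
}

set_pw() {
    # runs before every command; fetch the password only when the command mentions E_PASSWORD
    if echo "$BASH_COMMAND" | grep -q '${*E_PASSWORD}*'; then
        E_PASSWORD="$(get_pw E_PASSWORD)"   # add 'export' here if a child process needs the value
    else
        E_PASSWORD=""
    fi
}

trap set_pw DEBUG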

Declare variable on unix server

I am trying to log in to one of the remote servers (Box1) and read a file on that server (Box1).
That file contains the details of another server (Box2); based upon those details I have to come back to the local server and ssh to the other server (Box2) for some data crunching, and so on...
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
but I am not getting the values of "server_node1" and "server_node2" on the local machine.
any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
I agree with the comment that storing a log file in a variable is probably not a sane, or at least elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
A few notes and amplifications:
The remote shell will run in your home directory, so I took the path out (on the assumption that /home/rakesh is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How have both local and remote variable inside an SSH command. In short, unless you absolutely require the local shell to be able to modify the commands you send, you should probably put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above). As an example to illustrate the difference, if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see this variable and the value you assigned. In other words, you export variables which are not private to the current shell (and don't export private ones; aside from making sure they are private, this saves the amount of memory which needs to be copied between processes), but that still only makes them visible to children, by the design of the U*x process architecture.
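To make the PERL5LIB illustration concrete, a quick hypothetical session (the directory name is a placeholder):
$ PERL5LIB=/opt/mylibs                 # assigned, but not exported: private to this shell
$ perl -e 'print $ENV{PERL5LIB}'       # prints nothing; the child perl does not inherit it
$ export PERL5LIB                      # now it is copied into every child's environment
$ perl -e 'print $ENV{PERL5LIB}'       # prints /opt/mylibs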
You should get the file back from box1 and box2 with scp:
scp box1.com:/home/rakesh/tomar.log ~/tomar1.log
#then you can cat!
export server_node1=`cat ~/tomar1.log`
idem with box2
scp box2.com:/home/rakesh/tomar.log ~/tomar2.log
#then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could create a file on the remote system (in bash syntax) containing the assignments of these variables, for example
echo "export server_node2='$(</home/rakesh/tomar.log)'" >>export_settings
(which makes me wonder why you want the whole content of your logfile stored in a variable, but that is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
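The remaining steps of that approach might look like this (export_settings sits in the remote home directory, as created above):
scp box2.com:export_settings ~/export_settings
. ~/export_settings            # source it, so server_node2 is set in the current shell
echo "$server_node2"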

Simple Bash Script Error and Advice - Saving Environment Variables in Linux

I am working on a project that is hosted on Heroku. The app is hard coded to use Amazon S3 and looks for the keys in environment variables. This is what I wrote after looking at some examples, and I am not sure why it's not working.
echo $1
if [ $1 != "unset" ]; then
echo "set"
export AMAZON_ACCESS_KEY_ID=XXXXXXXXXXXX
export AMAZON_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export S3_BUCKET_NAME=XXXXXXXXX
else
echo "unset"
export AMAZON_ACCESS_KEY_ID=''
export AMAZON_SECRET_ACCESS_KEY=''
export S3_BUCKET_NAME=''
fi
When running the script it goes to the set section, but when inspecting afterwards, echo $AMAZON_ACCESS_KEY_ID prints nothing.
I am not sure what is causing the issue. I would be interested in:
A fix for this...
A way to extract and add Heroku config variables to the env in an easier way.
You need to source the script, not run it as a child. If you run the script directly, its environment disappears when it ends. Sourcing the script causes it to be executed in the current environment. help source for more information.
Example:
$ VAR=old_value
$ cat script.sh
#!/bin/bash
export VAR=new_value
$ ./script.sh
$ echo $VAR
old_value
$ source script.sh
$ echo $VAR
new_value
Scripts executed with source don't need to be executable nor do they need the "shebang" line (#!/bin/bash) because they are not run as separate processes. In fact, it is probably a good idea to not make them executable in order to avoid them being run as commands, since that won't work as expected.
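Applied to the question's script (assuming it is saved as, say, s3_env.sh; that name is just a placeholder):
$ source s3_env.sh set        # or: . s3_env.sh set
$ echo $AMAZON_ACCESS_KEY_ID  # now prints the exported value in the current shell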

Pass a variable in a shell script

I'm new to Unix...I have a shell script that calls sqlplus. I have some variables that are defined within the code. However, I do not feel comfortable having the password displayed within the script. I would appreciate if someone could show me ways on how to hide my password.
One approach I know of is to omit the password, and sqlplus will prompt for it.
An approach that I would very much be interested in is a Linux command whose output can be passed into the password variable. That way, I can easily replace "test" with some parameter.
Any other approach.
Thanks
#!/bin/sh
# This is test.sh. It executes sqlplus.
export user=TestUser
export password=test
# Other variables have been ommited
echo ----------------------------------------
echo Starting ...
echo ----------------------------------------
echo
sqlplus $user/$password
echo
echo ----------------------------------------
echo finish ...
echo ----------------------------------------
You can pipe the password to the sqlplus command:
echo ${password} | sqlplus ${user}
tl;dr: passwords on the command line are prone to exposure to hostile code and users. don't do it. you have better options.
the command line is accessible using $0 (the command itself) through ${!#} ($# is the number of arguments and ${!name} dereferences the value of $name, in this case $#).
you may simply provide the password as a positional argument (say, first, or $1), or use getopts(1), but the thing is, passwords in the argument array are a bad idea. Consider the case of ps auxww (it displays the full command lines of all processes, including those of other users).
prefer getting the password interactively (stdin) or from a configuration file. these solutions have different strengths and weaknesses, so choose according to the constraints of your situation. make sure the config file is not readable by unauthorized users if you go that way. it's not enough to make the file hard to find btw.
the interactive thing can be done with the shell builtin command read.
its description in the Shell Builtin Commands section in bash(1) includes
-s Silent mode. If input is coming from a terminal, characters are not echoed.
#!/usr/bin/env bash
INTERACTIVE=$([[ -t 0 ]] && echo yes)
if ! IFS= read -rs ${INTERACTIVE+-p 'Enter password: '} password; then
echo 'received ^D, quitting.'
exit 1
fi
echo password="'$password'"
read the bash manual for explanations of other constructs used in the snippet.
configuration files for shell scripts are extremely easy, just source ~/.mystuffrc in your script. the configuration file is a normal shell script, and if you limit yourself to setting variables there, it will be very simple.
for the description of source, again see Shell Builtin Commands.
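for example, a small sketch of the configuration-file route applied to the script in the question (~/.mystuffrc is the name used above; user and password come from test.sh):
# ~/.mystuffrc -- keep it private: chmod 600 ~/.mystuffrc
user=TestUser
password=test
then test.sh sources it instead of hard-coding the values:
. ~/.mystuffrc
# pipe the password in, so it never appears on the sqlplus command line
printf '%s\n' "$password" | sqlplus "$user"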
