export environment variable from bash script with argument - bash

I am trying to export environment variables using a bash script that takes one argument. I tried running the script with the source command, but it does not work.
source ./script.sh dev
My example script is below:
#!/bin/bash
# Check 1 argument is passed with the script
if [ $# -ne 1 ]
then
    echo "Usage : $0 AWS account: e.g. $0 dev"
    exit 0
fi
# convert the input to uppercase
aws_env=$( tr '[:lower:]' '[:upper:]' <<<"$1" )
if ! [[ "$aws_env" =~ ^(DEV|UAT|TRN|PROD)$ ]]; then
    # check that correct account is provided
    echo "Enter correct AWS account: dev or uat or trn or prod"
    exit 0
else
    # export environment variables
    file="/home/xyz/.aws/key_${aws_env}"
    IFS=$'\n' read -d '' -r -a lines < $file
    export AWS_ACCESS_KEY_ID=${lines[0]}
    export AWS_SECRET_ACCESS_KEY=${lines[1]}
    export AWS_DEFAULT_REGION=ap-southeast-2
    echo "AWS access keys and secret has been exported as environment variables for $aws_env account"
fi
Content of the file /home/xyz/.aws/key_DEV (it's a sample only, not real keys):
$ cat key_DEV
123
xyz
When I say it does not work: nothing appears in the terminal, and the terminal closes when I run the script.
Further update:
When I run the script as is from the terminal, without source (./script.sh dev), it seems to work fine; with debugging enabled (set -x) I can see that all the outputs are correct.
However, the issue is when I run it with source (source ./script.sh dev): it fails (it closes the terminal; now I know why: because of the exit 0), and from the output captured by the debug command I can see that it's not capturing the $1 argument correctly. The error message is "Enter correct AWS account: dev or uat or trn or prod", and the value of the $aws_env variable is blank.
I don't know why the two behaviors are different or how to fix it.
Final update:
The script seems to be fine. The issue was local to my computer: tr was defined as an alias in my .bashrc file, which was causing the problem. I just used typeset -u aws_env="$1" instead of aws_env=$( tr '[:lower:]' '[:upper:]' <<<"$1" ). Thank you all for helping me get this one resolved, especially @markp-fuso.
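For reference, typeset -u (a synonym for declare -u in Bash 4+) marks a variable so that any value assigned to it is converted to uppercase automatically:
typeset -u aws_env="dev"
echo "$aws_env"   # prints DEV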

Try using mapfile instead of read like so...
#!/usr/bin/env bash
# Check 1 argument is passed with the script
if [ $# -ne 1 ]
then
    echo "Usage : $0 AWS account: e.g. $0 dev"
    exit 0
fi
# convert the input to uppercase
aws_env=$( tr '[:lower:]' '[:upper:]' <<<"$1" )
if ! [[ "$aws_env" =~ ^(DEV|UAT|TRN|PROD)$ ]]; then
    # check that correct account is provided
    echo "Enter correct AWS account: dev or uat or trn or prod"
    exit 0
else
    # export environment variables
    file="/home/xyz/.aws/key_${aws_env}"
    # IFS=$'\n' read -d '' -r -a lines < $file
    mapfile -t lines < "$file"
    export AWS_ACCESS_KEY_ID=${lines[0]}
    export AWS_SECRET_ACCESS_KEY=${lines[1]}
    export AWS_DEFAULT_REGION=ap-southeast-2
    echo "AWS access keys and secret has been exported as environment variables for $aws_env account"
fi
Make sure you also put quotation marks around "$file"
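One more caveat, since the script is meant to be run with source: exit in a sourced script terminates the calling shell, which is why the terminal closes on a bad argument. A minimal sketch of a guard that returns when sourced and exits otherwise:
if [ $# -ne 1 ]; then
    echo "Usage : $0 AWS account: e.g. $0 dev"
    return 1 2>/dev/null || exit 1
fi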
Another little tip: Bash supports making variables uppercase directly, like so:
var="upper"
echo "${var^^}"
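Applied to this script, that lets you drop the tr command substitution entirely (requires Bash 4 or later):
aws_env="${1^^}"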

Related

bash programming - best practice setting up variables within a loop or not

I have the following logic in my bash/shell script. Essentially, I'm passing one argument manually and pulling the other values from a hidden file, like so:
if [[ $# != 1 ]]; then
    echo "./tstscript.sh <IDNUM>" 2>&1
    exit 1
fi
MYKEY=/dev/scripts/.mykey
if [ -f "$MYKEY" ]
then
    IFS=';'
    declare -a arr=($(< $MYKEY))
    # DECLARE VARIABLES
    HOSTNM=localhost
    PORT=5432
    PSQL_HOME=/bin
    IDNUM=$1
    DBU1=${arr[0]}
    export HOSTNM PORT PSQL_HOME IDNUM DBU1 DBU2
    $PSQL_HOME/psql -h $HOSTNM -p $PORT -U $DBU1 -v v1=$IDNUM -f t1.sql postgres
else
    echo "Mykey not found"
fi
rt_code=$?
exit 1
Am I declaring my variables in the right place? Should they be declared within my if statement?
Most of your variables are redundant. psql already has a few well-known environment variables it will use if you don't specify the corresponding parameters on the command line. The others are just hard-coded, so it's not really important to define them. It doesn't matter much where you define them, as long as you do so before they are used, since this isn't a very large script. When you start worrying about the design of a shell script, it's a good sign that you've outgrown shell script and are ready for a more robust programming language.
if [[ $# != 1 ]]; then
    echo "./tstscript.sh <IDNUM>" 2>&1
    exit 1
fi
MYKEY=/dev/scripts/.mykey
if ! [ -f "$MYKEY" ]; then
    echo "Mykey not found"
    exit 1
fi
# You only use the first word/line of the file,
# so this should be sufficient.
IFS=";" read -a arr < "$MYKEY"
export PGHOST=localhost
export PGPORT=5432
export PGUSER=${arr[0]}
: ${PSQL_HOME:=/bin}
"$PSQL_HOME"/psql -v v1="$1" -f t1.sql postgres
When you fill /dev/scripts/.mykey with lines in the form key=value, you can source that file.
$ cat /dev/scripts/.mykey
DBU1=noober
FIELD2="String with space"
echo "Keep it clean, do not use commands like this echo in the file"
In your script you can activate the settings by sourcing the file
if [ -f "${MYKEY}" ]; then
    . "${MYKEY}"
    # Continue without an array; DBU1 and FIELD2 are set.
fi
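Putting it together, a minimal sketch of how the sourced settings could feed the psql call (assuming .mykey defines DBU1 as shown above):
if [ -f "${MYKEY}" ]; then
    . "${MYKEY}"
    export PGUSER="$DBU1"   # DBU1 comes from the sourced file
    : ${PSQL_HOME:=/bin}
    "$PSQL_HOME"/psql -v v1="$1" -f t1.sql postgres
fi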

running multiline bash command over ssh does not work

I need to run a multi-line bash command over ssh; I have exhausted all possible attempts, but no luck --
echo "3. All files found, creating remote directory on the server."
ssh -t $id@$host bash -c "'
    if [[ -d ~/_tmp ]]; then
        rm -rf ~/_tmp/*
    else
        mkdir ~/_tmp
    fi
'" ;
echo "4. Sending files ..."
scp ${files[@]} $id@$host:~/_tmp/ ;
Here is the output --
user@linux:/tmp$ ./remotecompile
1. Please enter your id:
user
2. Please enter the names of the files that you want to compile
(Filenames *must* be space separated):
test.txt
3. All files found, creating remote directory on the server.
Password:
Unmatched '.
Unmatched '.
Connection to host.domain.com closed.
Please note, I do not want to put every 2-3 lines of bash if-then-else-fi commands into separate files.
What is the right way to do it?
Use an escaped heredoc to have its literal contents passed through. (Without the escaping, i.e. using just <<EOF, shell expansions would be processed locally -- making for more interesting corner cases if you used variables inside your remotely-run code.)
ssh "$id@$host" bash <<'EOF'
if [[ -d ~/_tmp ]]; then
    rm -rf ~/_tmp/*
else
    mkdir ~/_tmp
fi
EOF
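For contrast, a minimal sketch of the unescaped form, where the expansion happens on the local machine before ssh ever runs (local_dir is a hypothetical variable):
local_dir="$HOME/_tmp"
ssh "$id@$host" bash <<EOF
echo "This path was expanded locally: $local_dir"
EOF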
If you want to pass arguments, doing so in an unambiguously correct manner gets more interesting (since there are two separate layers of shell parsing involved), but the printf '%q' builtin saves the day:
args=( "this is" "an array" "of things to pass" \
       "this next one is a literal asterisk" '*' )
printf -v args_str '%q ' "${args[@]}"
ssh "$id@$host" bash -s "$args_str" <<'EOF'
echo "Demonstrating local argument processing:"
printf '%q\n' "$@"
echo "The asterisk is $5"
EOF
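To see what the %q quoting produces, here is a quick local check (expected output shown as a comment):
printf -v demo '%q ' "this is" '*'
echo "$demo"    # this\ is \*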
This works for me:
ssh [hostname] '
if [[ -d ~/_tmp ]]; then
    rm -rf ~/_tmp
else
    mkdir ~/_tmp
fi
'

Create shell sub commands by hierarchy

I'm trying to create a system for my scripts -
Each script will be located in a folder, which is the command itself.
The script itself will act as a sub-command.
For example, a script called "who" inside a directory called "git"
will allow me to run the script using git who on the command line.
Also, I would like to create a sub-command for a pseudo-command, meaning a command not currently available, e.g. some-arbitrary-command sub-command.
Is that somehow possible?
I thought of somehow extending https://github.com/basecamp/sub to accomplish the task.
EDIT 1
#!/usr/bin/env bash
command=`basename $0`
subcommand="$1"
case "$subcommand" in
    "" | "-h" | "--help" )
        echo "$command: Some description here" >&2
        ;;
    * )
        subcommand_path="$(command -v "$command-$subcommand" || true)"
        if [[ -x "$subcommand_path" ]]; then
            shift
            exec "$subcommand_path" "${@}"
            return $?
        else
            echo "$command: no such command \`$subcommand'" >&2
            exit 1
        fi
        ;;
esac
This is currently the script I run for new custom-made commands.
Since it's so generic, I just copy-paste it.
I still wonder though -
can it be generic enough to just recognize the folder name and create the script by its folder name?
One issue though is that it doesn't seem to override the default command name, if it's supposed to replace it (e.g. git).
EDIT 2
After tinkering around a bit, this is what I eventually came up with:
#!/usr/bin/env bash
COMMAND=`basename $0`
SUBCOMMAND="$1"
COMMAND_DIR="$HOME/.zsh/scripts/$COMMAND"
case "$SUBCOMMAND" in
    "" | "-h" | "--help" )
        cat "$COMMAND_DIR/help.txt" 2>/dev/null ||
            command $COMMAND "${@}"
        ;;
    * )
        SUBCOMMAND_path="$(command -v "$COMMAND-$SUBCOMMAND" || true)"
        if [[ -x "$SUBCOMMAND_path" ]]; then
            shift
            exec "$SUBCOMMAND_path" "${@}"
        else
            command $COMMAND "${@}"
        fi
        ;;
esac
This is a generic script called "helper-sub" that I symlink into all the script directories I have (e.g. ln -s $HOME/bin/helper-sub $HOME/bin/ssh).
In my zshrc I created this to call all the scripts:
#!/usr/bin/env bash
PATH=${PATH}:$(find $HOME/.zsh/scripts -type d | tr '\n' ':' | sed 's/:$//')
export PATH
typeset -U path
for aliasPath in `find $HOME/.zsh/scripts -type d`; do
    aliasName=`echo $aliasPath | awk -F/ '{print $NF}'`
    alias ${aliasName}=${aliasPath}/${aliasName}
done
unset aliasPath
Examples can be seen here: https://github.com/iwfmp/zsh/tree/master/scripts
You can't make a directory executable as a script, but you can create a wrapper that calls the scripts in the directory.
You can do this either with a function (in your profile script or a file in your FPATH) or with a wrapper script.
A simple function might look like:
git() {
    local subPath='/path/to/your/git'
    local sub="${1}" ; shift
    if [[ -x "${subPath}/${sub}" ]]; then
        "${subPath}/${sub}" "${@}"
        return $?
    else
        printf '%s\n' "git: Unknown sub-command '${sub}'." >&2
        return 1
    fi
}
(This is the same way that the sub project you linked works, just simplified.)
Of course, if you actually want to create a sub-command for git specifically (and that wasn't just an example), you'll need to make sure that the built-in git commands still work. In that case you could do it like this:
git() {
    local subPath='/path/to/your/git'
    local sub="${1}"
    if [[ -x "${subPath}/${sub}" ]]; then
        shift
        "${subPath}/${sub}" "${@}"
        return $?
    else
        command git "${@}"
        return $?
    fi
}
But it might be worth pointing out in that case that git supports adding arbitrary aliases via git config:
git config --global alias.who '!/path/to/your/git/who'
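Either way, usage would look something like this (assuming a who script exists in that directory):
$ git who       # dispatches to /path/to/your/git/who
$ git status    # not found there, so the fallback runs the real git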

Bash Script exiting without error while using diff

I'm fairly new to using bash and was trying to create an autograder script for running some test cases. Currently my bash script seems to be acting strangely: when I have the -e flag set, bash just exits when a diff has positive size, and when the -e flag is not set, the script ignores any differences in the diff files and says that all tests passed.
The script exits immediately after the "write_diff_out=...." command; the next line is not printed. I've only included the diffing portion of the script, as everything else runs fine (the files all exist as well).
# Validate outputs and print results
echo "> Comparing current build's final memory output with golden memory output...";
for file in `ls test_progs`;
do
    file=$(echo $file | cut -d '.' -f1);
    echo "$file";
    write_diff_out=$(diff ./log/$file.writeback.out ./log/$file.writeback.gold.out > ./diff/$file.writeback.diff);
    echo "Finished write_diff";
    program_diff_out=$(diff -u <(grep -E '###' ./log/$file.program.out) <(grep -E '###' ./log/$file.program.gold.out) > ./diff/$file.program.diff);
    echo "Finished program diff";
    if [ -z "$write_diff_out" ] && [ -z "$program_diff_out" ]; then
        printf "%20s:\e[0;32mPASSED\e[0m\n" "$file";
    else
        printf "%20s:\e[0;31mFAILED\e[0m\n" "$file";
    fi
done
echo "> Done comparing test outputs.";
Feel free to suggest a better way of formatting the diff commands as well; I know there are different methods of writing them.
I don't know exactly what your problem is, but I have rewritten your script to conform to some best practices. Perhaps it will work better.
#!/bin/bash
# Debugging mode: prints every command as executed; remove when unneeded
set -x
# Validate outputs and print results
echo "> Comparing current build's final memory output with golden memory output..."
cd test_progs
for file in *; do
    file="$(echo "$file" | sed 's/\.[^.]*$//')"
    echo "$file"
    # will PASS when both diffs find no differences
    if diff "log/$file.writeback.out" \
            "log/$file.writeback.gold.out" > \
            "diff/$file.writeback.diff" && \
       diff -u <(grep -E '###' "log/$file.program.out") \
               <(grep -E '###' "log/$file.program.gold.out") > \
               "diff/$file.program.diff"; then
        printf '%20s:\e[0;32mPASSED\e[0m\n' "$file"
    else
        printf '%20s:\e[0;31mFAILED\e[0m\n' "$file"
    fi
done
echo "> Done comparing test outputs."
It avoids parsing ls, uses quotes where due, and tests diff's exit status directly instead of storing its output in a variable.
If you really wanted to store diff's output in a variable, you would do this:
write_diff_out="$(diff "log/$file.writeback.out" "log/$file.writeback.gold.out" | tee "diff/$file.writeback.diff")"
Then $write_diff_out would contain the same data the diff/$file.writeback.diff file has.
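As for why the original script died under set -e: diff exits with status 1 when the files differ, the assignment inherits the exit status of its command substitution, and set -e aborts on any failing command. A minimal sketch of a guard, if you want to keep the variable approach:
# diff exits 1 when files differ, which would trip set -e;
# || true masks the status so the script keeps going
write_diff_out=$(diff "log/$file.writeback.out" "log/$file.writeback.gold.out" || true)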
EDIT: edited my answer a bit to implement some of the suggestions from the comments.

bash save last user input value permanently in the script itself

Is it possible to save the last value the user entered for a variable in the bash script itself, so that I can reuse it the next time the script is executed?
Eg:
#!/bin/bash
if [ -d "/opt/test" ]; then
    echo "Enter path:"
    read path
    p=$path
else
    .....
    ........
fi
The above script is just a sample example (which may be wrong). Is it possible to save the value of p permanently in the script itself, so that I can use it somewhere later in the script even when the script is re-executed?
EDIT:
I am already using sed to overwrite lines in the script while it executes. This method works, but it is not good practice, as noted. Replacing lines in the same file as described in the answer below is much better than what I am currently using, which looks like the following:
...
....
PATH=""; #This is line no 7
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )";
name="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")";
...
if [ condition ]
fi
path=$path
sed -i '7s|.*|PATH='$path';|' $DIR/$name;
Something like this should do what was asked:
#!/bin/bash
ENTERED_PATH=""
if [ "$ENTERED_PATH" = "" ]; then
    echo "Enter path"
    read path
    ENTERED_PATH=$path
    sed -i 's/ENTERED_PATH=""/ENTERED_PATH='$path'/g' $0
fi
This script will ask the user for a path only if ENTERED_PATH was not previously defined, and will store it directly in the current file via the sed line.
Maybe a safer way to do this would be to write a config file somewhere with the data you want to save, and source it (. data.saved) at the beginning of your script.
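A minimal sketch of that config-file approach (the file name and variable are hypothetical):
#!/bin/bash
config="$HOME/data.saved"          # hypothetical location
[ -f "$config" ] && . "$config"    # loads saved_path if previously stored
if [ -z "$saved_path" ]; then
    read -p "Enter path: " saved_path
    printf 'saved_path=%q\n' "$saved_path" > "$config"
fi
echo "Using path: $saved_path"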
In the script itself? Yes, with sed, but it's not advisable.
#!/bin/bash
test='0'
echo "test currently is: $test";
test=`expr $test + 1`
echo "changing test to: $test"
sed -i "s/test='[0-9]*'/test='$test'/" $0
Preferable method:
Try saving the value in a separate file; then you can easily do:
myvar=`cat varfile.txt`
And whatever was in the file is now in your variable.
I would suggest using the /tmp/ dir to store the file in.
Another option would be to save the value as an extended attribute attached to the script file. This has many of the same problems as editing the script's contents (permissions issues, weird for multiple users, etc) plus a few of its own (not supported on all filesystems...), but IMHO it's not quite as ugly as rewriting the script itself (a config file really is a better option).
I don't use Linux, but I think the relevant commands would be something like this:
path="$(getfattr --only-values -n "user.saved_path" "${BASH_SOURCE[0]}")"
if [[ -z "$path" ]]; then
    read -p "Enter path:" path
    setfattr -n "user.saved_path" -v "$path" "${BASH_SOURCE[0]}"
fi
