Bash command for setting aws command output to a variable - bash

I want to assign a variable in a shell script from the aws command below. If the command is successful, I want to assign its output to S3_BUCKET_REGION, e.g. S3_BUCKET_REGION=us-east-1.
S3_BUCKET_REGION=$( aws s3api get-bucket-location --bucket ${TF_STATE_S3_BUCKET} | jq -r '.LocationConstraint // "us-east-1"' )
But if the bucket does not exist, the command fails with: "An error occurred (NoSuchBucket) when calling the GetBucketLocation operation: The specified bucket does not exist".
I want to capture this error and echo it in the script.
So if the command runs successfully, I want to assign the output to a variable; if not, I want to echo the error. How do I write a conditional for this?

Commands usually send normal output to STDOUT and errors to STDERR.
$() captures only STDOUT, so you should finish your command by redirecting STDERR to STDOUT:
MYVAR=$( blablabla 2>&1 )
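Applied to the question, a minimal sketch (assuming TF_STATE_S3_BUCKET is already set) that branches on the exit status of the command substitution:
if OUTPUT=$(aws s3api get-bucket-location --bucket "${TF_STATE_S3_BUCKET}" 2>&1); then
    # success: extract the region, defaulting to us-east-1 for a null LocationConstraint
    S3_BUCKET_REGION=$(jq -r '.LocationConstraint // "us-east-1"' <<< "$OUTPUT")
else
    # failure: OUTPUT holds the NoSuchBucket error text captured from stderr
    echo "Error: $OUTPUT"
fi
Splitting the jq step out of the pipeline also keeps jq from masking the aws exit status.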

Related

passing null value from az cli to bash without exiting

I'm trying to get the value of a resource in azure via AZ CLI, and pass that value to a variable in bash.
id=$(az synapse workspace show -n $name -g $rsname --query 'identity.principalId' -o tsv 2<&1)
if [[ $id == *"Not Found"* ]];
then
echo "Workspace already deleted."
fi
If the resource is not there, I redirect the output to the variable with 2<&1 so I can deal with it in the if-then conditional. $id is assigned the output correctly, but the AZ CLI still exits the script with the error "not found".
Is there any way to keep it from exiting?
In your bash command you are using 2<&1; that's why it exits the script with the error "not found".
You can achieve this by using 2>&-. Make sure to use the greater-than (>) symbol:
id=$(az synapse workspace show -n '<synapse Name>' -g '<Resource Group Name>' --query 'identity.principalId' -o tsv 2>&-)
With 2>&- I am able to get the principal id; with 2>&1 I am not.
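Combined with the if-check from the question (a sketch, with $name and $rsname as defined there): since 2>&- discards the error message entirely, test for an empty result instead of matching the error text:
id=$(az synapse workspace show -n "$name" -g "$rsname" --query 'identity.principalId' -o tsv 2>&-)
if [[ -z "$id" ]]; then
    # nothing came back on stdout, so the workspace no longer exists
    echo "Workspace already deleted."
fi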
Reference
Using bash, check whether a program exists or not
Usage of bash I/O redirection

Get kubectl command error message in a variable in bash script

I am executing a kubectl command in a bash script and storing the output in a variable. When the kubectl command executes successfully, I get the correct output in the variable; but when it fails, the variable is empty and the error message is not captured. I want the error output to be stored in the variable as well.
Example:
GET_PODS_COMMAND="$(kubectl get pods -n mlsh-$JOBNAMESPACE --selector app=$POD_SELECTOR --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')" #kubectl command. Please assume mlsh-$JOBNAMESPACE and $POD_SELECTOR have correct values
GET_PODS_COMMAND_OUT=$GET_PODS_COMMAND
echo $GET_PODS_COMMAND_OUT #Printing command result
When the command execution is successful I get the pod name in GET_PODS_COMMAND_OUT, but when the command output is "No Resources Found", GET_PODS_COMMAND_OUT is blank.
I read that I have to redirect stderr to stdout, as described in these articles:
Bash how do you capture stderr to a variable?
https://www.reddit.com/r/kubernetes/comments/98s8v6/why_cant_i_store_outputs_of_kubectl_commands_in_a/
Still struggling to understand how exactly to achieve this.
Here is what I have tried:
GET_PODS_COMMAND_OUT="$(GET_PODS_COMMAND 2>&1)" #gives the error: GET_PODS_COMMAND: command not found
New to Linux, so any help is appreciated. Thank you.
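A minimal sketch of how that redirection could be applied: put 2>&1 inside the command substitution itself, rather than trying to re-run the variable's contents as a command (which is what produces the "command not found" error):
GET_PODS_COMMAND_OUT="$(kubectl get pods -n "mlsh-$JOBNAMESPACE" \
    --selector "app=$POD_SELECTOR" \
    --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' 2>&1)"
echo "$GET_PODS_COMMAND_OUT"   # pod names on success, the kubectl error text on failure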

Error handling for aws cli in bash doesn't cause it to exit

I have a bash script running a bunch of stuff along with an aws list-objects command. Recently we noticed that stderr would contain an error but the script does not exit with a non-zero code. I hoped that set -e would handle any command failures, but it doesn't seem to do the trick.
Script looks like this:
#!/bin/bash
set -e
# do stuff
aws s3api list-objects --bucket xyz --prefix xyx --output text --query >> files.txt
# do stuff
Error in Stderr :
An error occurred (SlowDown) when calling the ListObjects operation (reached max retries: 4): Please reduce your request rate.
Objective:
I want the bash script to fail and exit when it encounters a problem with the aws cli commands. I can add an explicit check on $? != 0, but I am wondering if there is a better way to do this.
For me, this did the trick:
set -e -o pipefail
The link from @codeforrester explains:
set -o pipefail is a workaround: it makes the pipeline return the exit code of the last (rightmost) command that failed
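For the explicit check mentioned in the question, a minimal sketch (the --query expression 'Contents[].Key' is an assumed placeholder, since the question omits it):
#!/bin/bash
set -e -o pipefail
# 'Contents[].Key' is a placeholder JMESPath query; substitute the real expression
aws s3api list-objects --bucket xyz --prefix xyx --output text --query 'Contents[].Key' >> files.txt \
    || { echo "list-objects failed" >&2; exit 1; }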

linux - bash: pipe _everything_ to a logfile

In an interactive bash script I use
exec > >(tee -ia logfile.log)
exec 2>&1
to write the script's output to a logfile. However, if I ask the user to input something, this is not written to the file:
read UserInput
Also, I issue commands with $UserInput as a parameter. These commands are also not written to the logfile.
The logfile should contain everything my script does, i.e. what the user entered interactively and also the resulting commands along with their output.
Of course I could use set -x and/or echo "user input: $UserInput", but that would also be sent to the screen. I don't want to see anything on the screen except what my script or its commands echo.
How can this be done?
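One possible approach (a sketch): open a dedicated file descriptor that writes only to the logfile, and send the user input there by hand:
exec 3>>logfile.log                   # fd 3 (an arbitrary choice) goes to the log only
exec > >(tee -ia logfile.log) 2>&1    # stdout/stderr still go to screen and log, as before

read -rp "Enter a value: " UserInput
printf 'user input: %s\n' "$UserInput" >&3   # recorded in the log, not echoed to the screen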

redirect all output in a bash script when using set -x

I have a bash script that has set -x in it. Is it possible to redirect the debug prints of this script and all its output to a file? Ideally I would like to do something like this:
#!/bin/bash
set -x
(some magic command here...) > /tmp/mylog
echo "test"
and get the
+ echo test
test
output in /tmp/mylog, not in stdout.
This is what I've just googled, and I remember using it myself some time ago...
Use exec to redirect both standard output and standard error of all commands in a script:
#!/bin/bash
logfile=$$.log
exec >"$logfile" 2>&1
For more redirection magic check out Advanced Bash Scripting Guide - I/O Redirection.
If you also want to see the output and debug on the terminal in addition to in the log file, see redirect COPY of stdout to log file from within bash script itself.
If you want to handle the destination of the set -x trace output independently of normal STDOUT and STDERR, see bash storing the output of set -x to log file.
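For that last case, a short sketch using BASH_XTRACEFD (bash 4.1+; the fd number 19 is an arbitrary choice):
#!/bin/bash
exec 19>/tmp/trace.log   # dedicated fd for the trace output
BASH_XTRACEFD=19         # set -x now writes to fd 19 instead of stderr
set -x
echo "test"              # "test" goes to stdout; "+ echo test" lands in /tmp/trace.log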
The -x output goes to stderr, so to log it, do:
set -x
exec 2>/tmp/mylog
To redirect both stderr and stdout, appending to the file:
exec &>>"$LOG_FILE_NAME"
To overwrite the file instead:
exec &>"$LOG_FILE_NAME"
In my case, the script was being called multiple times from elsewhere, and I wasn't seeing everything, so I did an append instead, and it worked:
exec 1>>FILENAME 2>&1
set -x
To avoid confusion, be sure to delete FILENAME before each run.
