Azure CLI - check existence of frontend port with Bash

I'm using bash to check whether an application gateway frontend port exists, using the following code.
I'm new to bash and it's giving me some syntax errors. What is the correct syntax?
line 79: =: command not found
line 80: [: ==: unary operator expected
echo "Creating frontend port if it doesnt exist"
$frontendportExists = $(az network application-gateway frontend-port list -g $resourceGroupName \
--gateway-name $appGatewayName --query "[?name=='appGatewayFrontendPort'] | length(@)")
if [ $frontendportExists == 0 ]; then
az network application-gateway frontend-port create \
--gateway-name $appGatewayName \
--name appGatewayFrontendPort \
--port 443 \
--resource-group $resourceGroupName
fi

There are a number of errors in your posted Bash code.
A $ prefix is not used when assigning a value to a variable, and spaces are not allowed around the assignment = operator.
Wrong:
$frontendportExists = $(az network...
Should be:
frontendportExists=$(az network...
== is incorrect syntax with single bracket conditionals.
Wrong:
if [ $frontendportExists == 0 ]; then
Replace with (Bash only):
if [[ $frontendportExists == 0 ]]; then
Replace with (Posix):
if [ "$frontendportExists" = "0" ]; then
To prevent word splitting, it is generally a good idea to double-quote variable expansions, e.g.
az network application-gateway frontend-port create \
--gateway-name "$appGatewayName" \
--name appGatewayFrontendPort \
--port 443 \
--resource-group "$resourceGroupName"
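Putting the fixes together, the whole snippet might look like this (a sketch assembled from the corrections above, assuming the JMESPath query returns a bare count):
echo "Creating frontend port if it doesn't exist"
frontendportExists=$(az network application-gateway frontend-port list \
    -g "$resourceGroupName" \
    --gateway-name "$appGatewayName" \
    --query "[?name=='appGatewayFrontendPort'] | length(@)")
if [[ $frontendportExists == 0 ]]; then
    az network application-gateway frontend-port create \
        --gateway-name "$appGatewayName" \
        --name appGatewayFrontendPort \
        --port 443 \
        --resource-group "$resourceGroupName"
fi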
Please check your scripts using ShellCheck before posting in future.


GCP Dataproc - cluster creation failing when using connectors.sh in initialization-actions

I'm creating a Dataproc cluster, and it is timing out when I add the connectors.sh in the initialization actions.
Here is the command and the error:
NUM_WORKER=2
TYPE=n1-highmem-8
CNAME=dataproc-poc
BUCKET=dataproc-spark-karan
REGION=us-central1
ZONE=us-central1-c
IMG_VERSION=2.0.29-debian10
PROJECT=versa-kafka-poc
Karans-MacBook-Pro:dataproc-versa-sase karanalang$ gcloud beta dataproc clusters create $CNAME \
> --enable-component-gateway \
> --bucket $BUCKET \
> --region $REGION \
> --zone $ZONE \
> --no-address --master-machine-type $TYPE \
> --master-boot-disk-size 100 \
> --master-boot-disk-type pd-ssd \
> --num-workers $NUM_WORKER \
> --worker-machine-type $TYPE \
> --worker-boot-disk-type pd-ssd \
> --worker-boot-disk-size 100 \
> --image-version $IMG_VERSION \
> --scopes 'https://www.googleapis.com/auth/cloud-platform' \
> --project $PROJECT \
> --initialization-actions 'gs://dataproc-kafka/config/pip_install.sh','gs://dataproc-kafka/config/connectors.sh' \
> --metadata 'gcs-connector-version=2.0.0' \
> --metadata 'bigquery-connector-version=1.2.0' \
> --properties 'dataproc:dataproc.logging.stackdriver.job.driver.enable=true,dataproc:job.history.to-gcs.enabled=true,spark:spark.dynamicAllocation.enabled=false,spark:spark.executor.instances=6,spark:spark.executor.cores=2,spark:spark.eventLog.dir=gs://dataproc-spark-karan/joblogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-karan/joblogs'
Waiting on operation [projects/versa-kafka-poc/regions/us-central1/operations/8aa13a77-30a8-3a84-a949-16b4d8907c45].
Waiting for cluster creation operation...
WARNING: This cluster is configured to use network 'https://www.googleapis.com/compute/v1/projects/versa-kafka-poc/global/networks/default' and its associated firewall rules '[prometheus-nodeport]' which contains the following potential security vulnerability: 'port 8088 is open to the internet, this may allow arbitrary code execution via the YARN REST API. Use Component Gateway for secure remote access to the YARN UI and other cluster UIs instead: https://cloud.google.com/dataproc/docs/concepts/accessing/dataproc-gateways.'
Waiting for cluster creation operation...done.
ERROR: (gcloud.beta.dataproc.clusters.create) Operation [projects/versa-kafka-poc/regions/us-central1/operations/8aa13a77-30a8-3a84-a949-16b4d8907c45] timed out.
connectors.sh
#!/bin/bash
set -euxo pipefail
VM_CONNECTORS_HADOOP_DIR=/usr/lib/hadoop/lib
VM_CONNECTORS_DATAPROC_DIR=/usr/local/share/google/dataproc/lib
declare -A MIN_CONNECTOR_VERSIONS
MIN_CONNECTOR_VERSIONS=(
["bigquery"]="0.11.0"
["gcs"]="1.7.0")
# Starting from these versions connectors name changed:
# "...-<version>-hadoop2.jar" -> "...-hadoop2-<version>.jar"
declare -A NEW_NAME_MIN_CONNECTOR_VERSIONS
NEW_NAME_MIN_CONNECTOR_VERSIONS=(
["bigquery"]="0.13.5"
["gcs"]="1.9.5")
BIGQUERY_CONNECTOR_VERSION=$(/usr/share/google/get_metadata_value attributes/bigquery-connector-version || true)
GCS_CONNECTOR_VERSION=$(/usr/share/google/get_metadata_value attributes/gcs-connector-version || true)
UPDATED_GCS_CONNECTOR=false
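# NB: is_worker really means "is not master": it returns success for any
# node whose dataproc-role is not Master.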
is_worker() {
local role
role="$(/usr/share/google/get_metadata_value attributes/dataproc-role || true)"
if [[ $role != Master ]]; then
return 0
fi
return 1
}
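# Prints the smaller of two dotted version strings (numeric sort per component).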
min_version() {
echo -e "$1\n$2" | sort -r -t'.' -n -k1,1 -k2,2 -k3,3 | tail -n1
}
validate_version() {
local name=$1 # connector name: "bigquery" or "gcs"
local version=$2 # connector version
local min_valid_version=${MIN_CONNECTOR_VERSIONS[$name]}
if [[ "$(min_version "$min_valid_version" "$version")" != "$min_valid_version" ]]; then
echo "ERROR: $name-connector version should be greater than or equal to $min_valid_version, but was $version"
return 1
fi
}
update_connector() {
local name=$1 # connector name: "bigquery" or "gcs"
local version=$2 # connector version
if [[ $version ]]; then
if [[ $name == gcs ]]; then
UPDATED_GCS_CONNECTOR=true
fi
# validate new connector version
validate_version "$name" "$version"
if [[ -d ${VM_CONNECTORS_DATAPROC_DIR} ]]; then
local vm_connectors_dir=${VM_CONNECTORS_DATAPROC_DIR}
else
local vm_connectors_dir=${VM_CONNECTORS_HADOOP_DIR}
fi
# remove old connector
rm -f "${vm_connectors_dir}/${name}-connector-"*
# download new connector
# connector name could be in one of 2 formats:
# 1) gs://hadoop-lib/${name}/${name}-connector-hadoop2-${version}.jar
# 2) gs://hadoop-lib/${name}/${name}-connector-${version}-hadoop2.jar
local new_name_min_version=${NEW_NAME_MIN_CONNECTOR_VERSIONS[$name]}
if [[ "$(min_version "$new_name_min_version" "$version")" == "$new_name_min_version" ]]; then
local jar_name="${name}-connector-hadoop2-${version}.jar"
else
local jar_name="${name}-connector-${version}-hadoop2.jar"
fi
gsutil cp "gs://hadoop-lib/${name}/${jar_name}" "${vm_connectors_dir}/"
# Update or create version-less connector link
ln -s -f "${vm_connectors_dir}/${jar_name}" "${vm_connectors_dir}/${name}-connector.jar"
fi
}
if [[ -z $BIGQUERY_CONNECTOR_VERSION ]] && [[ -z $GCS_CONNECTOR_VERSION ]]; then
echo "ERROR: None of connector versions are specified"
exit 1
fi
# because connectors from 1.7 branch are not compatible with previous connectors
# versions (they have the same class relocation paths) we need to update both
# of them, even if only one connector version is set
if [[ -z $BIGQUERY_CONNECTOR_VERSION ]] && [[ $GCS_CONNECTOR_VERSION == "1.7.0" ]]; then
BIGQUERY_CONNECTOR_VERSION="0.11.0"
fi
if [[ $BIGQUERY_CONNECTOR_VERSION == "0.11.0" ]] && [[ -z $GCS_CONNECTOR_VERSION ]]; then
GCS_CONNECTOR_VERSION="1.7.0"
fi
update_connector "bigquery" "$BIGQUERY_CONNECTOR_VERSION"
update_connector "gcs" "$GCS_CONNECTOR_VERSION"
if [[ $UPDATED_GCS_CONNECTOR != true ]]; then
echo "GCS connector wasn't updated - no need to restart any services"
exit 0
fi
# Restart YARN NodeManager service on worker nodes so they can pick up updated GCS connector
if is_worker; then
systemctl kill -s KILL hadoop-yarn-nodemanager
fi
# Restarts Dataproc Agent after successful initialization
# WARNING: this function relies on undocumented and not officially supported Dataproc Agent
# "sentinel" files to determine successful Agent initialization and not guaranteed
# to work in the future. Use at your own risk!
restart_dataproc_agent() {
# Because Dataproc Agent should be restarted after initialization, we need to wait until
# it creates a sentinel file that signals initialization completion (success or failure)
while [[ ! -f /var/lib/google/dataproc/has_run_before ]]; do
sleep 1
done
# If Dataproc Agent didn't create a sentinel file that signals initialization
# failure then it means that initialization succeeded and it should be restarted
if [[ ! -f /var/lib/google/dataproc/has_failed_before ]]; then
systemctl kill -s KILL google-dataproc-agent
fi
}
export -f restart_dataproc_agent
# Schedule asynchronous Dataproc Agent restart so it will use updated connectors.
# It could not be restarted synchronously because Dataproc Agent should be restarted
# after its initialization, including init actions execution, has been completed.
bash -c restart_dataproc_agent &
disown
From what I understand, the connectors.sh just ensures that the correct version of the connectors is included in the cluster.
Also, without the connectors.sh, the installation goes through fine.
How do I debug/fix this?
TIA!
It seems you are using an old version of the init action script. Based on the documentation from the Dataproc GitHub repo, you can set the version of the Hadoop GCS connector without the script in the following manner:
gcloud dataproc clusters create ${CLUSTER_NAME} \
--region ${REGION} \
--metadata GCS_CONNECTOR_VERSION=2.2.2
For the BigQuery connectors (Spark or Hadoop MR), please use the up-to-date init action in the following manner:
--initialization-actions gs://${BUCKET}/connectors.sh \
--metadata bigquery-connector-version=1.2.0 \
--metadata spark-bigquery-connector-version=0.23.2
Notice that the same repo contains the updated pip-install init action as well.
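Combining the two, a sketch of a current-style create command (the bucket path and version numbers are carried over from the question and this answer, not verified):
gcloud dataproc clusters create "${CNAME}" \
    --region "${REGION}" \
    --metadata GCS_CONNECTOR_VERSION=2.2.2 \
    --initialization-actions "gs://${BUCKET}/connectors.sh" \
    --metadata bigquery-connector-version=1.2.0 \
    --metadata spark-bigquery-connector-version=0.23.2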

syntax error: invalid arithmetic operator (error token is "

My script refuses to work under cron, but works fine when executed manually.
#!/bin/bash
LOGFILE=/opt/xxx/scripts/rc.log
fUpMail() {
echo -e "Hello!\n\n$1 xx\n\nBest regards,\n\nCheck LLC 2k18" | mailx -s "$1 Rates were not imported" smone#smth.com
}
curDate=`date +%Y-%m-%d`
#postgres expression output being assigned to a variable
rateQ=`PGPASSWORD=xxxxxx psql -t -h xxx.xxx.228.134 -p 5433 -d axx2 -U axxx2bo << EOF
SELECT COUNT(id) FROM quote WHERE f_date = '$curDate'
EOF`
#same for demodb
rateDemo=`PGPASSWORD=xxx psql -t -h xx.xxx.42.14 -p 5432 -d axxxo -U acxxxxbo << EOF
SELECT COUNT(id) FROM quote WHERE f_date = '$curDate'
EOF`
#logging
printf "\n`date +%H:%M:%S` $curDate $rateQ $rateDemo\n" >> $LOGFILE
#check if rate value is not null
if [[ $(($rateQ)) != 0 ]] && [[ $(($rateDemo)) != 0 ]];
then
#posting a commentary into jira
curl -u xxx-support-bot:Rzq-xxx-xxx-gch -X POST --data '{"body": "'"$rateQ"' LIVE rates for '"$curDate"' were imported automatically'"\n"''"$rateDemo"' DEMO rates for '"$curDate"' were imported automatically"}' -H "Content-type: application/json" https://jira.in.xxx.com:443/rest/api/2/issue/xxxxxx-1024/comment >> $LOGFILE
else
#if rates were not imported
if [[ $(($rateQ)) == 0 ]];
then
echo "looks like LIVE rates for $curDate were not imported, please check manually!"
#sending a letter
fUpMail 'LIVE'
fi
if [[ $(($rateDemo)) == 0 ]];
then
echo "looks like DEMO rates for $curDate were not imported, please check manually!"
fUpMail 'DEMO'
fi
fi
cron sends the following message:
/opt/xxx/scripts/ratecheck.sh: line 25: Timing is on.
6543
Time: 4.555 ms: syntax error: invalid arithmetic operator (error token is ".
6543
Time: 4.555 ms")
line 25 is
if [[ $(($rateQ)) != 0 ]] && [[ $(($rateDemo)) != 0 ]];
Could someone please help explain what's wrong here?
You're getting more than a plain number back from psql, and this is interfering with the type conversion you're doing. I think you can remove the extra output like this:
rateQ=$(PGPASSWORD=xxxxxx psql -t -h xxx.xxx.228.134 -p 5433 -d axx2 -U axxx2bo -q -c "SELECT COUNT(id) FROM quote WHERE f_date = '$curDate'")
rateDemo=$(PGPASSWORD=xxx psql -t -h xx.xxx.42.14 -p 5432 -d axxxo -U acxxxxbo -q -c "SELECT COUNT(id) FROM quote WHERE f_date = '$curDate'")
Note the addition of the -q flag:
-q
--quiet
Specifies that psql should do its work quietly. By default, it prints welcome messages and various informational output. If this option is used, none of this happens. This is useful with the -c option. This is equivalent to setting the variable QUIET to on.
https://www.postgresql.org/docs/9.0/app-psql.html
I also replaced your old-fashioned backticks with $() and put the SQL query into an argument.
If that doesn’t silence the additional output, you may also need to edit ~/.psqlrc for the user running the cron job and ensure there is no \timing line.
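Alternatively, a minimal sketch (my own suggestion, not from the fix above) that skips ~/.psqlrc entirely with -X (--no-psqlrc) and defensively strips whitespace before the arithmetic:
rateQ=$(PGPASSWORD=xxxxxx psql -X -q -t -A -h xxx.xxx.228.134 -p 5433 -d axx2 -U axxx2bo \
    -c "SELECT COUNT(id) FROM quote WHERE f_date = '$curDate'")
rateQ=${rateQ//[[:space:]]/}  # remove any whitespace psql leaves around the count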

Trouble implementing chained variables in Makefile

I'm looking to use the previous variable to look up the next variable (these should all be set as envvars). I have this working to a smaller extent with just the profile, but when I try chaining variables together they fall flat.
The expected output is that make build has TF_VAR_* set as envvars and that they can be used as regular variables for the next variable input. I assume that there should be an export of TF_VAR_aws_profile, TF_VAR_aws_vpc and TF_VAR_aws_net into global scope; this seemed to work fine when I was JUST using TF_VAR_aws_profile, which I was able to confirm with echo $TF_VAR_aws_profile.
export TF_VAR_aws_profile TF_VAR_aws_vpc TF_VAR_aws_net
# An implicit guard target, used by other targets to ensure
# that environment variables are set before beginning tasks
assert-%:
@if [ "${${*}}" = "" ] ; then \
echo "Environment variable $* not set" ; \
exit 1 ; \
fi
vault:
## This is the problem section here....
@read -p "Enter AWS Profile Name: " profile ; \
vpc=$(shell aws --profile "$${profile}" --region us-west-2 ec2 describe-vpcs |jq -r '.[] | first | .VpcId') ; \
net=$(shell aws --profile "$${profile}" --region us-west-2 ec2 describe-subnets --filters "Name=vpc-id,Values=$${vpc}" |jq -r '.[] | first | .SubnetId') ; \
TF_VAR_aws_profile=$$profile TF_VAR_aws_vpc=$$vpc TF_VAR_aws_net=$$net make build && \
TF_VAR_aws_profile=$$profile make keypair && \
TF_VAR_aws_profile=$$profile make plan && \
TF_VAR_aws_profile=$$profile make apply
build: require-packer
aws-vault exec $(TF_VAR_aws_profile) --assume-role-ttl=60m -- \
"/usr/local/bin/packer" "build" \
"-var" "builder_subnet_id=$(TF_VAR_aws_net)" \
"-var" "builder_vpc_id=$(TF_VAR_aws_vpc)" \
"packer/vault.json"
require-packer: assert-TF_VAR_aws_vpc assert-TF_VAR_aws_net
# echo "[info] VPC: $(TF_VAR_aws_vpc)" ## Not set
# echo "[info] NET: $(TF_VAR_aws_net)" ## Not set
packer --version &> /dev/null
UPDATE
Looks like it might be something to do with line termination in the vault recipe; I added ; \ to continue the lines.
Got it, it was a combination of TWO things: one, running commands as $(shell ...) does not evaluate the shell variables set in the recipe, so I needed to revert to backticks instead (keep in mind, variables STILL need $$ in order to evaluate inside of Makefiles); two, the lines needed to be continued, so terminating only with ; is not effective, we need to use ; \.
Please note that I updated my variable names from above; they are now $$subnet and $$prvnet because I'm a spacing nazi...
export TF_VAR_aws_profile TF_VAR_aws_prvnet TF_VAR_aws_subnet
# An implicit guard target, used by other targets to ensure
# that environment variables are set before beginning tasks
assert-%:
@if [ "${${*}}" = "" ] ; then \
echo "Environment variable $* not set" ; \
exit 1 ; \
fi
vault:
@read -p "Enter AWS Profile Name: " profile ; \
prvnet=`aws --profile "$${profile}" --region us-west-2 ec2 describe-vpcs |jq -r '.[] | first | .VpcId'` ; \
subnet=`aws --profile "$${profile}" --region us-west-2 ec2 describe-subnets --filters "Name=vpc-id,Values=$${prvnet}" |jq -r '.[] | first | .SubnetId'` ; \
TF_VAR_aws_profile=$$profile TF_VAR_aws_prvnet=$$prvnet TF_VAR_aws_subnet=$$subnet make build && \
TF_VAR_aws_profile=$$profile make keypair && \
TF_VAR_aws_profile=$$profile make plan && \
TF_VAR_aws_profile=$$profile make apply
build: require-packer
aws-vault exec $(TF_VAR_aws_profile) --assume-role-ttl=60m -- \
"/usr/local/bin/packer" "build" \
"-var" "builder_subnet_id=$(TF_VAR_aws_net)" \
"-var" "builder_vpc_id=$(TF_VAR_aws_vpc)" \
"packer/vault.json"
require-packer: assert-TF_VAR_aws_prvnet assert-TF_VAR_aws_subnet
# echo "[info] VPC: $(TF_VAR_aws_prvnet)" ;
# echo "[info] NET: $(TF_VAR_aws_subnet)" ;
packer --version &> /dev/null
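To see why $(shell ...) falls flat inside a recipe, here is a minimal sketch (a hypothetical demo target, not part of the Makefile above): make expands $(shell ...) while expanding the recipe text, before the recipe's shell has run, so shell variables assigned earlier in the recipe are invisible to it.
# recipe lines must be indented with a tab
demo:
	profile=prod ; \
	echo "expanded by make before the recipe runs: '$(shell echo $$profile)'" ; \
	echo "run by the recipe shell: '`echo $$profile`'"
The first echo prints an empty string; the second prints prod.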

bash if greater than or equal to error

With the bash script below I am getting the error
line 16: [: : integer expression expected
but I can't figure out why. The error comes from if [ "$(log_date)" -le "$days_in_ms" ]; then. Can anyone help and explain, as Bash if statements confuse me?
#!/bin/bash
region="eu-west-1"
retention_days="28"
days_in_ms="$(date +%s%3N --date=-"$retention_days"+days)"
log_date () {
aws logs describe-log-streams --region $region --log-group-name "$groups" | jq -r '.logStreams[].lastEventTimestamp' > /dev/null 2>&1
}
log_name () {
aws logs describe-log-streams --region $region --log-group-name "$groups" | jq -r '.logStreams[].logStreamName' > /dev/null 2>&1
}
mapfile -t logGroups < <(aws logs describe-log-groups --region $region | jq -r '.logGroups[].logGroupName') > /dev/null 2>&1
for groups in "${logGroups[@]}"; do
if [ "$(log_date)" -le "$days_in_ms" ]; then
log_name
fi
done
What's Happening
Note the details of the error message:
[: : integer expression expected
See the blank before the second :? That's where the value that [ is trying to operate on would be, if there were one. It's blank, meaning something [ was told would be a number is not there at all.
To provide an example of how this works:
$ [ foo -ge 1 ]
-bash: [: foo: integer expression expected
Note that the foo is given in the same position where you have a blank. Thus, a blank value is being parsed as a number.
Why It's Happening
Your log_date function does not return any output, because of its >/dev/null. Consequently, your code tries to parse the empty string it returns as a number, and (correctly) complains that it's getting an empty string in a position where an integer is expected.
In the future, run set -x to enable logging of each command before it's run, or invoke your script with bash -x yourscript when debugging, to identify these issues.
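A minimal corrected sketch, assuming the goal is simply to let the function emit output (drop the redirect that swallows it; note also that describe-log-streams returns one timestamp per stream, so you probably want to reduce them to a single value, e.g. the newest):
log_date () {
    aws logs describe-log-streams --region "$region" --log-group-name "$groups" \
        | jq -r '[.logStreams[].lastEventTimestamp | select(. != null)] | max'  # newest event across streams
}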

Conditional in Docker argument in Bash

In my Bash file I have something like this:
docker run -d \
--network=host \
--name my-service \
--log-driver="$LOGGING" \
if [[ "$LOGGING" == 'splunk' ]]; then
echo "--log-opt tag={{.ImageName}}/{{.Name}}/{{.ID}} \\";
echo "--log-opt env=NODE_ENV \\";
fi
But ShellCheck complains with the following result. Any idea?
https://github.com/koalaman/shellcheck/wiki/SC1089
Build the argument list first (in an array), then call docker. This has the additional benefit of getting rid of the ugly line continuation characters.
docker_opts=(
    -d
    --network=host
    --name my-service
    --log-driver="$LOGGING"
)
if [[ $LOGGING == splunk ]]; then
    docker_opts+=(
        --log-opt "tag={{.ImageName}}/{{.Name}}/{{.ID}}"
        --log-opt "env=NODE_ENV"
    )
fi
docker run "${docker_opts[@]}"
The main idea, though, is to keep the conditional code as small as possible and keep it separate from the unconditional code.
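As a quick sanity check (a hypothetical debugging line, not part of the original answer), you can print the exact command that will run:
printf '%q ' docker run "${docker_opts[@]}"; echo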
I suggest using $(if ...; then ...; fi):
docker run -d \
--network=host \
--name my-service \
--log-driver="$LOGGING" \
$(if [[ "$LOGGING" == 'splunk' ]]; then
echo "--log-opt tag={{.ImageName}}/{{.Name}}/{{.ID}}"
echo "--log-opt env=NODE_ENV"
fi)
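Note that this variant deliberately leaves the $( ... ) substitution unquoted so that the emitted options are split into separate words; it only works while none of the injected values contain spaces, which is exactly the fragility the array-based approach above avoids.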
