Formatting JSON string for AWS IoT - bash

I'm getting "JSON format error" from the AWS console when I try to publish a temperature value from a variable; this works correctly:
mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m '{"state": {"desired":
{"temperature": 1 }}}' -q 1
I want to replace "1" with a variable, so I created a shell script around the mosquitto_pub command and I want to pass it an argument, calling "./publish.sh Temperature_Value", where Temperature_Value is an int. Trying this, I get errors from the AWS console:
DATA=${1}
mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m '{"state": {"desired":
{"temperature": $DATA }}}' -q 1
What am I doing wrong?
Thanks

Your escaping is wrong. This is the right escaping:
mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m "{\"state\": {\"desired\":
{\"temperature\": $1 }}}" -q 1
Remember that variables within single quotes ' are not interpolated.
Regards!
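As an aside, a more defensive way to build the payload (not from the answer above; assumes jq is installed) is to let jq generate the JSON, so an invalid argument fails loudly instead of producing a malformed document:
# Hypothetical variant: jq -n builds the payload and --argjson checks
# that $1 is valid JSON (an integer here) before anything is published.
payload=$(jq -n --argjson temp "$1" '{state: {desired: {temperature: $temp}}}')
mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m "$payload" -q 1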

Related

How to pass parameter expansions into qsub?

I'm trying to use qsub to submit multiple parallel jobs, but I'm running into trouble with passing parameter substitutions into qsub. I'm using the -V option, but it doesn't seem to recognize what ${variable} is. Here's some code I tried running:
qsub -cwd -V -pe shared 4 -l h_data=8G,h_rt=00:10:00,highp -N bt2align3 -b y "projPath="$SCRATCH/CUTnTag/data_kayaokur2020"; sample="K4m3_rep1"; cores=8;
bowtie2 --end-to-end --very-sensitive --no-mixed --no-discordant --phred33 -I 10 -X 700
-p ${cores}
-x ${projPath}/bowtie2_index/GRCh38_noalt_analysis/GRCh38_noalt_as
-1 ${projPath}/raw_fastq/${sample}_R1.fastq.gz
-2 ${projPath}/raw_fastq/${sample}_R2.fastq.gz
-S ${projPath}/alignment/sam/${sample}_bowtie2.sam &> ${projPath}/alignment/sam/bowtie2_summary/${sample}_bowtie2.txt"
I just get an error that says "Invalid null command."
Is qsub not able to recognize parameter expansions? Is there a different syntax I should be using? Thanks.
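One thing that stands out: the inner double quotes around "$SCRATCH/CUTnTag/data_kayaokur2020" close the outer quoted string, so the command qsub receives is broken apart at those boundaries. A sketch of one way to keep the quoting manageable (an untested guess, assuming SGE qsub and the paths from the question): expand the variables in the local shell, then pass a single double-quoted command string:
# Sketch, not a verified fix: define the variables locally and let the
# local shell expand them inside one double-quoted string.
projPath="$SCRATCH/CUTnTag/data_kayaokur2020"
sample="K4m3_rep1"
cores=8
qsub -cwd -V -pe shared 4 -l h_data=8G,h_rt=00:10:00,highp -N bt2align3 -b y \
  "bowtie2 --end-to-end --very-sensitive --no-mixed --no-discordant --phred33 \
   -I 10 -X 700 -p $cores \
   -x $projPath/bowtie2_index/GRCh38_noalt_analysis/GRCh38_noalt_as \
   -1 $projPath/raw_fastq/${sample}_R1.fastq.gz \
   -2 $projPath/raw_fastq/${sample}_R2.fastq.gz \
   -S $projPath/alignment/sam/${sample}_bowtie2.sam \
   &> $projPath/alignment/sam/bowtie2_summary/${sample}_bowtie2.txt"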

Suppress Errors in readarray Command in Bash

I'm parsing AWS policy documents and I'm trying to send the errors in this command to /dev/null so that the user doesn't see them.
This is my code:
readarray -t aws_policy_effects < \
    <(if aws iam get-policy-version --policy-arn "$aws_policy_arn" \
           --version-id "$aws_policy_version_id" --profile="$aws_key" 2> /dev/null | \
           jq -r '.PolicyVersion.Document.Statement[].Effect'
      then
          true
      else
          aws iam get-policy-version --policy-arn "$aws_policy_arn" \
              --version-id "$aws_policy_version_id" --profile="$aws_key" | \
              jq -r '.PolicyVersion.Document.Statement.Effect'
      fi)
I'm using an 'if' statement so that the right jq query gets used based on the aws policy that we're reading.
I will get this error:
jq: error (at <stdin>:22): Cannot index string with string "Effect"
Because this command always runs as the first condition of the if (when the Statement of the AWS policy document isn't a list):
++ jq -r '.PolicyVersion.Document.Statement[].Effect'
++ aws iam get-policy-version --policy-arn arn:aws:iam::123456789101:policy/IP_RESTRICTION --version-id v11 --profile=company-lab
Why isn't the error being buried by sending it to /dev/null? How can I get the error to not print out to the screen using this if statement?
I'm using an 'if' statement so that the right jq query gets used based on the aws policy that we're reading.
But jq is telling you that your jq query is not always valid. So one option would be to make your jq query more robust, e.g. by using postfix ? (e.g. .Effect?), or testing the type of the input, etc.
I didn't redirect jq's standard error, only the standard error of aws. Putting the 2> /dev/null at the end of the jq command works.
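Putting both fixes together, a sketch of what that can look like (same variable names as above; the type test lets one jq query handle both policy shapes):
# Sketch: silence jq as well as aws, and handle both a list-valued and a
# single-object Statement with one query.
readarray -t aws_policy_effects < <(
    aws iam get-policy-version --policy-arn "$aws_policy_arn" \
        --version-id "$aws_policy_version_id" --profile="$aws_key" 2> /dev/null |
    jq -r '.PolicyVersion.Document.Statement
           | if type == "array" then .[] else . end
           | .Effect' 2> /dev/null
)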

If proxy is down, get a new one

I'm writing my first bash script:
LANG="en_US.UTF8" ; export LANG
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt"
This script also uses curl, but if the proxy is down it gives me this error:
failed to connect PROXY:PORT
How can I make the bash script run again, so it can get another proxy address from proxy.txt?
Thanks in advance
Run it in a loop until the curl succeeds, for example:
export LANG="en_US.UTF8"
while true; do
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
Notice the && break at the end of the curl command.
That is, if the curl succeeds, break out of the infinite loop.
If you have multiple curl commands and you need all of them to succeed,
then chain them all together with &&, and add the break after the last one:
curl url1 && \
curl url2 && \
break
Lastly, as @Inian pointed out,
you could use the --proxy flag to pass a proxy URL to curl without the extra step of setting https_proxy, for example:
curl --proxy "$(shuf -n 1 proxy.txt)" --data "mydata${RUID}" --user-agent "myuseragent"
Finally, note that due to the randomness, a randomly selected proxy may come up more than once before you find one that works.
To avoid that, you could iterate over the shuffled proxies instead of using an infinite loop:
export LANG="en_US.UTF8"
shuf proxy.txt | while read -r proxy; do
ruid=$(php -f randuid.php)
curl --proxy "$proxy" --data "mydata${ruid}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
I also lowercased your user-defined variables,
as capitalization is not recommended for those.
I know I accepted @janos's answer, but since I can't edit his, I'm going to add this:
response=$(curl --proxy "$proxy" --silent --write-out "\n%{http_code}\n" https://myurl.com/url)
status_code=$(echo "$response" | sed -n '$p')
html=$(echo "$response" | sed '$d')
case "$status_code" in
    200) echo 'Working!'
         ;;
    *)   echo 'Not working, trying again!'
         exec "$0" "$@"
         ;;
esac
This will run my script again if it gets a 503 status code, which is what I wanted :)
And with @janos's code it will run again if the proxy is not working.
Thank you everyone, I achieved what I wanted.
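For reference, the two ideas combine naturally into one loop; a sketch using the same URL and filenames as above (untested):
# Sketch: walk the shuffled proxy list and stop at the first proxy that
# returns HTTP 200, writing the response body to ticket.txt on the way.
export LANG="en_US.UTF8"
shuf proxy.txt | while read -r proxy; do
    ruid=$(php -f randuid.php)
    status_code=$(curl --proxy "$proxy" --silent --output ticket.txt \
                       --write-out "%{http_code}" \
                       --data "mydata${ruid}" --user-agent "myuseragent" \
                       https://myurl.com/url)
    [ "$status_code" = "200" ] && break
done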

Shell - Command ignoring extra parameters

I've written a shell script that gets my IP address via curl from http://checkip.amazonaws.com
What I'm attempting to do is get a list of all my security groups and add that IP address to each security group via the AWS CLI.
The script I have so far is:
#!/bin/bash
# Get IP Address
IP_ADDR="`curl http://checkip.amazonaws.com`"
IP_ADDR="$IP_ADDR/32"
cat /dev/null > /tmp/ec2.info
tmpFile="/tmp/ec2.info"
ec2Info=`ec2-describe-group --region eu-west-1 > $tmpFile`
sec_groups=`cat $tmpFile | grep GROUP | cut -f4`
echo "You are using IP Address: $IP_ADDR"
echo ""
for security_group in $sec_groups
do
echo ""
echo $security_group
echo ""
ec2-authorize --region eu-west-1 $security_group –p 22 -s $IP_ADDR
done
The script works fine getting the IP address and a list of my security groups. However, I get an issue when the script gets to the ec2-authorize line.
I get an error message:
WARNING: Ignoring extra parameter(s): [ ?p, 22 ]
Required option '-p, --port-range PORT-RANGE' missing (-h for usage)
As you can see from the script, I've added the -p parameter specifying the port. It seems to be ignoring everything after the $security_group variable.
Any ideas?
Instead of a minus sign, you typed an en dash (Unicode U+2013). So just replace –p with -p.
I noticed in your answer, you fixed this without realizing it. That's why it worked, not because you put the args into a var.
And this is why there was a question mark in the error message: [ ?p, 22 ]
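A quick way to catch stray Unicode punctuation like this (a side note, assuming GNU grep; -P is not available in every grep) is to search the script for any non-ASCII bytes:
# Prints each line of the script (hypothetical filename) that contains a
# non-ASCII character, such as an en dash.
grep -nP '[^\x00-\x7F]' my-script.sh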

Why is this bash/CURL call to REST services giving inconsistent results with parameters?

I have written a smoke-testing script in Bash that uses curl to test RESTful web services we're working on. The script reads a file and interprets each line as a URL suffix and parameters for a curl REST call.
Unfortunately, the script gives unexpected results when I adapted it to run HTTP POST calls as well as GET calls. It does not give the same results running the command on its own vs. in the script:
The BASH Script:
IFS=$'\n' #Don't split an input URL line at spaces
RESTHOST='hostNameAndPath' #Can't give this out
URL="/activation/v2/activationInfo --header 'Content-Type:Application/xml'"
URL2="/activation/v2/activationInfo"
OUTPUT=`curl -sL -m 30 -w "%{http_code}" -o /dev/null $RESTHOST$URL -d @"./activation_post.txt" -X POST`
echo 'out:' $OUTPUT
OUTPUT2=`curl -sL -m 30 -w "%{http_code}" -o /dev/null $RESTHOST$URL2 --header 'Content-Type:Application/xml' -d @'./activation_post.txt' -X POST`
echo 'out2:' $OUTPUT2
Results Out:
out: 505
out2: 200
So, the first call fails (HTTP return code 505, HTTP Version Not Supported), and the second call succeeds (return code "OK").
Why does the first call fail, and how do I fix it? I've verified they should execute the same command (evaluating in echo). I am sure there is something basic I'm missing, as I am just NOW learning Bash scripting.
I think I have found the problem! It is caused by IFS=$'\n'! Because of this, variable expansion does not work as expected: the arguments specified in the URL string are not split apart.
As a result the SERVER_PROTOCOL variable on the server side will be set to '--header Content-Type:Application/xml HTTP/1.1' instead of "HTTP/1.1", and the CONTENT_TYPE will be 'application/x-www-form-urlencoded' instead of 'Application/xml'.
To show the root of the problem in detail:
VAR="Solaris East"
printf "+%s+ " $VAR
echo "==="
IFS=$'\n'
printf "+%s+ " $VAR
Output:
+Solaris+ +East+ ===
+Solaris East+
So the $VAR expansion does not work as expected because of IFS=$'\n'!
Solution: Do not use IFS=$'\n', and replace spaces with %20 in the URL!
URL=${URL2// /%20}" --header Content-Type:Application/xml"
In this case your first curl call will work properly!
If you still use IFS=$'\n' and give the --header option on the command line, it will not work properly if the URL contains a space, because the server will fail to process it (I tested on Apache)!
You also cannot use HEADER="--header Content-Type:Application/xml", as expanding $HEADER will result in one(!) argument for curl, namely --header Content-Type:Application/xml, instead of splitting it into two.
So I suggest replacing spaces in the URL with %20 anyway!
The single quotes surrounding Content-Type:Application/xml, because they appear inside the value of URL, are treated as literal characters and not removed when $URL is expanded in that call to curl. As a result, you are passing an invalid HTTP header. Just use:
URL="/activation/v2/activationInfo --header Content-Type:Application/xml"
OUTPUT=`curl -sL -m 30 -w "%{http_code}" -o /dev/null $RESTHOST$URL -d @"./activation_post.txt" -X POST`
However, it's not a great idea to rely on word-splitting like this to combine two separate pieces of the call to curl in a single variable. Try something like this instead:
URLPATH="activation/v2/activationInfo"
HEADERS=("--header" "Content-Type:Application/xml")
OUTPUT=$( curl -sL -m 30 -w "%{http_code}" -o /dev/null "$RESTHOST/$URLPATH" "${HEADERS[@]}" -d @'./activation_post.txt' -X POST )
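Keeping the header in an array means each element reaches curl as its own argument, so no word-splitting, IFS changes, or %20 workarounds are needed.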
