Incorrect syntax in bash

I am trying to pass the GitHub branch-creation API call as a string into a function to get its status code, but however I try it, it does not work as I expect.
The original command is:
curl -s -X POST -u [user]:[token] -d '{"ref": "refs/heads/feature/bar", "sha": "'$SHA'"}' https://api.github.com/repos/$user/$repo/git/refs
What I am trying to do is take part of this command and pass it into a function as a string, as follows:
new_branch_creating_url="-X POST -u $username:$password -d '{"'ref'": "'refs/heads/'$new_branch_to_be_created''", "'sha'": ""$old_sha_value""}' https://api.github.com/repos/$username/$repository_name/git/refs"
My intention is to get the status code for that request, and for that my function is:
#get status code
get_status_code(){
    test_url=$1
    status_code=$(curl -s -I $test_url | awk '/HTTP/{print $2}')
    #echo "status code :$status_code"
    if [ $status_code == 404 ]
    then
        echo "wrong credentials passed..."
        exit 1
    else
        return $status_code
    fi
}
and while debugging the code, I am getting like
++ curl -s -I -X POST -u myusername:tokenid -d ''\''{ref:' refs/heads/branch2, sha: '1b2hudksjahhisabkhsd6asdihds8dsajbsualhcn}'\''' https://api.github.com/repos/myusername/myrepo/git/refs
++ awk '/HTTP/{print $2}'
My other doubt is why I sometimes receive a wrong status code from the function above.
while debugging my code:
status_code=
+ '[' == 404 ']'
git_branches.sh: line 154: [: ==: unary operator expected
+ return
+ new_branch_status_code=2
+ echo 'new branch status code ... 2'
new branch status code ... 2
+ '[' 2 == 200 ']'
Actually there is nothing in status_code coming out of the function, yet I received a status code of 2.
And it is not only this once: I have also received 153 instead of 409. Why is that?
I know this may not be the right place to ask, but I have no choice, and it would be very helpful if someone could help me at this early stage of learning shell scripting.
Thank you.

Instead of curl -s -I, use:
curl -I -s -o /dev/null -w "%{http_code}"
which will print the http_code directly.
You don't need awk.
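That also explains the odd numbers you saw: a bash function's return value is an exit status, which can only hold 0-255, so return 409 wraps around to 409 % 256 = 153. And when status_code is empty, the [ test fails with status 2, the else branch runs, and the bare return reports that same 2. Echo the code and capture it with command substitution instead of returning it. A minimal sketch, reusing your variable names and keeping the curl options in an array so the shell passes them through without re-splitting your quoted JSON:

#get status code by printing it instead of return-ing it
get_status_code(){
    local test_url=$1
    shift
    # -o /dev/null discards the body; -w prints only the numeric status code
    curl -s -o /dev/null -w "%{http_code}" "$@" "$test_url"
}

curl_opts=(-X POST -u "$username:$password"
    -d '{"ref": "refs/heads/'"$new_branch_to_be_created"'", "sha": "'"$old_sha_value"'"}')
new_branch_status_code=$(get_status_code "https://api.github.com/repos/$username/$repository_name/git/refs" "${curl_opts[@]}")

if [ "$new_branch_status_code" = "404" ]; then
    echo "wrong credentials passed..."
    exit 1
fi
echo "new branch status code ... $new_branch_status_code"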

Related

Jenkins get status of last build with a specific parameter

I have a Jenkins freestyle job (yeah I know..) that sometimes needs to wait for another job with the same parameter to finish if it's running (or not run at all if it failed).
Using curl and the Jenkins API, is it possible to query a certain job and get the status of the last build where certainParam=certainValue?
(The reason I'm asking how to do that with curl is that it doesn't seem to be possible in a freestyle job, and the job can't be migrated to pipelines yet. curl seems like the easiest way.)
Thanks ahead!
As far as I know, there's no direct way to achieve it.
I wrote a script that walks the builds from the most recent backwards, checking the values of each one until it matches the information.
It prints every build URL and the result of the query.
Dependencies
jq - install "jq.x86_64", Command-line JSON processor
Script
#!/bin/bash
user="admin"
pwd="11966e4dd8c33a730abf88e98fb952ebc3"
builds=$(curl -s --user "$user:$pwd" localhost:8080/job/test/api/json)
urls=$(echo "$builds" | jq '.builds[] | .url')
while read -r url
do
    # strip the surrounding quotes jq leaves on the string
    url=$(echo "$url" | sed -nr 's/"([^|]*)"/\1/p')
    # get the build log
    build=$(curl -s --user "$user:$pwd" "${url}api/json")
    # transform the log into a simple structure
    build=$(echo "$build" | jq '.result as $result | .actions[].parameters[]? | select(.name == "certainParam") | {(.name): .value, "result": $result}')
    # check if the parameter value and the build result are the expected ones
    result=$(echo "$build" | jq 'if .certainParam == "certainValue" and .result == "SUCCESS" then true else false end')
    # print the result of each build
    echo "url=${url}api/json;result=$result"
    if [[ $result == true ]]
    then
        break
    fi
done <<< "$urls"
Result
sh jenkins.sh
url=http://localhost:8080/job/test/12/api/json;result=false
url=http://localhost:8080/job/test/11/api/json;result=true
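As a possible refinement (assuming your Jenkins supports the tree query parameter of the JSON API), the first call can be trimmed down to just the build URLs, which shrinks the payload on jobs with long histories:

builds=$(curl -s --user "$user:$pwd" "localhost:8080/job/test/api/json?tree=builds[url]")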

How to pass dynamic parameters in curl request

I am passing dynamic values to a testing function that executes a curl request. There is an issue with $PARAMETERS.
When I execute the following function, I get the error below.
Error:-
curl: option -F: requires parameter
curl: try 'curl --help' or 'curl --manual' for more information
Function:-
testing() {
    local URL=$1
    local HTTP_TYPE=$2
    local PARAMETERS=$3
    # store the whole response with the status at the end
    HTTP_RESPONSE=$(curl -w "HTTPSTATUS:%{http_code}" -X $HTTP_TYPE $URL $PARAMETERS)
    echo $HTTP_RESPONSE
}
URL='https://google.com'
HTTP_TYPE='POST'
PARAMETERS="-F 'name=test' -F 'query=testing'"
result=$(testing $URL $HTTP_TYPE $PARAMETERS)
FYI, the above is a sample function and I am using a shell script. Kindly tell me how to solve this.
Since you are passing multiple command arguments in your third argument, the shell performs word splitting because the variable is used unquoted inside your function.
You should use a shell array to store the PARAMETERS argument instead.
Have your function as:
testing() {
    local url="$1"
    local httpType="$2"
    shift 2 # shift 2 times to discard first 2 arguments
    local params=("$@") # store the rest of them in an array
    response=$(curl -s -w "HTTPSTATUS:%{http_code}" -X "$httpType" "$url" "${params[@]}")
    echo "$response"
}
and call it as:
url='https://google.com'
httpType='POST'
params=(-F 'name=test' -F 'query=testing')
result=$(testing "$url" "$httpType" "${params[@]}")
Additionally you should avoid using all-caps variable names in your shell script to avoid clashes with env variables.
In case you're not using bash, you can also use:
params="-F 'name=test' -F 'query=testing'"
result=$(testing "$url" "$httpType" "$params")
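To see why the single-string form breaks, print the arguments the shell actually produces after word splitting; showargs here is a made-up helper for illustration:

showargs() { printf '<%s>\n' "$@"; }

params="-F 'name=test' -F 'query=testing'"
showargs $params                # quotes stay literal: <-F> <'name=test'> <-F> <'query=testing'>

paramsArr=(-F 'name=test' -F 'query=testing')
showargs "${paramsArr[@]}"      # clean arguments: <-F> <name=test> <-F> <query=testing>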

Curl to return http status code along with the response

I use curl to get the HTTP headers in order to find the HTTP status code, and also to return the response. I get the HTTP headers with the command
curl -I http://localhost
To get the response, I use the command
curl http://localhost
As soon as I use the -I flag, I get only the headers and the response is no longer there. Is there a way to get both the HTTP response and the headers/HTTP status code in one command?
I was able to get a solution by looking at the curl doc, which specifies using - as the output file to send the output to stdout.
curl -o - -I http://localhost
To get just the HTTP return code, I could do
curl -o /dev/null -s -w "%{http_code}\n" http://localhost
the verbose mode will tell you everything
curl -v http://localhost
I found this question because I wanted independent access to BOTH the HTTP status code and the content in order to add some error handling for the user.
Curl allows you to customize output: you can print the HTTP status code to stdout and write the contents to another file.
curl -s -o response.txt -w "%{http_code}" http://example.com
This allows you to check the return code and then decide if the response is worth printing, processing, logging, etc.
http_response=$(curl -s -o response.txt -w "%{http_code}" http://example.com)
if [ "$http_response" != "200" ]; then
    echo "Server returned an error: $http_response" >&2    # handle the error here
else
    echo "Server returned:"
    cat response.txt
fi
The %{http_code} is a variable substituted by curl. You can do a lot more, or send code to stderr, etc. See curl manual and the --write-out option.
-w, --write-out
Make curl display information on stdout after a completed
transfer. The format is a string that may contain plain
text mixed with any number of variables. The format can be
specified as a literal "string", or you can have curl read
the format from a file with "#filename" and to tell curl
to read the format from stdin you write "#-".
The variables present in the output format will be
substituted by the value or text that curl thinks fit, as
described below. All variables are specified as
%{variable_name} and to output a normal % you just write
them as %%. You can output a newline by using \n, a
carriage return with \r and a tab space with \t.
The output will be written to standard output, but this
can be switched to standard error by using %{stderr}.
https://man7.org/linux/man-pages/man1/curl.1.html
I use this command to print the status code without any other output. Additionally, it will only perform a HEAD request and follow redirects (respectively -I and -L). Note that -o needs an explicit argument, /dev/null here, so that it does not swallow -I as an output filename:
curl -o /dev/null -I -L -s -w "%{http_code}" http://localhost
This makes it very easy to check the status code in a health script:
sh -c '[ $(curl -o /dev/null -I -L -s -w "%{http_code}" http://localhost) -eq 200 ]'
The -i option is the one that you want:
curl -i http://localhost
-i, --include Include protocol headers in the output (H/F)
Alternatively you can use the verbose option:
curl -v http://localhost
-v, --verbose Make the operation more talkative
I have used this:
request_cmd="$(curl -i -o - --silent -X GET --header 'Accept: application/json' --header 'Authorization: _your_auth_code==' 'https://example.com')"
To get the HTTP status:
http_status=$(echo "$request_cmd" | grep HTTP | awk '{print $2}')
echo $http_status
To get the response body I've used this (note that it relies on the body lines containing the word "body"):
output_response=$(echo "$request_cmd" | grep body)
echo $output_response
This command
curl http://localhost -w ", %{http_code}"
will get the comma-separated body and status; you can split them apart, as sketched below.
You can change the delimiter as you like.
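One way to do that splitting with plain parameter expansion (a sketch; it assumes the ", " delimiter does not also appear at the very end of the body):

out=$(curl -s http://localhost -w ", %{http_code}")
code=${out##*, }    # everything after the last ", "
body=${out%, *}     # everything before the last ", "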
This is a way to retrieve the body AND the status code, and format them into proper JSON, or whatever format works for you. Some may argue it's an incorrect use of the write-out format option, but this works for me when I need both the body and the status code in my scripts to check the status code and relay back the responses from the server.
curl -X GET -w "%{stderr}{\"status\": \"%{http_code}\", \"body\":\"%{stdout}\"}" -s -o - "https://github.com" 2>&1
Run the code above and you should get back JSON in this format:
{
"status" : <status code>,
"body" : <body of response>
}
With the -w write-out option, since stderr is printed first, you can format your output with the http_code variable, place the body of the response in a value (body), and close the JSON using the stdout variable. Then redirect the stderr output to stdout and you'll be able to combine both http_code and the response body into neat output.
To get the response code along with the response:
$ curl -kv https://www.example.org
To get just the response code:
$ curl -kv https://www.example.org 2>&1 | grep -i 'HTTP/1.1 ' | awk '{print $3}'| sed -e 's/^[ \t]*//'
2>&1: the verbose output goes to stderr, so merge it into stdout for parsing
grep: filter the response-code line from the output
awk: extract the response code from that line
sed: remove any leading white space
For programmatic usage, I use the following:
curlwithcode() {
    code=0
    # Run curl in a separate command, capturing output of -w "%{http_code}" into statuscode
    # and sending the content to a file with -o >(cat >/tmp/curl_body)
    statuscode=$(curl -w "%{http_code}" \
        -o >(cat >/tmp/curl_body) \
        "$@"
    ) || code="$?"
    body="$(cat /tmp/curl_body)"
    echo "statuscode : $statuscode"
    echo "exitcode : $code"
    echo "body : $body"
}
curlwithcode https://api.github.com/users/tj
It shows the following output:
statuscode : 200
exitcode : 0
body : {
"login": "tj",
"id": 25254,
...
}
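If you run this concurrently, the fixed /tmp/curl_body path can be clobbered by another invocation; a small variant sketch of the same function, using mktemp for a unique file per call:

curlwithcode() {
    local code=0 tmpbody
    tmpbody=$(mktemp)          # unique temp file per invocation
    statuscode=$(curl -w "%{http_code}" \
        -o "$tmpbody" \
        "$@"
    ) || code="$?"
    body="$(cat "$tmpbody")"
    rm -f "$tmpbody"
    echo "statuscode : $statuscode"
    echo "exitcode : $code"
    echo "body : $body"
}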
My way to achieve this:
To get both (header and body), I usually perform a curl -D- <url> as in:
$ curl -D- http://localhost:1234/foo
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 20:59:21 GMT
{"data":["out.csv"]}
This will dump headers (-D) to stdout (-) (Look for --dump-header in man curl).
IMHO also very handy in this context:
I often use jq to get that JSON data (e.g. from some REST APIs) formatted. But as jq doesn't expect an HTTP header, the trick is to print the headers to stderr using -D/dev/stderr. Note that this time we also use -sS (--silent, --show-error) to suppress the progress meter (because we write to a pipe).
$ curl -sSD/dev/stderr http://localhost:1231/foo | jq .
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 21:08:22 GMT
{
"data": [
"out.csv"
]
}
I guess this can also be handy if you'd like to print the headers (for quick inspection) to the console but redirect the body to a file (e.g. when it's some kind of binary, to not mess up your terminal):
$ curl -sSD/dev/stderr http://localhost:1231 > /dev/null
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 21:20:02 GMT
Be aware: this is NOT the same as curl -I <url>! -I will perform a HEAD request and not a GET request (look for --head in man curl). Yes: for most HTTP servers this will yield the same result, but I know a lot of business applications which don't implement the HEAD request at all ;-P
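A quick way to see the difference, as a sketch: both commands below print only the headers, but the first sends a GET (fetching and discarding the body) while the second sends a HEAD:

curl -s -D- -o /dev/null http://localhost:1234/foo   # GET; body fetched, then discarded
curl -s -I http://localhost:1234/foo                 # HEAD; no body requested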
A one-liner, just to get the status-code would be:
curl -s -i https://www.google.com | head -1
Changing it to head -2 will give the time as well.
If you want a while-true loop over it, it would be:
URL="https://www.google.com"
while true; do
echo "------"
curl -s -i $URL | head -2
sleep 2;
done
This produces the following, until you press Ctrl+C.
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:38 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:41 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:43 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:45 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:47 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:49 GMT
A clean one to read, using pipes:
function cg(){
    curl -I --silent www.google.com | head -n 1 | awk -F' ' '{print $2}'
}
cg
# 200
Feel free to use the script from my dotfiles.
Explanation
--silent: don't show the progress bar when output goes to a pipe
head -n 1: only show the first line
-F' ': split the text into columns on the space separator
'{print $2}': print the second column
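If you prefer, the head and awk stages can be collapsed into a single awk call with the same output:

curl -I --silent www.google.com | awk 'NR == 1 {print $2}'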
Some good answers here, but like the OP I found myself wanting, in a scripting context, all of:
any response body returned by the server, regardless of the response status-code: some services will send error details e.g. in JSON form when the response is an error
the HTTP response code
the curl exit status code
This is difficult to achieve with a single curl invocation and I was looking for a complete solution/example, since the required processing is complex.
I combined some other bash recipes on multiplexing stdout/stderr/return-code with some of the ideas here to arrive at the following example:
{
IFS= read -rd '' out
IFS= read -rd '' http_code
IFS= read -rd '' status
} < <({ out=$(curl -sSL -o /dev/stderr -w "%{http_code}" 'https://httpbin.org/json'); } 2>&1; printf '\0%s' "$out" "$?")
Then the results can be found in variables:
echo out $out
echo http_code $http_code
echo status $status
Results:
out { "slideshow": { "author": "Yours Truly", "date": "date of publication", "slides": [ { "title": "Wake up to WonderWidgets!", "type": "all" }, { "items": [ "Why <em>WonderWidgets</em> are great", "Who <em>buys</em> WonderWidgets" ], "title": "Overview", "type": "all" } ], "title": "Sample Slide Show" } }
http_code 200
status 0
The script works by multiplexing the output, HTTP response code and curl exit status separated by null characters, then reading these back into the current shell/script. It can be tested with curl requests that would return a >=400 response code but also produce output.
Note that without the -f flag, curl won't return non-zero exit codes when the server returns an abnormal HTTP response code (i.e. >=400), and with the -f flag, server output is suppressed on error, making use of this flag for error detection and processing unattractive.
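Putting the three result variables to work, a minimal follow-up branch might look like this:

if [[ $status -ne 0 ]]; then
    echo "curl itself failed with exit code $status" >&2
elif [[ $http_code -ge 400 ]]; then
    echo "server returned HTTP $http_code: $out" >&2
else
    echo "$out"
fi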
Credits for the generic read with IFS processing go to this answer: https://unix.stackexchange.com/a/430182/45479.
In my experience, we usually use curl this way:
curl -f http://localhost:1234/foo || exit 1
curl: (22) The requested URL returned error: 400 Bad Request
This way the command chain fails when curl fails, and it also shows the status code.
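On curl 7.76.0 and newer there is also --fail-with-body, which fails the exit status like -f but still prints the server's error body, softening the trade-off mentioned elsewhere on this page:

curl --fail-with-body http://localhost:1234/foo || exit 1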
Append a line "http_code:200" at the end, and then grep for the keyword "http_code:" and extract the response code.
result=$(curl -w "\nhttp_code:%{http_code}" http://localhost)
echo "result: ${result}" #the curl result with "http_code:" at the end
http_code=$(echo "${result}" | grep 'http_code:' | sed 's/http_code://g')
echo "HTTP_CODE: ${http_code}" #the http response code
In this case, you can still use the non-silent/verbose mode to get more information about the request, such as the curl response body.
Wow, so many answers; the cURL devs definitely left this to us as a homework exercise :) OK, here is my take: a script that makes cURL work the way it arguably should, i.e.:
show the output as cURL would;
exit with a non-zero code if the HTTP response code is not in the 2XX range.
Save it as curl-wrapper.sh:
#!/bin/bash
output=$(curl -w "\n%{http_code}" "$@")
res=$?
if [[ "$res" != "0" ]]; then
    echo -e "$output"
    exit $res
fi
if [[ $output =~ [^0-9]([0-9]+)$ ]]; then
    httpCode=${BASH_REMATCH[1]}
    body=${output:0:-${#httpCode}}
    echo -e "$body"
    if (($httpCode < 200 || $httpCode >= 300)); then
        # Remove this if you want to have pure output even in
        # case of failure:
        echo
        echo "Failure HTTP response code: ${httpCode}"
        exit 1
    fi
else
    echo -e "$output"
    echo
    echo "Cannot get the HTTP return code"
    exit 1
fi
So then it's just business as usual, but instead of curl do ./curl-wrapper.sh:
So when the result falls in 200-299 range:
./curl-wrapper.sh www.google.com
# ...the same output as pure curl would return...
echo $?
# 0
And when the result is outside the 200-299 range:
./curl-wrapper.sh www.google.com/no-such-page
# ...the same output as pure curl would return - plus the line
# below with the failed HTTP code, this line can be removed if needed:
#
# Failure HTTP response code: 404
echo $?
# 1
Just do not pass the -w/--write-out argument, since that is added inside the script.
I used the following way to get both the return code and the response body in the console.
NOTE: tee appends the output to a file as well as printing it to the console, which solved my purpose.
Sample curl call for reference:
curl -s -i -k --location --request POST "${HOST}:${PORT}/api/14/project/${PROJECT_NAME}/jobs/import" \
    --header 'Content-Type: application/yaml' \
    --header "X-Rundeck-Auth-Token: ${JOB_IMPORT_TOKEN}" \
    --data "$(cat "$yaml_file")" &>/dev/stdout | tee -a "$response_file"
return_code=$(cat "$response_file" | head -3 | tail -1 | awk '{print $2}')
if [ "$return_code" != "200" ]; then
    echo -e "\nJob import API call failed with rc: $return_code, please rerun or change the pipeline script."
    exit $return_code
else
    echo "Job import API call completed successfully with rc: $return_code"
fi
Hope this helps a few people.
To capture only response:
curl --location --request GET "http://localhost:8000"
To capture the response and its status code:
curl --location --request GET "http://localhost:8000" -w "%{http_code}"
To capture the response in a file:
curl --location --request GET "http://localhost:8000" -s -o "response.txt"
while : ; do curl -sL -w "%{http_code} %{url_effective}\\n" http://host -o /dev/null; done
This works for me, though note that it is PowerShell (where curl is an alias for Invoke-WebRequest), not bash:
curl -Uri 'google.com' | Select-Object StatusCode

How to capture actual response code and response body with HTTPie in bash script?

I have a bash script to call several APIs using HTTPie. I want to capture both the response body AND the HTTP status code.
Here is the best I have managed so far:
rspBody=$( http $URL --check-status --ignore-stdin )
statusCode=$?
Command substitution lets me get the body, and the "--check-status" flag gives me a simplified code (such as 0, 3, 4, etc) corresponding to the code family.
The problem is I need to distinguish between say a 401 and a 404 code, but I only get 4.
Is there a way to get the actual status code without having to do a verbose dump into a file and parse for stuff?
[edit]
This is my workaround in case it helps anyone, but I'd still like a better idea if you have one:
TMP=$(mktemp)
FLUSH_RSP=$( http POST ${CACHE_URL} --check-status --ignore-stdin 2> "$TMP")
STAT_FAMILY=$?
flush_err=$(cat "$TMP" | awk '{
where = match($0, /[0-9]+/)
if (where) {
print substr($0, RSTART, RLENGTH);
}
}' -)
rm "$TMP"
STDERR contains a (usually) 3-line message with the HTTP code in it, so I dump that to a temp file and am still able to capture the response body (from STDOUT) in a variable.
I then parse that temp file looking for a number, but this seems fragile to me.
There's no ready-made solution as such for this but it's achievable with a bit of scripting. For example:
STATUS=$(http -hdo ./body httpbin.org/get 2>&1 | grep HTTP/ | cut -d ' ' -f 2)
BODY=$(cat ./body)
rm ./body
echo $STATUS
# 200
echo $BODY
# { "args": {}, "headers": { "Accept": "*/*", "Accept-Encoding": "identity", "Host": "httpbin.org", "User-Agent": "HTTPie/1.0.0-dev" }, "origin": "84.242.118.58", "url": "http://httpbin.org/get" }
Explanation of the command:
http --headers \ # Print out the response headers (they would otherwise be suppressed for piped output)
--download \ # Enable download mode
--output=./body \ # Save response body to this file
httpbin.org/get 2>&1 \ # Merge STDERR into STDOUT (with --download, console output otherwise goes to STDERR)
| grep HTTP/ \ # Find the Status-Line
| cut -d ' ' -f 2 # Get the status code
https://gist.github.com/jakubroztocil/ad06f159c5afbe278b5fcfa8bf3b5313
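If you need this in more than one place, the same pattern can be wrapped into a function; a sketch, with mktemp standing in for the hard-coded ./body file:

http_with_status() {
    local tmp status
    tmp=$(mktemp)
    # --download sends console output to STDERR; merge it and grab the status line
    status=$(http --headers --download --output="$tmp" "$@" 2>&1 | grep HTTP/ | cut -d ' ' -f 2)
    cat "$tmp"          # response body on STDOUT
    rm -f "$tmp"
    echo "$status" >&2  # status code on STDERR, so the body stays clean
}

BODY=$(http_with_status httpbin.org/get)   # body captured; the status code prints to STDERR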

Bash with while, for and if works, until I add a second value

I have a script where I extract values from curl responses using jq. This script works well whenever the if-statement receives one true value; however, it breaks as soon as a second media package is found and has to be processed.
The code:
#!/bin/bash
source mh_auth.sh
rm -f times.txt
touch times.txt
# Read lines from a file with mediapackage IDs that are on schedule.
while read LINE; do
    curl --silent --digest -u $mh_username:$mh_password -H "X-Requested-Auth: Digest" -H "X-Opencast-Matterhorn-Authorization: true" "$mh_server/workflow/instances.json?mp=$LINE" > $LINE-curl.txt
    # Format the file to make it more readable if you need to troubleshoot
    /usr/bin/python -m json.tool $LINE-curl.txt > $LINE-curl-final.txt
    # Test whether this mediapackage has been published yet, and if it has, extract the necessary values for calculation
    if grep -q -e 'Cleaning up"' $LINE-curl-final.txt; then
        echo "Media Package found"
        workflows=$( jq '.workflows.workflow[]' < $LINE-curl-final.txt )
        for i in "${workflows}"
        do
            echo "Getting the end_time variable"
            end_time=`echo ${i} | jq '.operations.operation[] | select(.description == "Cleaning up") | .completed'`
            echo "Done getting end time variable"
            echo "Getting ingest time variable"
            ingest_time=`echo ${i} | jq '.operations.operation[] | select(.description == "Ingest") | .completed'`
            echo "Done getting ingest time variable"
            echo $ingest_time $end_time >> times.txt
            echo last >> times.txt
        done
    else
        echo "Media Package not published yet"
    fi
    rm -f $LINE-curl.txt
    rm -f $LINE-curl-final.txt
done < scheduled-mediapackages.txt
A successful run yields the following:
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package not published yet
Media Package found
Getting the end_time variable
Done getting end time variable
Getting ingest time variable
Done getting ingest time variable
Whenever I add a second published media package containing exactly the same json as the first to my list, I get the following:
Media Package not published yet
Media Package not published yet
Media Package found
Getting the end_time variable
Done getting end time variable
Getting ingest time variable
Done getting ingest time variable
Media Package found
Getting the end_time variable
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
Done getting end time variable
Getting ingest time variable
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot index string with string
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
jq: error: Cannot iterate over null
Done getting ingest time variable
Any ideas how to fix this? I have been going around in circles and cannot get it to work.
I know it might seem arrogant to answer my own question, but for all those out there who have struggled like me, I wanted to add a fix. Before I get slammed for the code maybe being clumsy and "too much", just remember: I just started coding in bash, and this works. Others might find this and, with their experience, be able to clean it up. For me though, this is what worked, so I stuck with it.
Thanks to Mark Setchell, I went back to the drawing board to figure out why the while loop was giving me a hard time. I figured out that some of the JSON returned had more objects in it than others, so the jq command had to be adjusted to fit each case. I am only posting the relevant code for extracting the data I was looking for in my OP.
if grep -q -e 'Cleaning up"' $LINE-curl-final.txt; then
Because some media packages fail on first processing and then get "picked up" to be re-processed, they have more than one workflow object. This if-statement searches for the succeeded one among them and extracts the data with the adjusted jq query:
get_count=$( jq -r '.[] | .totalCount' < $LINE-curl-final.txt )
if [[ "$get_count" -gt 1 ]]; then
    state=$( jq -r '.workflows.workflow[] | select(.state == "SUCCEEDED") | .operations.operation[] | select(.description == "Cleaning up") | .state' < $LINE-curl-final.txt )
    if [ "$state" = "SUCCEEDED" ]; then
        end_time=$( jq '.workflows.workflow[] | select(.state == "SUCCEEDED") | .operations.operation[] | select(.description == "Cleaning up") | .completed' < $LINE-curl-final.txt )
        ingest_time=$( jq '.workflows.workflow[] | .operations.operation[] | select(.description == "Ingest") | .completed' < $LINE-curl-final.txt )
    fi
else
    state=$( jq -r '.workflows.workflow.operations.operation[] | select(.description == "Cleaning up") | .state' < $LINE-curl-final.txt )
    if [ "$state" = "SUCCEEDED" ]; then
        end_time=$( jq '.workflows.workflow.operations.operation[] | select(.description == "Cleaning up") | .completed' < $LINE-curl-final.txt )
        ingest_time=$( jq '.workflows.workflow.operations.operation[] | select(.description == "Ingest") | .completed' < $LINE-curl-final.txt )
    fi
fi
Note the difference between
jq '.workflows.workflow.operations.operation[] | select(.description == "Cleaning up") | .completed'
and
jq '.workflows.workflow[] | select(.state == "SUCCEEDED" ) | .operations.operation[] | select(.description == "Cleaning up") | .completed'
I hope that this idea might help anybody else struggling with the same problem.
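For anyone hitting the same single-object-versus-array shape problem: there is also a jq idiom that normalizes both shapes into a stream, so one query can replace both branches of the if above. A sketch with the same field names, assuming the single workflow object also carries a state field:

jq '.workflows.workflow
    | if type == "array" then .[] else . end
    | select(.state == "SUCCEEDED")
    | .operations.operation[]
    | select(.description == "Cleaning up")
    | .completed' < $LINE-curl-final.txt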
