From bash/curl I am consuming an API that receives a POST, performs heavy, long-running tasks and returns a 200 on success. As we were facing some timeouts in the WAF, the API has been improved to accept the header:
--header "Expect:102-Processing"
If the API receives that header, it sends an HTTP 102 every 20 seconds until the process finishes, then sends an HTTP 200. This should be enough to prevent timeouts.
What do I have to do to deal with those HTTP 102 responses?
I added that header to my curl command, but as soon as it receives the first 102, the curl command finishes.
I was thinking that maybe there is a parameter in curl to wait until a 200 or an error arrives.
Another option I have in mind is waiting in a loop, querying for status, but I don't know how to instruct curl to monitor that connection.
This is a test version of my bash script.
#!/bin/bash
clear
function_triggerFooAdapter()
{
    pFooUrl=$1
    pPayload=$2
    pCURLConnectTimeout=$3
    pWaitForFooResponse=$4
    pAddExpect102Header=$5
    rm ./tmpResult.html 2>/dev/null
    rm ./tmpResult.txt 2>/dev/null
    echo "Triggering internal Foo adapter"
    echo "Full URL=$pFooUrl"
    echo "Payload to send=$pPayload"
    echo "Curl connect-timeout=$pCURLConnectTimeout"
    echo "WaitForFooResponse=$pWaitForFooResponse"
    echo "AddExpect102Header=$pAddExpect102Header"
    if [[ "$pAddExpect102Header" = true ]]; then
        text102Header="Expect:102-Processing"
    else
        text102Header="NoExpect;" # send an inoffensive custom header instead
    fi
    if [[ "$pWaitForFooResponse" = true ]]; then
        echo "So DO wait..."
        Response=$(curl -k --write-out "%{http_code}" --header "$text102Header" --header "Content-Type:application/json" --silent --connect-timeout "$pCURLConnectTimeout" --output ./tmpResult.html -X POST --data "$pPayload" "$pFooUrl" 2>&1 | tee ./tmpResult.txt)
        echo "HTTP Response=$Response"
        cat ./tmpResult.txt
        if [ "${Response:0:1}" -eq "1" ] || [ "${Response:0:1}" -eq "2" ]; then # HTTP response starts with 1 or 2 (10x - 20x)
            echo "Adapter successfully triggered."
            return 0
        else
            # cat ./tmpResult.html 2>/dev/null
            # cat ./tmpResult.txt 2>/dev/null
            echo
            echo "HTTP error trying to trigger adapter."
            return 1
        fi
    else
        echo "So DO NOT wait..."
        curl -k --write-out "%{http_code}" --header "$text102Header" --header "Content-Type:application/json" --silent --connect-timeout "$pCURLConnectTimeout" --output ./tmpResult.html -X POST --data "$pPayload" "$pFooUrl" > /dev/null 2>&1 &
        echo "Adapter successfully (hopefully) triggered. NOT reading the HTTP response until the Foo code is upgraded to respond directly with an HTTP 200 'Successfully queued' or similar."
        return 0
    fi
}
clear
export http_proxy="http://1.1.1.256:3128/"
export https_proxy="http://1.1.1.256:3128/"
export no_proxy="foo.com"
# Main
clear
echo "STEP 09- Triggering Foo Internal Adapters."
echo "Proxy settings:"
env | grep proxy
function_triggerFooAdapter "http://foo.com/lookups/trigger_foo" "" 600 true true
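A side note on the success test in the function: `[ "${Response:0:1}" -eq "1" ]` aborts with a syntax error whenever `$Response` starts with a non-digit, which can happen here because stderr is merged into the captured output via `2>&1 | tee`. A minimal sketch of a more tolerant check (the `check_response` helper is invented for the demo):

```shell
# A case pattern tolerates non-numeric input (e.g. stderr text leaking into
# $Response), where [ ... -eq ... ] would error out with "integer expression expected".
check_response() {
  case "$1" in
    1[0-9][0-9]|2[0-9][0-9]) echo "Adapter successfully triggered." ;;
    *)                       echo "HTTP error trying to trigger adapter." ;;
  esac
}
check_response 202                             # success branch
check_response "curl: (7) Failed to connect"   # error branch, no crash
```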
Run it manually and CHECK what curl -v is sending as the headers; I would expect to see something like
> POST /the/url HTTP/1.1
> Host: thehost.com:80
> User-Agent: curl/7.51.0
> Accept: */*
> Expect: 102-Processing
... some stuff skipped.
If you're not sending the Expect header, then curl is in fact doing the right thing.
I wrote a script to create a GitHub pull request via the API. This is part of a gitflow action to merge a hotfix automatically into develop. Anyway,
I always get the error 422 Validation failed, or the endpoint has been spammed.
GitHub API docs
I can't figure out why or what could be wrong.
You can see my code below. Maybe some of you can spot my problem.
function create_pr()
{
TITLE="hotfix auto merged by $(jq -r ".pull_request.head.user.login" "$GITHUB_EVENT_PATH" | head -1)."
REPO_FULLNAME=$(jq -r ".repository.full_name" "$GITHUB_EVENT_PATH")
RESPONSE_CODE=$(curl -o "$OUTPUT_PATH" -s -w "%{http_code}\n" \
--data "{\"title\":\"$TITLE\", \"head\": \"$BASE_BRANCH\", \"base\": \"$TARGET_BRANCH\"}" \
-X POST \
-H "Authorization: Bearer $GITHUB_TOKEN" \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"https://api.github.com/repos/$REPO_FULLNAME/pulls")
echo "head: $SOURCE_BRANCH, base: $TARGET_BRANCH"
echo "Create PR Response:"
echo "Code : $RESPONSE_CODE"
if [[ "$RESPONSE_CODE" -ne "201" ]];
then
echo "Could not create PR";
exit 1;
else echo "Created PR";
fi
}
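One thing worth double-checking from the snippet itself: the request sends `"head": "$BASE_BRANCH"` while the echo prints `$SOURCE_BRANCH`, and GitHub returns 422 Validation Failed when head and base resolve to the same branch (or when there are no commits between them). The response body that curl already wrote to `$OUTPUT_PATH` names the exact validation error, so printing it on failure is the quickest diagnostic. A sketch, with a canned body standing in for GitHub's real response:

```shell
# Illustrative only: a canned 422 body instead of a live GitHub response
OUTPUT_PATH=$(mktemp)
printf '%s' '{"message":"Validation Failed","errors":[{"message":"No commits between develop and develop"}]}' > "$OUTPUT_PATH"
RESPONSE_CODE=422
if [ "$RESPONSE_CODE" != "201" ]; then
  echo "Could not create PR, server said:"
  cat "$OUTPUT_PATH"   # the "errors" array explains the 422
fi
```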
I am trying to capture the response code of an HTTP request in a variable. The following question gave me the perfect solution for this (How to evaluate http response codes from bash/shell script?). When I execute this command locally
response=$(curl --write-out '%{http_code}' --silent --output /dev/null http://localhost:8082/url)
echo $response
it gives me the wanted HTTP code (e.g. 400). However, in Jenkins I execute the same command, but it gives me an empty response:
sh '''#!/bin/bash
DOCKER_RESPONSE=$(curl --write-out '%{http_code}' --silent --output /dev/null http://localhost:8002/route?json={})
while [[ $DOCKER_RESPONSE != 200 ]]
do
sleep 1
echo "$DOCKER_RESPONSE"
DOCKER_RESPONSE=$(curl --write-out '%{http_code}' --silent --output /dev/null http://localhost:8002/route?json={})
done
'''
You are mixing Groovy syntax with Bash; it has to be like below:
node {
stage('one') {
sh """
res=\$(curl --write-out '%{http_code}' --silent --output /dev/null https://google.com)
echo \$res
while [[ \$res != '301' ]]
do
sleep 1
echo \$res
res=\$(curl --write-out '%{http_code}' --silent --output /dev/null https://google.com)
done
"""
}
}
The output will show the HTTP code echoed on each loop iteration.
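The retry loop itself can be exercised without Jenkins or a live endpoint by substituting a stub for the curl call (a sketch; `fake_curl` is invented for the demo):

```shell
# fake_curl stands in for: curl --write-out '%{http_code}' --silent --output /dev/null <url>
fake_curl() {
  # pretend the service returns 500 on the first two tries, then comes up
  if [ "$1" -lt 3 ]; then echo 500; else echo 200; fi
}
attempts=0
res=500
while [ "$res" != "200" ]; do
  attempts=$((attempts+1))
  res=$(fake_curl "$attempts")
done
echo "final=$res after $attempts attempts"
```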
Hi, I am testing web services using a shell script with multiple if conditions. With this shell script I get a success count, a failure count and the failure reasons.
success=0
failure=0
if curl -s --head --request DELETE http://localhost/bimws/delete/deleteUser?email=pradeepkumarhe1989#gmail.com | grep "200 OK" > /dev/null; then
success=$((success+1))
else
echo "DeleteUser is not working"$'\r' >> serverLog.txt
failure=$((failure+1))
fi
if curl -s --head --request GET http://localhost/bimws/get/getUserDetails?email=anusha4saju#gmail.com | grep "200 OK" > /dev/null; then
success=$((success+1))
else
curl -s --head --request GET http://localhost/bimws/get/getUserDetails?email=anusha4saju#gmail.com > f1.txt
echo "getUserDetails is not working"$'\r' >> serverLog.txt
failure=$((failure+1))
fi
if curl -s -i -X POST -H "Content-Type:application/json" http://localhost/bimws/post/addProjectLocationAddress -d '{"companyid":"10","projectid":"200","addresstypeid":"5","address":"1234 main st","city":"san jose","state":"CA","zip":"989898","country":"United States"}' | grep "200 OK" > /dev/null; then
success=$((success+1))
else
echo "addProjectLocationAddress is not working"$'\r' >> serverLog.txt
failure=$((failure+1))
fi
echo $success Success
echo $failure failure
But now I want to test the web services from a file: I have a file called web_services.txt which contains all my web service calls. Using a shell script, how do I execute them and get the success count, failure count and failure reasons?
web_services.txt
All are different calls: delete, get and post.
http://localhost/bimws/delete/deleteUser?email=pradeepkumarhe1989#gmail.com
http://localhost/bimws/get/getUserDetails?email=anusha4saju#gmail.com
http://localhost/bimws/post/addProjectLocationAddress -d '{"companyid":"10","projectid":"200","addresstypeid":"5","address":"1234 main st"
,"city":"san jose","state":"CA","zip":"989898","country":"United States"}'
First of all, your current code does not correctly deal with empty lines. You need to skip those.
Your lines already contain shell commands. Running curl on them makes no sense. Instead, you should evaluate these commands.
Then, you need to modify curl so that it reports whether the request was successful by adding -f:
FILE=D:/WS.txt
success=0
failure=0
while IFS= read -r LINE; do
    if test -z "$LINE"; then
        continue
    fi
    if eval "$(echo "$LINE" | sed 's/^curl/curl -f -s/')" > /dev/null; then
        success=$((success+1))
    else
        echo "$LINE" >> aNewFile.txt
        failure=$((failure+1))
    fi
done < "$FILE"
echo $success Success
echo $failure failure
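Here is a self-contained dry run of that loop, with `true`/`false` standing in for the curl lines so it can be tried without a server:

```shell
# two passing calls, one blank line (skipped), one failing call
FILE=$(mktemp)
printf '%s\n' 'true' '' 'false' 'true' > "$FILE"
success=0
failure=0
while IFS= read -r LINE; do
  if test -z "$LINE"; then
    continue                       # skip empty lines
  fi
  if eval "$LINE" > /dev/null 2>&1; then
    success=$((success+1))
  else
    failure=$((failure+1))
  fi
done < "$FILE"
echo "$success Success"            # 2 Success
echo "$failure failure"            # 1 failure
```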
I want to extend this example webserver shell script to handle multiple requests. Here is the example source:
#!/bin/sh
# based on https://debian-administration.org/article/371/A_web_server_in_a_shell_script
base=/srv/content
while /bin/true
do
read request
while /bin/true; do
read header
[ "$header" == $'\r' ] && break;
done
url="${request#GET }"
url="${url% HTTP/*}"
filename="$base$url"
if [ -f "$filename" ]; then
echo -e "HTTP/1.1 200 OK\r"
echo -e "Content-Type: `/usr/bin/file -bi \"$filename\"`\r"
echo -e "\r"
cat "$filename"
echo -e "\r"
else
echo -e "HTTP/1.1 404 Not Found\r"
echo -e "Content-Type: text/html\r"
echo -e "\r"
echo -e "404 Not Found\r"
echo -e "<h1>Not Found</h1>
<p>The requested resource was not found</p>\r"
echo -e "\r"
fi
done
Wrapping the code in a loop is insufficient because the browser doesn't render anything. How can I make this work?
Application-specific reasons make launching the script per-request an unsuitable approach.
A TCP listener is required to accept browser connections and connect them to the script. I used socat to do this:
$ socat EXEC:./webserver TCP4-LISTEN:8080,reuseaddr,fork
This gives access to the server by pointing a browser at http://localhost:8080.
The browser needs to know how much data to expect, and it won't render anything until it gets that data or the connection is closed by the server.
The HTTP response should therefore include a Content-Length header or use a chunked Transfer-Encoding.
The example script does neither. It nevertheless works because it processes a single request and exits, which causes the connection to close.
So, one way to solve the problem is to set a Content-Length header. Here is an example that works:
#!/bin/sh
# stdio webserver based on https://debian-administration.org/article/371/A_web_server_in_a_shell_script
respond_with() {
echo -e "HTTP/1.1 200 OK\r"
echo -e "Content-Type: text/html\r"
echo -e "Content-Length: ${#1}\r"
echo -e "\r"
echo "<pre>${1}</pre>"
echo -e "\r"
}
respond_not_found() {
content='<h1>Not Found</h1>
<p>The requested resource was not found</p>'
echo -e "HTTP/1.1 404 Not Found\r"
echo -e "Content-Type: text/html\r"
echo -e "Content-Length: ${#content}\r"
echo -e "\r"
echo "${content}"
echo -e "\r"
}
base='/var/www'
while /bin/true; do
read request
while /bin/true; do
read header
[ "$header" == $'\r' ] && break;
done
url="${request#GET }"
url="${url% HTTP/*}"
filename="$base/$url"
if [ -f "$filename" ]; then
respond_with "$(cat "$filename")"
elif [ -d "$filename" ]; then
respond_with "$(ls -l "$filename")"
else
respond_not_found
fi
done
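One caveat about `Content-Length: ${#1}`: `${#var}` counts characters, which matches the required byte count only for single-byte text. With multibyte UTF-8 content the header comes out too small and the browser truncates the body. Counting bytes explicitly is safer:

```shell
body='héllo'                                   # 5 characters, 6 bytes in UTF-8
bytes=$(printf '%s' "$body" | wc -c | tr -d ' ')
echo "Content-Length: $bytes"                  # byte count, which is what HTTP needs
echo "length via \${#body}: ${#body}"          # character count; locale/shell dependent
```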
Another solution is to make the script trigger the connection close. One way to do this is to send an escape code that socat can interpret as EOF.
For example, add a BELL character code (ASCII 7, \a) to the end of the response:
echo -e '\a'
and tell socat to interpret it as EOF:
$ socat EXEC:./webserver,escape=7 TCP4-LISTEN:8080,reuseaddr,fork
Any usually unused character will do; BELL is just an example.
Although the above works, an HTTP response should really contain a content type or transfer encoding header. This alternative method may be useful if using a similar technique to serve arbitrary (non-HTTP) requests from a script.
I am trying to do a curl with an if/else condition: on success of the call, print a success message, otherwise print that the call failed.
My sample curl looks like:
curl 'https://xxxx:1234xxxx#abc.dfghj.com/xl_template.get_web_query?id=1035066' > HTML_Output.html
I want to do the same thing using shell. In JavaScript it would be:
if(res.status === 200){console.log("Yes!! The request was successful")}
else {console.log("CURL Failed")}
Also, I can see curl's progress percentage, but I don't know how to check it from the script. Please help.
[screenshot: curl progress output]
You can use the -w (--write-out) option of curl to print the HTTP code:
curl -s -w '%{http_code}\n' 'https://xxxx:1234xxxx#abc.dfghj.com/xl_template.get_web_query?id=1035066'
It will show the HTTP code the site returns.
curl also provides a whole bunch of exit codes for various scenarios; check man curl.
One way of achieving this:
HTTPS_URL="https://xxxx:1234xxxx#abc.dfghj.com/xl_template.get_web_query?id=1035066"
CURL_CMD="curl -w httpcode=%{http_code}"
# -m, --max-time <seconds> FOR curl operation
CURL_MAX_CONNECTION_TIMEOUT="-m 100"
# perform curl operation
CURL_RETURN_CODE=0
CURL_OUTPUT=`${CURL_CMD} ${CURL_MAX_CONNECTION_TIMEOUT} ${HTTPS_URL} 2> /dev/null` || CURL_RETURN_CODE=$?
if [ ${CURL_RETURN_CODE} -ne 0 ]; then
echo "Curl connection failed with return code - ${CURL_RETURN_CODE}"
else
echo "Curl connection success"
# Check http code for curl operation/response in CURL_OUTPUT
httpCode=$(echo "${CURL_OUTPUT}" | sed -e 's/.*httpcode=//')
if [ "${httpCode}" -ne 200 ]; then
echo "Curl operation/command failed due to server return code - ${httpCode}"
fi
fi
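The same extraction can be done with plain parameter expansion instead of sed, using the same `httpcode=` marker:

```shell
# strip everything up to and including the last "httpcode=" occurrence
CURL_OUTPUT='<html>page body</html>httpcode=200'
httpCode=${CURL_OUTPUT##*httpcode=}
echo "$httpCode"   # 200
```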
Like most programs, curl returns a non-zero exit status if it gets an error, so you can test it with if.
if curl 'https://xxxx:1234xxxx#abc.dfghj.com/xl_template.get_web_query?id=1035066' > HTML_Output
then echo "Request was successful"
else echo "CURL Failed"
fi
I don't know of a way to find out the percentage if the download fails in the middle.
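The exit-status branch above can be seen without a network by stubbing curl (the `fake_curl` name and the canned status are invented for the demo; real curl uses exit code 7 for "failed to connect to host"):

```shell
fake_curl() { return 7; }   # stand-in for a curl call that cannot connect
if fake_curl; then
  echo "Request was successful"
else
  rc=$?                     # $? still holds the if-condition's exit status here
  echo "curl failed with exit code $rc"
fi
```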