I tried to write a Bash script that handles cURL download errors by parsing the response headers. However, with all GitHub downloads, the header after the redirect is HTTP/1.1 403 Forbidden, even though the download itself works.
function curldown {
    url="$1"
    code=$(curl -LI "$url" | awk '/^HTTP/{a=$2} END{print a}')
    if [[ "${code:0:1}" == "4" ]]; then
        echo "Error $code"
    else
        curl -JOL "$url" > /tmp/curldown
    fi
}
url='https://github.com/keybase/client/releases/download/v5.1.1/keybase-v5.1.1.tar.xz'
curldown $url
# Error 403
but
curl -JOL $url
gives a working output.
Any fix?
A better solution: curl -I sends a HEAD request, and the signed URL that GitHub redirects release downloads to appears to authorize GET only, so the HEAD probe gets a 403 even though the actual GET download works. Perform a GET instead and ask curl to report the final status code:
http_code=$(curl -Ls "$url" -w '%{http_code}\n' -o /dev/null)
if ((http_code >= 400)); then
echo >&2 "Error detected; HTTP code $http_code"
fi
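Since that already performs a full GET, the status check and the download can be combined into a single request. A minimal sketch, assuming one download per call is acceptable; the function name is illustrative, and the command substitution runs in a subshell, so the cd into /tmp does not change the caller's directory:

```shell
#!/usr/bin/env bash
# Sketch: download with GET and capture the final HTTP status from -w.
# With -O the response body goes to a file, so stdout holds only the
# status code printed by -w '%{http_code}'.
curldown() {
    local url="$1" code
    code=$(cd /tmp && curl -sJOL -w '%{http_code}' "$url")
    case "$code" in
        4*|5*) echo "Error $code" >&2; return 1 ;;
    esac
}
```

Because the status comes from the same request that performs the download, the redirect target never sees a HEAD request at all.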
Related
My shell script looks like this:
#!/bin/bash
curl -f -T /home/skript_1.txt -u XXX:XXXXXXX! -k http://192.168.0.100/home/test.txt
res=$?
if test "$res" != 0; then
    echo "the curl command failed with: $res"
else
    echo "Success $res"
fi
I use this to upload a file...
Now my problem is that I can't catch all errors.
For example, if I enter a wrong URL (the right URL would be http://192.168.0.100:5005/home/test.txt), the upload fails, but the exit code is still 0.
Here is the output with a wrong URL:
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
Success 0
How can I get those errors as well?
I also tried the same thing with cURL and an FTP target; there it works with all errors.
-w 'http_code %{http_code}' makes curl append the HTTP status code to the output. Note that -f only makes curl fail (exit code 22) on 4xx/5xx responses; a 302 redirect is not an error from curl's point of view, which is why your exit code stays 0.
Maybe you could go for this new version, which I only partially tested:
#!/bin/bash
serverResponse=$(curl -f -w 'http_code %{http_code}' -T /home/skript_1.txt -u XXX:XXXXXXX! -k http://192.168.0.100/home/test.txt)
res=$?
if test "$res" != 0; then
    printf "the curl command failed with: %s\n" "${res}"
else
    http_code="${serverResponse##*http_code }"
    if [[ -n "${http_code}" && "${http_code}" -ne 200 ]]; then
        printf "Server sent back this http status: %s\n" "${http_code}"
    else
        printf "Success %s\n" "${res}"
    fi
fi
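A hedged alternative sketch that avoids the suffix-stripping: send the response body to a file with -o, so that stdout carries only the -w status code. The file path, URL handling, and the 2xx-only success rule below are my assumptions, not from the original post; note that under this rule a 302 (the wrong-URL symptom above) counts as a failure:

```shell
#!/usr/bin/env bash
# Sketch: -o keeps the response body out of the way, so the command
# substitution captures only the status code printed by -w.
upload_check() {
    local file="$1" url="$2" http_code
    http_code=$(curl -s -o /tmp/upload_response.txt -w '%{http_code}' \
                     -T "$file" "$url")
    case "$http_code" in
        2*) echo "Success $http_code" ;;
        *)  echo "Server sent back http status: $http_code" >&2
            return 1 ;;
    esac
}
```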
Hi, I am testing web services from a shell script with multiple if conditions; the script records a success count, a failure count, and the failure reason.
success=0
failure=0
if curl -s --head --request DELETE "http://localhost/bimws/delete/deleteUser?email=pradeepkumarhe1989@gmail.com" | grep "200 OK" > /dev/null; then
    success=$((success+1))
else
    echo "DeleteUser is not working"$'\r' >> serverLog.txt
    failure=$((failure+1))
fi
if curl -s --head --request GET "http://localhost/bimws/get/getUserDetails?email=anusha4saju@gmail.com" | grep "200 OK" > /dev/null; then
    success=$((success+1))
else
    curl -s --head --request GET "http://localhost/bimws/get/getUserDetails?email=anusha4saju@gmail.com" > f1.txt
    echo "getUserDetails is not working"$'\r' >> serverLog.txt
    failure=$((failure+1))
fi
if curl -s -i -X POST -H "Content-Type:application/json" http://localhost/bimws/post/addProjectLocationAddress -d '{"companyid":"10","projectid":"200","addresstypeid":"5","address":"1234 main st","city":"san jose","state":"CA","zip":"989898","country":"United States"}' | grep "200 OK" > /dev/null; then
    success=$((success+1))
else
    echo "addProjectLocationAddress is not working"$'\r' >> serverLog.txt
    failure=$((failure+1))
fi
echo $success Success
echo $failure failure
But now I want to drive the tests from a file: I have a file called web_services.txt that contains all my web service calls. How do I execute them from that file in a shell script and still get the success count, failure count, and failure reason?
web_services.txt
All are different calls delete,get and post
http://localhost/bimws/delete/deleteUser?email=pradeepkumarhe1989@gmail.com
http://localhost/bimws/get/getUserDetails?email=anusha4saju@gmail.com
http://localhost/bimws/post/addProjectLocationAddress -d '{"companyid":"10","projectid":"200","addresstypeid":"5","address":"1234 main st","city":"san jose","state":"CA","zip":"989898","country":"United States"}'
First of all, your current code does not correctly deal with empty lines; you need to skip those.
Second, each line of your file is not just a URL but a full set of curl arguments (a URL plus options such as -d), so you cannot pass the whole line to curl as a single argument; you have to let the shell re-split it, for example with eval.
Then, add -f so that curl's exit code reports whether the request was successful:
FILE=web_services.txt
success=0
failure=0
while IFS= read -r LINE; do
    if test -z "$LINE"; then
        continue
    fi
    if eval "curl -f -s -o /dev/null $LINE"; then
        success=$((success+1))
    else
        echo "$LINE" >> aNewFile.txt
        failure=$((failure+1))
    fi
done < "$FILE"
echo $success Success
echo $failure failure
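To also capture a failure reason per line, a hedged sketch (the function and log names are mine, not from the question) records the HTTP status code of each failing call instead of just the line:

```shell
#!/usr/bin/env bash
# Sketch: run each line of the file as curl arguments and log the HTTP
# status code of failing calls as the failure reason. Assumes each
# non-empty line holds curl arguments (URL plus options) as in
# web_services.txt; eval re-splits the line so quoted JSON stays intact.
run_checks() {
    local file="$1" log="$2" line code success=0 failure=0
    while IFS= read -r line; do
        [ -z "$line" ] && continue
        code=$(eval "curl -s -o /dev/null -w '%{http_code}' $line")
        case "$code" in
            2*) success=$((success+1)) ;;
            *)  echo "failed (HTTP $code): $line" >> "$log"
                failure=$((failure+1)) ;;
        esac
    done < "$file"
    echo "$success Success"
    echo "$failure failure"
}
```

Invoke it as: run_checks web_services.txt serverLog.txt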
I am trying to run curl inside an if/else condition: on success of the call, print a success message; otherwise, print that the call failed.
My Sample Curl would look like:
curl 'https://xxxx:1234xxxx@abc.dfghj.com/xl_template.get_web_query?id=1035066' > HTML_Output.html
I want to do the same thing using Shell.
Using JavaScript:
if(res.status === 200){console.log("Yes!! The request was successful")}
else {console.log("CURL Failed")}
Also, I can see curl's progress percentage while it runs, but I don't know how to check that percentage from the script. Please help.
You can use the -w (--write-out) option of curl to print the HTTP code:
curl -s -w '%{http_code}\n' 'https://xxxx:1234xxxx@abc.dfghj.com/xl_template.get_web_query?id=1035066'
It will show the HTTP code the site returns.
Also curl provides a whole bunch of exit codes for various scenarios, check man curl.
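For example, a small (illustrative, not exhaustive) mapping of exit codes from the EXIT CODES table in curl's man page:

```shell
#!/usr/bin/env bash
# Sketch: translate a few common curl exit codes into messages.
# The code/message pairs follow the CURLE_* list in curl's man page.
explain_curl_exit() {
    case "$1" in
        0)  echo "OK" ;;
        6)  echo "Could not resolve host" ;;
        7)  echo "Failed to connect to host" ;;
        22) echo "HTTP error >= 400 (reported because of -f/--fail)" ;;
        28) echo "Operation timed out" ;;
        *)  echo "Other curl error ($1), see the EXIT CODES section of man curl" ;;
    esac
}
```

Typical use: run curl, then pass $? to the function, e.g. explain_curl_exit $?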
One way of achieving this:
HTTPS_URL="https://xxxx:1234xxxx@abc.dfghj.com/xl_template.get_web_query?id=1035066"
CURL_CMD="curl -w httpcode=%{http_code}"
# -m, --max-time <seconds> for the whole curl operation
CURL_MAX_CONNECTION_TIMEOUT="-m 100"
# perform curl operation
CURL_RETURN_CODE=0
CURL_OUTPUT=$(${CURL_CMD} ${CURL_MAX_CONNECTION_TIMEOUT} "${HTTPS_URL}" 2> /dev/null) || CURL_RETURN_CODE=$?
if [ ${CURL_RETURN_CODE} -ne 0 ]; then
    echo "Curl connection failed with return code - ${CURL_RETURN_CODE}"
else
    echo "Curl connection success"
    # Check the http code appended to CURL_OUTPUT
    httpCode=$(echo "${CURL_OUTPUT}" | sed -e 's/.*httpcode=//')
    if [ "${httpCode}" -ne 200 ]; then
        echo "Curl operation/command failed due to server return code - ${httpCode}"
    fi
fi
Like most programs, curl returns a non-zero exit status if it gets an error, so you can test it with if.
if curl 'https://xxxx:1234xxxx@abc.dfghj.com/xl_template.get_web_query?id=1035066' > HTML_Output.html
then echo "Request was successful"
else echo "CURL Failed"
fi
I don't know of a way to find out the percentage if the download fails in the middle.
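One partial answer, offered as a hedged sketch: while the percentage itself is not exposed to the script, curl does exit with code 18 ("partial file") when a transfer stops before the expected length, so an interrupted download can at least be detected (the URL and output name below are placeholders):

```shell
#!/usr/bin/env bash
# Sketch: detect a download that stopped partway through. curl exits
# with 18 (CURLE_PARTIAL_FILE) when the connection closed before the
# expected amount of data arrived.
fetch() {
    curl -s -o HTML_Output.html "$1"
    local rc=$?
    if [ "$rc" -eq 18 ]; then
        echo "Download was interrupted partway through" >&2
    elif [ "$rc" -ne 0 ]; then
        echo "curl failed with exit code $rc" >&2
    fi
    return "$rc"
}
```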
I'm writing a script to check whether a file can be downloaded from a remote server using curl. The script will report if there is a problem downloading the file or the file is not present. I'm using the below code snippet:
curl --fail -u "$USERNAME:$PASSWORD" --remote-name "$HOST/$FILEPATH"
if [ $? -ne 0 ]
then
    echo "There is some error in Downloading file from $HOST";
else
    echo "Download success";
fi
But I always get exit status 0. That's because curl always downloads the file (even when the remote file is not present), and the actual error message ends up inside that downloaded file.
So how can I get the real curl exit status code ( 22 file not found) ?
--fail will return exit code 22 for the HTTP protocol; the code may be different for other protocols, say FTP. See below:
me@udistro:~$ curl --fail ftp://ftp.redhat.com/redhat/brms/6.2.0/en/source/MD5
curl: (78) RETR response: 550
me@udistro:~$ echo $?
78
But as the man page states :
This method is not fail-safe and there are occasions where
non-successful response codes will slip through, especially when
authentication is involved (response codes 401 and 407).
As a starting point you could simply test for a zero exit code and report the specific code otherwise:
curl --fail -u "$USERNAME:$PASSWORD" --remote-name "$HOST/$FILEPATH"
var=$?
if [ "$var" -eq 0 ]
then
    echo "Success"
else
    echo "Failed with exit code $var"
fi
I have a simple script that accepts 2 arguments, a URL and a log file location. Theoretically, it should capture the header status code from the curl command and if it is a 404, then append the URL to the log file. Any idea where it is failing?
#!/bin/bash
CMP='HTTP/1.1 404 Not Found' # This is the 404 Pattern
OPT=`curl --config /var/www/html/curl.cnf -s -D - "$1" -o /dev/null | grep 404` # Status Response
if [ $OPT = $CMP ]
then
echo "$1" >> "$2" # Append URL to File
fi
Your test does not do what you think: $OPT and $CMP are unquoted, so their multi-word values are word-split and [ receives too many arguments, making the comparison fail with an error (the grep output may also carry a trailing carriage return). Try the following simpler method, which checks the return code of the grep command rather than comparing its output against a string:
#!/bin/bash
CMP='HTTP/1.1 404 Not Found'
if curl -s -I "$1" | grep -q "$CMP"; then
    echo "$1" >> "$2"
fi
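A hedged alternative that sidesteps status-line formats entirely: grepping for a literal "HTTP/1.1 404" misses servers answering over HTTP/2 (whose status line is "HTTP/2 404"), whereas asking curl for the numeric code with -w works for both. The function name is illustrative; $1 is the URL and $2 the log file, as above:

```shell
#!/usr/bin/env bash
# Sketch: compare the numeric status code instead of grepping the raw
# status line, so HTTP/1.1 and HTTP/2 responses are treated alike.
log_404() {
    local code
    code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
    if [ "$code" = "404" ]; then
        echo "$1" >> "$2"   # append URL to the log file
    fi
}
```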