How to use the curl -w switch with multiple data tokens as the format parameter? - bash

I want to get two things from curl: http_code and time_total from a single curl request. How should I formulate the -w %{insert_formatting_here} ?
These work:
result=$(curl -s -w %{http_code} -o temp.txt "http://127.0.0.1")
echo "$result"
result=$(curl -s -w %{time_total} -o temp.txt "http://127.0.0.1")
echo "$result"
Result:
200
0.004
But this didn't work as I expected:
result=$(curl -s -w %{http_code time_total} -o temp.txt "http://127.0.0.1")
echo "$result"
Result:
<p>where "$CATALINA_HOME" is the root of the Tomcat installation directory. If you're seeing this page, and you don't think you should be, then you're either a user who has arrived at new installation of Tomcat, or you're an administrator who hasn't got his/her setup quite right. Providing the latter is the case, please refer to the Tomcat Documentation for more detailed setup and administration in %{http_codeReserved99-2014 Apache Software Foundation<br/>ht="80" alt="Powered by Tomcat"/><br/>s working on Tomcat</li>configuring and using Tomcat</li> developing web applications.</p>
I cannot find any tutorial that shows how to put multiple tokens in the format parameter. They only list the available format tokens; there are no examples.

Each variable needs its own %{} placeholder, e.g.:
curl -s -w "%{http_code}:%{time_total}" http://127.0.0.1

Related

Login via curl fails inside bash script, same curl succeeds on command line

I'm running this login via curl in my bash script. I want to make sure I can log in before executing the rest of the script, where I actually log in, store the cookie in a cookie jar, and then call the API with another curl thousands of times. I don't want to run all that if the login failed.
Problem is, the basic login returns 401 when it runs inside the script. But when I run the exact same curl command on the command line, it returns 200!
basic_login_curl="curl -w %{http_code} -s -o /dev/null -X POST -d \"username=$username&password=$password\" $endpoint/login"
echo $basic_login_curl
outcome=`$basic_login_curl`
echo $outcome
if [ "$outcome" == "401" ]; then
echo "Failed login. Please try again."; exit 1;
fi
This outputs:
curl -w %{http_code} -s -o /dev/null -X POST -d "username=bdunn&password=xxxxxx" http://stage.mysite.it:9301/login
401
Failed login. Please try again.
Copied the output and ran it on the cmd line:
$ curl -w %{http_code} -s -o /dev/null -X POST -d "username=bdunn&password=xxxxxx" http://stage.mysite.it:9301/login
200$
Any ideas? LMK if there's more from the code you need to see.
ETA: Please note: The issue's not that it doesn't match 401, it's that running the same curl login command inside the script fails to authenticate, whereas it succeeds when I run it on the actual CL.
Most of the issues come from how you are quoting (or not quoting) variables and from the subshell execution. I would recommend setting up your command like the following:
basic_login_curl=$(curl -w "%{http_code}" -s -o /dev/null -X POST -d "username=$username&password=$password" "$endpoint/login")
The rest basically involves quoting everything properly:
basic_login_curl=$(curl -w "%{http_code}" -s -o /dev/null -X POST -d "username=$username&password=$password" "$endpoint/login")
# echo "$basic_login_curl" # not needed since what follows repeats it.
outcome="$basic_login_curl"
echo "$outcome"
if [ "$outcome" = "401" ]; then
echo "Failed login. Please try again."; exit 1;
fi
Running the script through shellcheck.net can be helpful in resolving issues like this as well.
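If you want to keep the call reusable without storing it in a string (which is where the quoting tends to break), a small function works too; a rough sketch built on the question's variables ($username, $password, $endpoint):
basic_login() {
    # prints only the HTTP status code; the response body is discarded
    curl -w "%{http_code}" -s -o /dev/null -X POST \
        -d "username=$username&password=$password" "$endpoint/login"
}

outcome=$(basic_login)
echo "$outcome"
if [ "$outcome" = "401" ]; then
    echo "Failed login. Please try again."; exit 1;
fi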

If proxy is down, get a new one

I'm writing my first bash script
LANG="en_US.UTF8" ; export LANG
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt"
This script also uses curl, but if the proxy is down it gives me this error:
failed to connect PROXY:PORT
How can I make the bash script run again so it can get another proxy address from proxy.txt?
Thanks in advance
Run it in a loop until the curl succeeds, for example:
export LANG="en_US.UTF8"
while true; do
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
Notice the && break at the end of the curl command.
That is, if the curl succeeds, break out of the infinite loop.
If you have multiple curl commands and you need all of them to succeed,
then chain them all together with &&, and add the break after the last one:
curl url1 && \
curl url2 && \
break
Lastly, as @Inian pointed out,
you could use the --proxy flag to pass a proxy URL to curl without the extra step of setting https_proxy, for example:
curl --proxy "$(shuf -n 1 proxy.txt)" --data "mydata${RUID}" --user-agent "myuseragent"
Also note that, due to the randomness, the same proxy may come up more than once before you find one that works.
To avoid that, you could iterate over the shuffled proxies instead of using an infinite loop:
export LANG="en_US.UTF8"
shuf proxy.txt | while read -r proxy; do
ruid=$(php -f randuid.php)
curl --proxy "$proxy" --data "mydata${ruid}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
I also lowercased your user-defined variables,
as capitalization is not recommended for those.
I know I accepted @janos's answer, but since I can't edit it, I'm going to add this:
response=$(curl --proxy "$proxy" --silent --write-out "\n%{http_code}\n" https://myurl.com/url)
status_code=$(echo "$response" | sed -n '$p')
html=$(echo "$response" | sed '$d')
case "$status_code" in
200) echo 'Working!'
;;
*)
echo 'Not working, trying again!';
exec "$0" "$#"
esac
This will run my script again if it gives a 503 status code (or any other non-200), which is what I wanted :)
And with @janos's code it will run again if the proxy is not working.
Thank you everyone, I achieved what I wanted.
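For reference, the two pieces can be combined into a single loop; a rough sketch under the same assumptions (proxy.txt, randuid.php, and the same URL):
export LANG="en_US.UTF8"
shuf proxy.txt | while read -r proxy; do
    ruid=$(php -f randuid.php)
    # --write-out prints only the status code; the response body goes to ticket.txt
    status_code=$(curl --proxy "$proxy" --silent --output ticket.txt \
        --write-out "%{http_code}" \
        --data "mydata${ruid}" --user-agent "myuseragent" \
        https://myurl.com/url)
    [ "$status_code" = "200" ] && break
done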

How to search for a string in a text file and perform a specific action based on the result

I have very little experience with Bash but here is what I am trying to accomplish.
I have two different text files with a bunch of server names in them. Before installing any Windows updates and rebooting them, I need to disable all the Nagios host/service alerts.
host=/Users/bob/WSUS/wsus_test.txt
password="my_password"
while read -r host
do
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
This is a reduced form of my current code which works as intended, however, we have servers in a bunch of regions. Each server name is prepended with a 3 letter code based on region (ie, LAX, NYC, etc). Secondly, we have a nagios server in each region so I need the code above to be connecting to the correct regional nagios server based on the server name being passed in.
I tried adding 4 test servers into a text file and just adding a line like this:
if grep lax1 /Users/bob/WSUS/wsus_text.txt; then
<same command as above but with the regional nagios server name>
fi
This doesn't work as intended and nothing is actually disabled/enabled via API calls. Again, I've done very little with Bash so any pointers would be appreciated.
Extract the region from the host name and use it in the Nagios URL, like this:
while read -r host; do
region=$(cut -f1 -d- <<< "$host")
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
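If the regional Nagios servers don't follow a uniform nagios-$region naming pattern, a lookup table is another option; a rough sketch with hypothetical region keys and hostnames (requires bash 4+ for associative arrays):
declare -A nagios_for_region=(
    [lax]="nagios-lax.fqdn.here"    # hypothetical hostnames; adjust to your environment
    [nyc]="nagios-nyc.fqdn.here"
)

while read -r host; do
    region=${host:0:3}    # first three letters of the server name, e.g. "lax" from "lax1-web01"
    nagios=${nagios_for_region[$region]}
    if [ -z "$nagios" ]; then
        echo "No Nagios server mapped for region '$region' (host $host)" >&2
        continue
    fi
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" \
        "https://$nagios/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt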

Why is this bash/CURL call to REST services giving inconsistent results with parameters?

I have written a smoke-testing script that uses BASH script & Curl to test RESTful web services we're working on. The script reads a file, and interprets each line as a URL suffix and parameters for a Curl REST call.
Unfortunately, the script gives unexpected results when I adapted it to run HTTP POST calls as well as GET calls. It does not give the same results running the command on its own, vs. in script:
The BASH Script:
IFS=$'\n' #Don't split an input URL line at spaces
RESTHOST='hostNameAndPath' #Can't give this out
URL="/activation/v2/activationInfo --header 'Content-Type:Application/xml'"
URL2="/activation/v2/activationInfo"
OUTPUT=`curl -sL -m 30 -w "%{http_code}" -o /dev/null $RESTHOST$URL -d @"./activation_post.txt" -X POST`
echo 'out:' $OUTPUT
OUTPUT2=`curl -sL -m 30 -w "%{http_code}" -o /dev/null $RESTHOST$URL2 --header 'Content-Type:Application/xml' -d @'./activation_post.txt' -X POST`
echo 'out2:' $OUTPUT2
Results Out:
out: 505
out2: 200
So, the first call fails (HTTP return code 505, HTTP Version Not Supported), and the second call succeeds (return code "OK").
Why does the first call fail, and how do I fix it? I've verified they should execute the same command (evaluating in echo). I am sure there is something basic I'm missing, as I am just NOW learning Bash scripting.
I think I have found the problem! It is caused by IFS=$'\n'! Because of this, variable expansion does not work as expected: it prevents the arguments specified in the URL string from being split.
As a result the SERVER_PROTOCOL variable on the server side will be set to '--header Content-Type:Application/xml HTTP/1.1' instead of "HTTP/1.1", and the CONTENT_TYPE will be 'application/x-www-form-urlencoded' instead of 'Application/xml'.
To show the root of the problem in detail:
VAR="Solaris East"
printf "+%s+ " $VAR
echo "==="
IFS=$'\n'
printf "+%s+ " $VAR
Output:
+Solaris+ +East+ ===
+Solaris East+
So the $VAR expansion does not work as expected because of IFS=$'\n'!
Solution: do not use IFS=$'\n', and replace spaces with %20 in the URL!
URL=${URL2// /%20}" --header Content-Type:Application/xml"
In this case your first curl call will work properly!
If you still use IFS=$'\n' and pass the --header option on the command line, it will not work properly if the URL contains a space, because the server will fail to process it (I tested this on Apache)!
You also cannot use HEADER="--header Content-Type:Application/xml", as expanding $HEADER with IFS=$'\n' results in one(!) argument for curl, namely --header Content-Type:Application/xml, instead of splitting it into two.
So I suggest replacing spaces in the URL with %20 anyway!
The single quotes surrounding Content-Type:Application/xml, because they appear inside the value of URL, are treated as literal characters rather than as quoting, and are not removed when $URL is expanded in that call to curl. As a result, you are passing an invalid HTTP header. Just use:
URL="/activation/v2/activationInfo --header Content-Type:Application/xml"
OUTPUT=`curl -sL -m 30 -w "%{http_code}" -o /dev/null $RESTHOST$URL -d @"./activation_post.txt" -X POST`
However, it's not a great idea to rely on word-splitting like this to combine two separate pieces of the call to curl in a single variable. Try something like this instead:
URLPATH="activation/v2/activationInfo"
HEADERS=("--header" "Content-Type:Application/xml")
OUTPUT=$( curl -sL -m 30 -w "%{http_code}" -o /dev/null "$RESTHOST/$URLPATH" "${HEADERS[@]}" -d @'./activation_post.txt' -X POST )

Script to get the HTTP status code of a list of urls?

I have a list of URLS that I need to check, to see if they still work or not. I would like to write a bash script that does that for me.
I only need the returned HTTP status code, i.e. 200, 404, 500 and so forth. Nothing more.
EDIT Note that there is an issue if the page says "404 not found" but returns a 200 OK message. It's a misconfigured web server, but you may have to consider this case.
For more on this, see Check if a URL goes to a page containing the text "404"
Curl has a specific option, --write-out, for this:
$ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
200
-o /dev/null throws away the usual output
--silent throws away the progress meter
--head makes a HEAD HTTP request, instead of GET
--write-out '%{http_code}\n' prints the required status code
To wrap this up in a complete Bash script:
#!/bin/bash
while read -r LINE; do
curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
done < url-list.txt
(Eagle-eyed readers will notice that this uses one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
wget --spider -S "http://url/to/be/checked" 2>&1 | grep "HTTP/" | awk '{print $2}'
prints only the status code for you
Extending the answer already provided by Phil: adding parallelism is a no-brainer in bash if you use xargs for the call.
Here is the code:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' < url.lst
-n1: use just one value (from the list) as argument to the curl call
-P10: Keep 10 curl processes alive at any time (i.e. 10 parallel connections)
Check the --write-out option in the curl manual for more data you can extract with it (timings, etc.).
In case it helps someone this is the call I'm currently using:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee results.csv
It just outputs a bunch of data into a csv file that can be imported into any office tool.
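If you want column headers in that CSV, you could write them first; a small variation of the same call (same fields, same url.lst, ';' separator):
echo "url;status;time_total;time_namelookup;time_connect;size_download;speed_download" > results.csv
xargs -n1 -P 10 curl -o /dev/null --silent --head \
    --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' \
    < url.lst >> results.csv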
This relies on widely available wget, present almost everywhere, even on Alpine Linux.
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
The explanations are as follows:
--quiet
Turn off Wget's output.
Source - wget man pages
--spider
[ ... ] it will not download the pages, just check that they are there. [ ... ]
Source - wget man pages
--server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
Source - wget man pages
What they don't say about --server-response is that those headers are printed to standard error (stderr), hence the need for the 2>&1 redirection to standard output.
With the output on standard output, we can pipe it to awk to extract the HTTP status code. That code is:
the second ($2) non-blank group of characters: {$2}
on the very first line of the header: NR==1
And because we want to print it... {print $2}.
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
Use curl to fetch the HTTP-header only (not the whole file) and parse it:
$ curl -I --stderr /dev/null http://www.google.co.uk/index.html | head -1 | cut -d' ' -f2
200
wget -S -i *file* will get you the headers from each url in a file.
Filter through grep for the status code specifically, as in the sketch below.
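A quick sketch of that filtering (--spider added so nothing is downloaded; assumes a urls.txt with one URL per line):
wget -S --spider -i urls.txt 2>&1 | grep "HTTP/"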
I found a tool "webchk" written in Python. It returns a status code for a list of URLs.
https://pypi.org/project/webchk/
Output looks like this:
▶ webchk -i ./dxieu.txt | grep '200'
http://salesforce-case-status.dxi.eu/login ... 200 OK (0.108)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.389)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.401)
Hope that helps!
Keeping in mind that curl is not always available (particularly in containers), there are issues with this solution:
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
which will return an exit status of 0 even if the URL doesn't exist.
Alternatively, here is a reasonable container health check using wget:
wget -S --spider -q -t 1 "${url}" 2>&1 | grep "200 OK" > /dev/null
While it may not give you the exact status code, it will at least give you a valid exit-code-based health response (even with redirects on the endpoint).
Due to https://mywiki.wooledge.org/BashPitfalls#Non-atomic_writes_with_xargs_-P (output from parallel jobs in xargs risks being mixed), I would use GNU Parallel instead of xargs to parallelize:
cat url.lst |
parallel -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile
In this particular case it may be safe to use xargs because the output is so short; the real problem is that if someone later changes the code to do something bigger, it will no longer be safe. Or if someone reads this question and thinks they can replace curl with something else, that may also not be safe.
