Check if URL returns 200 using bash

I need to check whether a remote file exists, based on the URL response, by doing:
curl -u myself:XXXXXX -Is https://mylink/path/to/file | head -1
That can give something like this (note the trailing newline):
'HTTP/1.1 200 OK
'
or
'HTTP/1.1 404 Not Found
'
Now I want to extract the HTTP status code (e.g. 200) from the resulting string above and assign the number to a variable. How can I do that?

Use the -o option to send the headers to /dev/null, and use the -w option to output only the status.
$ curl -o /dev/null -u myself:XXXXXX -Isw '%{http_code}\n' https://mylink/path/to/file
200
$
If you intend to capture the status in a variable, you can omit the newline from the format:
$ status=$(curl ... -o /dev/null -Isw '%{http_code}' ...)
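For example, a minimal sketch that captures the status and branches on it (the credentials and URL are the question's placeholders):
status=$(curl -o /dev/null -u myself:XXXXXX -Isw '%{http_code}' https://mylink/path/to/file)
if [ "$status" -eq 200 ]; then
    echo "remote file exists"
else
    echo "got status $status"
fi
Note that on connection failures curl reports 000, which the numeric comparison still handles (the else branch is taken).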

Use grep:
curl -u myself:XXXXXX -Is https://mylink/path/to/file | head -1 | grep -o '[0-9][0-9][0-9]'
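To put that into a variable, the same pipeline works inside a command substitution; a sketch with the question's placeholder URL:
status=$(curl -u myself:XXXXXX -Is https://mylink/path/to/file | head -1 | grep -o '[0-9][0-9][0-9]')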

Nice and simple:
curl --output /dev/null --silent --head --fail http://google.com
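Because --fail makes curl exit non-zero on HTTP errors (4xx/5xx), the exit status can drive a conditional directly; a sketch, with $url standing in for your link:
if curl --output /dev/null --silent --head --fail "$url"; then
    echo "URL exists: $url"
else
    echo "URL does not exist: $url"
fi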


How to print out content-security-policy

I have tried this command:
curl -Ik https://dev.mydomain.com/
and it prints everything. What I want now is to print out only content-security-policy.
Do I need to use jq or is there any other helpful tool that I can use?
curl -sIk https://stackoverflow.com/ | grep content-security-policy | cut -d ' ' -f 2-
This curls the URL, greps the line with content-security-policy, cuts on a space, and takes all the fields from 2 onwards.
Example:
➜ ~ curl -sIk https://stackoverflow.com/ | grep content-secur | cut -d ' ' -f 2-
upgrade-insecure-requests; frame-ancestors 'self' https://stackexchange.com
If you use curl >= 7.84.0, you can use the %header{name} syntax:
curl -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}"
If you want to try it without installing a new version, you can run the Docker image:
docker run --rm curlimages/curl:7.85.0 -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}"
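To capture the header into a variable, a sketch (again assuming curl >= 7.84.0):
csp=$(curl -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}")
echo "$csp"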

Bash curl call second command only when status is not 200

I'm looking for a way to combine two curl requests in bash, calling the second curl only when the first doesn't return status 200.
I tried:
curl -s "https://example.com/first" || curl -s "https://example.com/second"
but it doesn't behave as I want, because the first curl counts as successful even when it returns, for example, status 404.
How can I call the second only when the first doesn't return status 200?
Thanks for any help.
curl -s -o /dev/null -w "%{http_code}" https://example.com | grep -q "^200$" || curl -s https://example.com/2.html
Edit: added an improvement by @tripleee so the output is not polluted by grep.
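If you prefer something more explicit than the pipeline, an equivalent sketch that stores the status first (the URLs are the question's placeholders):
status=$(curl -s -o /dev/null -w "%{http_code}" "https://example.com/first")
if [ "$status" != "200" ]; then
    curl -s "https://example.com/second"
fi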

POST using Bash in Script

I am currently writing a script which gets a cookie and unique URL from a website, then uses that cookie and URL to post login/password information.
I am having trouble getting this to work; some of my runs give me the message "switching from POST to GET" and I can't understand why.
Can someone please help me?
for i in $(cat accounts.txt); do
    rm -f out.out
    curl -L -b cookies.txt -c cookies.txt https://website.com | grep -Eo "(http|https)://cookies.website.com/idp[a-zA-Z0-9./?=_-]*" >> out.out
    url=$(head -1 out.out)
    content="$(curl -X POST -L -v -b cookies.txt -d "$i" "$url")"
    echo "$url" "$i" "$content"
done
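As for the "switching from POST to GET" message: that is usually -L at work. When curl follows a 301, 302, or 303 redirect, it converts a POST into a GET by default; the --post301, --post302, and --post303 options tell it to keep the method. A sketch of the login request with those flags (-d already makes the request a POST, so -X POST is dropped; everything else is as in the script above):
content="$(curl -L --post301 --post302 -v -b cookies.txt -d "$i" "$url")"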

Bash script and curl: how to get status code and content and store in var in same command

In PHP or Python, for example, you can do a curl request and receive an object with all the properties organized.
In bash I have to invoke the curl program.
I need to get the status code and the content in the same command (obviously, since the status code is related to whether getting the content succeeded). But how can I do this?
I am trying this:
# get the url passed to the function
urlcurl=$1
# create a new file descriptor and redirect it to STDOUT
exec 3>&1
# get the curl status code and store it in curlstatuscode
curlstatuscode=$(curl -L -k -w "%{http_code}" -o >(cat >&3) --silent "${urlcurl}")
My problem is the content: when I execute this in a terminal I receive the content on STDOUT (which is what the command is supposed to do). But when I try to store that STDOUT using the usual redirection expressions, they do not work.
One example:
exec 3>&1
HTTP_STATUS=$(curl -k --silent -L -w "%{http_code}" -o >(cat >&3) 'http://example.com')
echo $HTTP_STATUS
If you did not understand what I am trying to do, roughly (and invalidly) it would be something like this:
(I know this is invalid, I only want to clarify)
HTTP_CONTENT=$(echo `HTTP_STATUS=$(curl -k --silent -L -w "%{http_code}" -o >(cat >&3) 'http://example.com'`)
HTTP_CONTENT would get the content and HTTP_STATUS would get the curl status code.
Please do not tell me to use another language. I need to solve this in bash. It is very simple to do in other (mainly object-oriented) languages, but I really want to do this in bash.
Thank you!
Since the HTTP status is the last line of the curl output, you can do it like this in bash:
out=$(curl -k --silent -L -w "\n%{http_code}" 'minecraft.net/haspaid.jsp?user=apterixbr')
http_status="${out##*$'\n'}"   # everything after the last newline: the status code
http_content="${out%$'\n'*}"   # everything before the last newline: the body
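If you need this in several places, a sketch wrapping the same technique in a function (the name fetch_with_status is hypothetical; it sets the two variables as globals):
fetch_with_status() {
    local out
    out=$(curl -k --silent -L -w "\n%{http_code}" "$1")
    http_status="${out##*$'\n'}"
    http_content="${out%$'\n'*}"
}

fetch_with_status 'http://example.com'
echo "$http_status"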

Script to get the HTTP status code of a list of urls?

I have a list of URLS that I need to check, to see if they still work or not. I would like to write a bash script that does that for me.
I only need the returned HTTP status code, i.e. 200, 404, 500 and so forth. Nothing more.
EDIT: Note that there is an issue if the page says "404 not found" but returns a 200 OK response. It's a misconfigured web server, but you may have to consider this case.
For more on this, see Check if a URL goes to a page containing the text "404"
Curl has a specific option, --write-out, for this:
$ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
200
-o /dev/null throws away the usual output
--silent throws away the progress meter
--head makes a HEAD HTTP request, instead of GET
--write-out '%{http_code}\n' prints the required status code
To wrap this up in a complete Bash script:
#!/bin/bash
while read -r LINE; do
    curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
done < url-list.txt
(Eagle-eyed readers will notice that this uses one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
wget --spider -S "http://url/to/be/checked" 2>&1 | grep "HTTP/" | awk '{print $2}'
This prints only the status code for you.
Extending the answer already provided by Phil: adding parallelism is a no-brainer in bash if you use xargs for the call.
Here is the code:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' < url.lst
-n1: use just one value (from the list) as argument to the curl call
-P10: Keep 10 curl processes alive at any time (i.e. 10 parallel connections)
Check the --write-out option in the curl manual for more data you can extract with it (timings, etc.).
In case it helps someone this is the call I'm currently using:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee results.csv
It just outputs a bunch of data into a CSV file that can be imported into any office tool.
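If your office tool wants a header row, a sketch that writes one first (the column names are just guesses matching the format string above):
echo "url;http_code;time_total;time_namelookup;time_connect;size_download;speed_download" > results.csv
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst >> results.csv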
This relies on widely available wget, present almost everywhere, even on Alpine Linux.
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
The explanations are as follows:
--quiet
Turn off Wget's output.
Source - wget man pages
--spider
[ ... ] it will not download the pages, just check that they are there. [ ... ]
Source - wget man pages
--server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
Source - wget man pages
What they don't say about --server-response is that those headers are printed to standard error (stderr), hence the need for the 2>&1 redirection to standard output.
With the headers on standard output, we can pipe them to awk to extract the HTTP status code. That code is:
the second ($2) non-blank group of characters: $2
on the very first line of the headers: NR==1
And because we want to print it... {print $2}.
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
Use curl to fetch only the HTTP headers (not the whole file) and parse them:
$ curl -I --stderr /dev/null http://www.google.co.uk/index.html | head -1 | cut -d' ' -f2
200
wget -S -i *file* will get you the headers from each URL in a file.
Filter through grep for the status code specifically.
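For example, a sketch assuming the list lives in url-list.txt (adding --spider so nothing is downloaded):
wget -S --spider -i url-list.txt 2>&1 | grep "HTTP/"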
I found a tool, "webchk", written in Python that returns the status code for a list of URLs:
https://pypi.org/project/webchk/
Output looks like this:
▶ webchk -i ./dxieu.txt | grep '200'
http://salesforce-case-status.dxi.eu/login ... 200 OK (0.108)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.389)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.401)
Hope that helps!
Keeping in mind that curl is not always available (particularly in containers), there are issues with this solution:
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
which will return an exit status of 0 even if the URL doesn't exist.
Alternatively, here is a reasonable container health-check using wget:
wget -S --spider -q -t 1 "${url}" 2>&1 | grep "200 OK" > /dev/null
While it may not give you the exact status, it will at least give you a valid exit-code-based health response (even with redirects on the endpoint).
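For instance, a sketch using that exit code in a shell conditional (${url} is a placeholder, as above):
if wget -S --spider -q -t 1 "${url}" 2>&1 | grep "200 OK" > /dev/null; then
    echo "healthy"
else
    echo "unhealthy"
fi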
Due to https://mywiki.wooledge.org/BashPitfalls#Non-atomic_writes_with_xargs_-P (output from parallel jobs in xargs risks being mixed), I would use GNU Parallel instead of xargs to parallelize:
cat url.lst |
parallel -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile
In this particular case it may be safe to use xargs because the output is so short; the problem with xargs is rather that if someone later changes the code to do something bigger, it will no longer be safe. Or if someone reads this question and thinks they can replace curl with something else, that may also not be safe.
