An HTTP response can contain the Strict-Transport-Security header. I was sure it had to be written in Train-Case, as it is on dropbox.com:
$ curl --silent --head https://dropbox.com | grep -i strict
Strict-Transport-Security: max-age=15552000; includeSubDomains
But on one site I saw it written in kebab-case (this site is not publicly accessible, which is why I don't give a link to it):
$ curl --silent --head https://... | grep -i strict
strict-transport-security: max-age=31536000; includeSubDomains
Is it correct to use all lowercase letters in the Strict-Transport-Security header?
The HTTP specification (RFC 7230, section 3.2) says header field names are case-insensitive, so you can send them in lowercase if you like. In fact, HTTP/2 (RFC 7540, section 8.1.2) requires field names to be lowercase on the wire, which is likely why you see all-lowercase headers from some sites.
However, over HTTP/1.1 it is traditional to send them using the casing from the specification documents, if only to make life easier for people troubleshooting the traffic.
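As a quick check, you can compare the casing across protocol versions with curl; a hedged example, assuming your curl build supports HTTP/2 (dropbox.com is just the host from the question):
$ curl --silent --head --http1.1 https://dropbox.com | grep -i strict
$ curl --silent --head --http2 https://dropbox.com | grep -i strict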
I have a script for testing the status of a site, to then run a command if it is offline. However, I've since realised that because the site is proxied through Cloudflare, it always shows a 200 status, even if the site is offline. So I need to come up with another approach. I tried testing the site using curl and HEAD. Both get the wrong response (from Cloudflare).
What I have found is that the HTTPie command gets the response I need, although only when I use the -h option (I have no idea why that makes a difference, since visually the output looks identical to when I don't use -h).
Assuming this is an okay way to go about reaching my aim ... I'd like to know how I can test if a certain string appears more than 0 times.
The string is location: https:/// (with three forward slashes).
The command I use to get the header info from the actual site (and not simply from what Cloudflare is dishing up) is http -h https://example.com/.
I am able to test for the string using http -h https://example.com | grep -c 'location: https:///'. This will output 1 when the string exists.
What I now want to do is run a command if the output is 1. But this is where I need help. My bash skills are minimal, and I am going about it the wrong way. What I came up with (which doesn't work) is:
#!/bin/bash
STR=$(http -h https://example.com/)
if (( $(grep -c 'location: https:///' $STR) != 1 )); then
    echo "Site is UP"
    exit
else
    echo "Site is DOWN"
    sudo wo clean --all && sudo wo stack reload --all
fi
Please explain to me why it's not working, and how to do this correctly.
Thank you.
ADDITIONS:
What the script is testing for is an odd situation in which the site suddenly starts redirecting to, literally, https:///. This obviously causes the site to be down. Safari, for instance, takes this as a redirection to localhost. Chrome simply spits the dummy with a redirect error, ERR_INVALID_REDIRECT.
When this is occurring, the headers from the site are:
HTTP/2 301
server: nginx
date: Thu, 12 May 2022 10:19:58 GMT
content-type: text/html; charset=UTF-8
content-length: 0
location: https:///
x-redirect-by: WordPress
x-powered-by: WordOps
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer, strict-origin-when-cross-origin
x-download-options: noopen
x-srcache-fetch-status: HIT
x-srcache-store-status: BYPASS
I chose to test for the string location: https:/// since that's the most specific (and unique) indicator of this issue. I could also test for HTTP/2 301.
The intention of the script is to remedy the problem when it occurs, as a temporary solution whilst I figure out what's causing WordPress to generate such an odd redirect. Also in case it happens whilst I am not at work, or sleeping. :-) I will have a cron job running the script every 5 minutes, so at least the site is never down for longer than that.
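For reference, a minimal sketch of such a crontab entry, assuming the script is saved at /usr/local/bin/check-site.sh (the path is a placeholder):
*/5 * * * * /usr/local/bin/check-site.sh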
grep reads files (or standard input), not a string argument, so grep -c 'pattern' $STR tries to open the contents of $STR as file names. Also, you need to quote strings, especially if they might contain whitespace or shell metacharacters.
More tangentially, grep -q is the usual way to check whether a string occurs at least once. Perhaps see also Why is testing “$?” to see if a command succeeded or not, an anti-pattern?
I can see no reason to save the string in a variable which you only examine once; though if you want to (for debugging reasons, etc.) probably avoid upper-case variable names. See also Correct Bash and shell script variable capitalization
The parts which should happen unconditionally should be outside the condition, rather than repeated in both branches.
Nothing here is Bash-specific, so I changed the shebang to use sh instead, which is more portable and sometimes faster. Perhaps see also Difference between sh and bash
#!/bin/sh
if http -h https://example.com/ | grep -q 'location: https:///'
then
    echo "Site is DOWN"
else
    echo "Site is UP"
fi
sudo wo clean --all && sudo wo stack reload --all
For basic diagnostics, probably try http://shellcheck.net/ before asking for human assistance.
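For instance, assuming the script is saved as check-site.sh and the shellcheck command-line tool is installed (the file name is a placeholder):
shellcheck check-site.sh
It would flag, among other things, the unquoted $STR in the original script.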
I'm adding a custom header in an ASP.NET app:
context.Response.Headers.Add("X-Date", DateTime.Now.ToString());
context.Response.Redirect(redirectUrl, false);
When I'm using Fiddler I can see the "X-Date" header in the response.
I need to receive it by using bash.
I tried curl -i https://my.site.com and also wget -O - -o /dev/null --save-headers https://my.site.com with no success.
In both cases I see just the regular headers like: Content-Type, Server, Date, etc...
How can I receive the "X-Date" header?
Thanks,
Lev
Protocol headers are different from file headers (just as HTTP headers and TCP headers are different). When you create a protocol header, you will need a server to resolve it and expose the associated environment variables. Example:
#!/bin/bash
# Apache - CGI
# A CGI response must start with at least a Content-Type header,
# followed by a blank line, before the body.
echo "Content-Type: text/plain"
echo ""
echo "$CONTENT_TYPE"
echo "$HTTP_ACCEPT"
echo "$SERVER_PROTOCOL"
When calling this script via the web, the response in my browser was:
text/html
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
HTTP/1.1
What you are looking for are environment variables called $HTTP_ACCEPT, $CONTENT_TYPE, and maybe $SERVER_PROTOCOL too.
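Along the same lines, a hedged sketch (the X-Date name is taken from the question): a custom request header arrives in a CGI script as an environment variable whose name is upper-cased, has hyphens replaced by underscores, and is prefixed with HTTP_:
#!/bin/bash
# Apache - CGI
echo "Content-Type: text/plain"
echo ""
echo "$HTTP_X_DATE"   # populated when the client sends an X-Date request header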
My understanding was that curl -i and curl -I would return virtually the same results, except that curl -i would return the body along with the headers and curl -I would return only the headers -- the headers being the same in both cases. We've been doing some gzipped and un-gzipped testing with Varnish and stumbled upon the oddity that curl -i shows X-Cache: HIT but curl -I returns X-Cache: MISS! How this is possible I am unsure, and that is precisely my question in this post.
Here are some more details that may or may not make a difference:
The URL is usually SSL-enforced (https), but both HTTP and HTTPS have been tested and receive the same results
The results are consistent
The "Is Varnish Running" site says "Yes! Sort of"
curl sends different HTTP requests to the server (or to Varnish, in this case) when you use the -I option. Normally, curl sends a GET request, but when you specify -I, it sends HEAD instead (essentially telling the server to send just the headers, not the actual content). I'm not particularly familiar with Varnish, but it appears to normally cache both GET and HEAD requests -- though in your case it might be configured to do something different, or the backend server may be triggering a difference... In any case, I'm pretty sure it's GET vs. HEAD that's making the cache respond differently with -i vs. -I.
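A quick, hedged way to see this for yourself (example.com is a placeholder): -v prints the outgoing request line, so you can watch the method change between the two invocations:
curl -s -v -o /dev/null http://example.com 2>&1 | grep '^> '
curl -s -v -I -o /dev/null http://example.com 2>&1 | grep '^> '
The first prints > GET / HTTP/1.1 among the request headers; the second prints > HEAD / HTTP/1.1.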
Did you check in different orders?
See http://anothersysadmin.wordpress.com/2008/04/22/x-cache-and-x-cache-lookup-headers-explained/ for some details on X-Cache.
For example, suppose I want to issue a POST request to a server, but the website requires a username and password to log in first. How should I do these two operations?
If it requires a username and password submitted through the web page, you'd need to send what it expects for a user logging in, capture the cookies you get back, and then send those cookies with your POST. This can get involved if the login process spans multiple redirected pages. curl can do this, but be prepared to spend some time on it.
To get the cookie being returned by the server, use curl -i to include headers. You can also add -L to automatically follow redirects (which you otherwise would have to do manually by retrieving the URI in the Location: field of an HTTP 301 or 302 response). Example:
curl -i -L stackoverflow.com > /tmp/so.html
grep -i 'Set-Cookie:' /tmp/so.html
Yields:
Set-Cookie: prov=31c24327-c0bf-474d-b504-fc97dc69ab61; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
(Until you get the login logic right and know how you need to submit the requests, you'll need to inspect the rest of the headers to be able to accommodate redirects, see if there are multiple cookies, etc.)
To submit a cookie, use curl -b:
curl -b "prov=31c24327-c0bf-474d-b504-fc97dc69ab61" [rest of curl command]
Be patient and good luck, and be sure to check the curl man page.
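Putting the two steps together, here is a hedged sketch; the URLs and form field names are placeholders for whatever the login form actually expects. -c writes the cookies the server sets into a jar file, and -b sends them back on the next request:
curl -c cookies.txt -d 'username=me&password=secret' https://example.com/login
curl -b cookies.txt -d 'name1=value1&name2=value2' https://example.com/submit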
curl -u username:password -X POST --data "name1=value1&name2=value2" http://yourwebpage.com/
(Note that -u sends HTTP Basic authentication credentials; this only helps if the site uses Basic auth rather than a form-and-cookie login.)
I've just been tinkering with Sinatra and trying to get a bit of a RESTful web service going.
The error I'm getting at the moment is very specific though.
Take this example post method
post '/postMan/:someParam' do
  # Edited here. This code can be anything; 411 is still the response.
  puts params[:someParam]
end
Seems simple enough. Take a param, make an object out of it, then store it in whatever way the object's save method defines.
Here's what I use to post the data using curl:
$ curl -I -X POST http://127.0.0.1/postman/123456
The only problem is, I'm getting a 411 back and have no idea why.
To the best of my knowledge, 411 is Length Required. Here is the trace:
HTTP/1.1 411 Length Required
Content-Type: text/html; charset=ISO-8859-1
Server: WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09)
Date: Fri, 02 Mar 2012 22:27:09 GMT
Content-Length: 303
Connection: close
I 'cannot' change the curl request in any way. So might anyone have a way to make Sinatra ignore the missing content length? Or some fix which doesn't involve changing the curl request?
For the record, it doesn't matter whether I use the parameters in the post method or not. I could have some crazy code inside it; it will still throw the same error.
As others said above, WEBrick wrongly requires POST requests to have a Content-Length header. I just pass an empty body, because it's less typing than passing in the header:
curl -X POST -d '' http://webrickwhyyounotakeemptyposts.com/
Are you sure you're on port 80 for your app?
When I run:
ruby -r sinatra -e "post('/postMan/:someParam'){puts params[:someParam]}"
and curl it:
curl -I -X POST http://127.0.0.1:4567/postMan/123456
HTTP/1.1 200 OK
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: thin 1.3.1 codename Triple Espresso
it's OK. I had to change the URL to postMan though; your example threw a 404 because you had postman.
The output was also as expected:
== Sinatra/1.3.2 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.3.1 codename Triple Espresso)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
123456
Ah. Try it without -I. It's probably sending a HEAD request and, as such, not sending what you expect. Use -v if you want to show the headers.
curl -v -X POST http://127.0.0.1/postman/123456
curl -v -X POST -d "key=val" http://127.0.0.1/postman/123456
WEBrick erroneously requires POST requests to include the Content-Length header. You can satisfy it by sending an explicit zero length:
curl -H 'Content-Length: 0' -X POST http://example.com
Standardly, however, POST requests don't require a body and therefore don't require the Content-Length header.
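For completeness, the empty-body workaround shown earlier achieves the same thing, since curl -d '' makes curl send Content-Length: 0 automatically (127.0.0.1:4567 being Sinatra's default development address, as in the answer above):
curl -d '' http://127.0.0.1:4567/postMan/123456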