I need to verify an HTTP HMAC signature for a program I use (Drone CI; I'm trying to create an extension), but nothing I try produces matching results.
Specifically, the HTTP request looks like this:
POST / HTTP/1.1
Host: 127.0.0.1
User-Agent: Go-http-client/1.1
Content-Length: 50
Accept: application/vnd.drone.validate.v1+json
Accept-Encoding: identity
Content-Type: application/json
Date: Wed, 09 Jun 2021 01:25:07 GMT
Digest: SHA-256=digest
Signature: keyId="hmac-key",algorithm="hmac-sha256",signature="signature",headers="accept accept-encoding content-type date digest"
{"data":"data"}
I've tried messing around with sha256sum, hmac256, and openssl, swapping in the values of Digest and signature, but none of them produce a match.
Update:
I've tried the following code, but it still doesn't seem to be working:
MESSAGE='{"data":"data"}'
SECRET="secret" # This isn't any value in the request, is it?
echo -n "${MESSAGE}" | openssl dgst -sha256 -hmac "${SECRET}" | base64 -w 0
Update 2:
The HMAC examples on Wikipedia are working just fine for me. Is there something I might be messing up from the HTTP request?
If it's worth anything, the request signing is apparently based on draft-cavage-http-signatures-10.
Update 3:
I've attempted to create the signature with the following format, but still no dice:
{"data":"data"}
accept: application/vnd.drone.validate.v1+json
accept-encoding: identity
content-type: application/json
date: Wed, 09 Jun 2021 01:25:07 GMT
digest: SHA-256=digest
Assuming the above text is stored in the variable ${hmac_data}, the following was used to try (unsuccessfully) to reproduce the value of signature:
echo -n "${hmac_data}" | openssl dgst -sha256 -hmac "${key}" | awk '{print $2}' | base64 -w 0
Update 4:
After a ton of messing around, Kiskae's answer got me to a solution.
In addition to what he said, I found that Drone base64-encodes the raw binary digest rather than its ASCII (hex) representation.
So, the new version of the above command (when used with Drone CI) is as follows.
${MESSAGE} is equal to the following:
accept: application/vnd.drone.validate.v1+json
accept-encoding: identity
content-type: application/json
date: Wed, 09 Jun 2021 01:25:07 GMT
digest: SHA-256=digest
And the command:
echo -n "${MESSAGE}" | openssl dgst -sha256 -hmac "${SECRET}" -binary | base64 -w 0
They appear to be using an implementation of the HTTP signatures draft.
The linked document explains how the signature needs to be calculated and how to verify it, but this is probably why your example doesn't work:
2.1.3. headers
OPTIONAL. The headers parameter is used to specify the list of
HTTP headers included when generating the signature for the message.
Basically, the signature doesn't cover just the message, probably to prevent replay attacks. Since you hash only the message, a mismatch is exactly what's expected.
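In other words, the headers parameter in the Signature header tells you exactly which "name: value" lines, and in which order, make up the string to sign; only the values come from the request. A rough, illustrative way to pull that list out of the Signature header shown in the question (the parsing is a sketch, not robust):
sig='keyId="hmac-key",algorithm="hmac-sha256",signature="signature",headers="accept accept-encoding content-type date digest"'
# Extract the quoted list after headers=; each listed name becomes one
# lowercased "name: value" line of the signing string, in that order.
echo "$sig" | sed -n 's/.*headers="\([^"]*\)".*/\1/p'
# -> accept accept-encoding content-type date digest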
Related
I've been trying to update the AMP cached pages on my website for a couple of days now to no avail.
While the documentation for updating the cache exists, it was probably written by a Google engineer, and as a result, isn't the easiest read.
https://developers.google.com/amp/cache/update-cache
I've followed the directions to the best of my ability.
I've created a private key and a public key, created a signature.bin, and verified it using the procedure in Google's own documentation:
~$ openssl dgst -sha256 -signature signature.bin -verify public-key.pem url.txt
Verified OK
The public-key.pem has been renamed to apikey.pub and uploaded to the following location:
https://irecover.ca/.well-known/amphtml/apikey.pub
To validate that there has been no issue in the copying, I checked the signature using the following:
$ openssl dgst -sha256 -signature signature.bin -verify <(curl https://irecover.ca/.well-known/amphtml/apikey.pub) url.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   450  100   450    0     0   2653      0 --:--:-- --:--:-- --:--:--  2662
Verified OK
Now I convert the signature file to base64, replace / with _ and + with -, and strip the = padding and newlines:
cat signature.bin | base64 > signature.b64
sed 's/\//_/g' signature.b64 > signature.b64a
sed 's/+/-/g' signature.b64a > signature.b64b
sed 's/=//g' signature.b64b > signature.b64c
cat signature.b64c | tr -d '\n' > signature.b64
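For what it's worth, the same conversion can be done in a single pipeline without the intermediate files (assuming GNU base64 for the -w 0 flag):
# base64-encode without line wrapping, make it URL-safe, and drop the = padding
base64 -w 0 signature.bin | tr '+/' '-_' | tr -d '=' > signature.b64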
I have made a script that builds the update-cache URL for me. It creates a timestamp at that moment and uses it for the amp_ts variable (so amp_ts is never off by more than a second). I then append that to the end of the query that is about to be cURL'd by the script, so it looks like this:
https://irecover-ca.cdn.ampproject.org/update-cache/c/s/irecover.ca/article?amp_action=flush&_ts=1581446499&_url_signature=KDaKbX0AbVbllwkTpDMFPOsFCRNw2sbk6Vd552bbG3u5QrecEmQ1SoMzmMR7iSXinO7LfM2bRCgJ1aD4y2cCayzrQuICrGz6b_PH7gKpo6tqETz06WmVeiP89xh_pBOu-pyN5rRHf0Pbu8oRkD2lRqgnGrLXDfIrFTTMRmHlO0bsa8GknyXL8RNXxk9ZQaufXAz-UJpoKaZBvT6hJWREAzxoZ-rGnDPVaC3nlBCu3yPorFcTbbr0CBz2svbfGgAYLQl54lLQmUpxI8661AEe1rdOLqAyLIUb4ZiSbO65-PmIkdZWVPFHMdbpSv4GMNdvodleCWBfMAcG2C09v-LR6g
However, this always results in the same error code from Google:
Invalid public key due to ingestion error: 404 or 410 error from origin
Does anyone have any idea what I'm doing wrong?
A couple of things to check for apikey.pub accessibility:
The /.well-known/amphtml/apikey.pub file is accessible to both mobile and desktop user agents (e.g. no redirect for non-mobile user agents, as the AMP cache client may be redirected); see the sketch below.
The public key is not excluded by robots.txt, e.g.:
User-agent: *
Allow: /.well-known/amphtml/apikey.pub
The public key response has the expected headers (e.g. content-type: text/plain):
curl -I https://amp.example.com/.well-known/amphtml/apikey.pub
HTTP/2 200
date: Sun, 26 Jul 2020 23:48:55 GMT
content-type: text/plain
vary: Accept-Encoding
etag: W/"1c3-173478a8840"
last-modified: Sun, 26 Jul 2020 23:48:55 GMT
With those things in place, I get an "OK" success response from the update-cache endpoint.
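If it helps, here's a quick way to check the first and third points from the command line; the user-agent strings are just examples and amp.example.com stands in for your own domain:
# Expect a 200 (no redirect) with content-type: text/plain for both a desktop
# and a mobile client.
for ua in 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)' \
          'Mozilla/5.0 (Linux; Android 10; Pixel 3)'; do
  curl -sI -A "$ua" https://amp.example.com/.well-known/amphtml/apikey.pub
done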
I think this is an Apache bug. I found these:
https://grokbase.com/t/apache/bugs/108ht7t086/do-not-reply-bug-49767-new-no-last-chunk-0-after-post-request-on-cgi-script
https://bz.apache.org/bugzilla/show_bug.cgi?id=49766
https://bz.apache.org/bugzilla/show_bug.cgi?id=49767
I sent an HTTP POST request to this CGI bash script:
#!/bin/bash
echo -e -n "Content-type: text/plain\n\n"
echo hello world!
What could be more innocent? I was shocked to see that Apache answers with a chunked response! And that's not all: The response is even faulty! The last chunk of zero length is missing. There is only one chunk.
The returned headers are these:
HTTP/1.1 200 OK
Date: Sun, 13 Oct 2019 12:19:23 GMT
Server: Apache
Upgrade: h2,h2c
Connection: Upgrade, Keep-Alive
Keep-Alive: timeout=5, max=100
Transfer-Encoding: chunked
Content-Type: text/plain
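Incidentally, you can see the raw chunk framing (and the missing zero-length terminator) for yourself with curl's --raw option, which stops curl from decoding the transfer encoding; the path here is just a placeholder for wherever the script is deployed:
# --raw passes the chunked body through untouched, so each chunk's size line is
# visible; a correct response would end with a zero-length chunk.
curl --raw -i -d 'x=1' http://localhost/cgi-bin/hello.sh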
Has this bug still not been corrected? According to the bug reports above, the "solution" (workaround) is to read all of the posted input before outputting anything. When I did this:
#!/bin/bash
echo -e -n "Content-type: text/plain\n\n"
input=$(cat)
echo hello world!
the final chunk of zero length is present ;). Is this somehow intended behaviour, or is it considered a bug? Why don't they fix it? Do I have to read ALL of the input to avoid the error? Does anyone here have more info about this issue?
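As a side note, here is a variant of that workaround which only drains the body when one was actually sent, instead of capturing it into an unused variable (a sketch; CONTENT_LENGTH is the standard CGI variable holding the request body size):
#!/bin/bash
# Discard exactly the posted body, if any, before producing output.
if [ "${CONTENT_LENGTH:-0}" -gt 0 ]; then
  head -c "${CONTENT_LENGTH}" > /dev/null
fi
echo -e -n "Content-type: text/plain\n\n"
echo hello world!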
I've got the following link, which downloads a CSV file when opened in a web browser.
http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre=
However, when using Wget under Cygwin with the command below, Wget retrieves a file that is not a CSV but a file without an extension, and the file is empty, that is, it has no data at all.
wget 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
Since I hate being stuck, I tried the following as well: I put the URL in a text file and used Wget with the file option.
Inside fic.txt:
'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
I used Wget in the following way:
wget -i fic.txt
I got the following errors:
Scheme missing
No URLs found in toto.txt
I think I can suggest some other options that will make your underlying problem clearer: the response is supposed to be HTML, but there is no content (Content-Length: 0).
More concretely, this
wget -S -O export_classement.html 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
produces this
Resolving pro.allocine.fr... 62.39.143.50
Connecting to pro.allocine.fr|62.39.143.50|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 28 Mar 2014 09:54:44 GMT
Content-Type: text/html; Charset=iso-8859-1
Connection: close
X-ServerName: WEBNX2
akamainocache: no-store
Content-Length: 0
Cache-control: private
X-KompressorName: kompressor7
Length: 0 [text/html]
2014-03-28 05:54:52 (0.00 B/s) - ‘export_classement.html’ saved [0/0]
Additionally, the server is tailoring its output based on how the browser identifies itself. Wget does have an option to include an arbitrary user agent in the headers. Here's an example of what happens when you make wget identify itself as Chrome. Here's a list of other possibilities.
wget -S --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36" 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
Now the output changes to export.csv, with type "application/octet-stream" instead of "text/html":
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 28 Mar 2014 10:34:09 GMT
Content-Type: application/octet-stream; Charset=iso-8859-1
Transfer-Encoding: chunked
Connection: close
X-ServerName: WEBNX2
Edge-Control: no-store
Last-Modified: Fri, 28 Mar 2014 10:34:17 GMT
Content-Disposition: attachment; filename=export.csv
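If you also want wget to save the file under the export.csv name suggested by that Content-Disposition header, its --content-disposition option can be added alongside the custom user agent (same URL and UA string as above):
wget -S --content-disposition --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36" 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='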
I've just been tinkering with Sinatra, trying to get a bit of a RESTful web service going.
The error I'm getting at the moment is very specific though.
Take this example post method:
post '/postMan/:someParam' do
  # Edited here. This code can be anything. 411 is still the response
  puts params[:someParam]
end
Seems simple enough. Take a param, make an object out of it, then store it in whatever way the object's save method defines.
Here's what I use to post the data using curl:
$ curl -I -X POST http://127.0.0.1/postman/123456
The only problem is, I'm getting 411 back and have no idea why.
To the best of my knowledge, 411 is Length Required. Here is the trace:
HTTP/1.1 411 Length Required
Content-Type: text/html; charset=ISO-8859-1
Server: WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09)
Date: Fri, 02 Mar 2012 22:27:09 GMT
Content-Length: 303
Connection: close
I 'cannot' change the curl request in any way. So does anyone have a way to make Sinatra ignore the missing Content-Length, or some fix that doesn't involve changing the curl request?
For the record, it doesn't matter whether I use the parameters in the post method or not. I could have some crazy code inside it; it will still throw the same error.
As others said above, WEBrick wrongly requires POST requests to have a Content-Length header. I just pass an empty body, because it's less typing than passing in the header:
curl -X POST -d '' http://webrickwhyyounotakeemptyposts.com/
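If you want to confirm what that actually sends, add -v to print the outgoing request; with an empty -d body, curl includes a Content-Length: 0 header itself, which is what keeps WEBrick happy:
# Look for "Content-Length: 0" among the "> " request lines in the verbose output.
curl -v -X POST -d '' http://webrickwhyyounotakeemptyposts.com/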
Are you sure you're on port 80 for your app?
When I run:
ruby -r sinatra -e "post('/postMan/:someParam'){puts params[:someParam]}"
and curl it:
curl -I -X POST http://127.0.0.1:4567/postMan/123456
HTTP/1.1 200 OK
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: thin 1.3.1 codename Triple Espresso
it's OK. I had to change the URL to postMan though; your example threw a 404 because you had postman.
The output was also as expected:
== Sinatra/1.3.2 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.3.1 codename Triple Espresso)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
123456
Ah. Try it without -I. It's probably sending a HEAD request and as such, not sending what you expect. Use -v if you want to show the headers.
curl -v -X POST http://127.0.0.1/postman/123456
curl -v -X POST -d "key=val" http://127.0.0.1/postman/123456
WEBrick erroneously requires POST requests to include the Content-Length header.
curl -H 'Content-Length: 0' -X POST http://example.com
Per the standard, however, POST requests don't require a body and therefore don't require the Content-Length header.
I wrote a bash script that gets output from a website using curl and does a bunch of string manipulation on the HTML output. The problem arises when I run it against a site that returns its output gzipped. Going to the site in a browser works fine.
When I run curl by hand, I get gzipped output:
$ curl "http://example.com"
Here are the headers from that particular site:
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=utf-8
X-Powered-By: PHP/5.2.17
Last-Modified: Sat, 03 Dec 2011 00:07:57 GMT
ETag: "6c38e1154f32dbd9ba211db8ad189b27"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: must-revalidate
Content-Encoding: gzip
Content-Length: 7796
Date: Sat, 03 Dec 2011 00:46:22 GMT
X-Varnish: 1509870407 1509810501
Age: 504
Via: 1.1 varnish
Connection: keep-alive
X-Cache-Svr: p2137050.pubip.peer1.net
X-Cache: HIT
X-Cache-Hits: 425
I know the returned data is gzipped, because this returns html, as expected:
$ curl "http://example.com" | gunzip
I don't want to pipe the output through gunzip, because the script works as-is on other sites, and piping through gunzip would break that functionality.
What I've tried:
changing the user-agent (I tried the same string my browser sends, "Mozilla/4.0", etc.)
man curl
google search
searching stackoverflow
Everything came up empty
Any ideas?
curl will automatically decompress the response if you set the --compressed flag:
curl --compressed "http://example.com"
--compressed
(HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
gzip is most likely supported, but you can check this by running curl -V and looking for libz somewhere in the "Features" line:
$ curl -V
...
Protocols: ...
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
Note that it's really the website in question that is at fault here. If curl did not pass an Accept-Encoding: gzip request header, the server should not have sent a compressed response.
In the relevant bug report, "Raw compressed output when not using --compressed but server returns gzip data" (#2836), the developer says:
The server shouldn't send content-encoding: gzip without the client having signaled that it is acceptable.
Besides, when you don't use --compressed with curl, you tell the command line tool you rather store the exact stream (compressed or not). I don't see a curl bug here...
So if the server could be sending gzipped content, use --compressed to let curl decompress it automatically.
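You can see that negotiation in the request itself: with -v, the --compressed run shows curl adding an Accept-Encoding request header (something like "Accept-Encoding: deflate, gzip", depending on what your build supports), which is what legitimately invites a compressed response:
# Compare the "> Accept-Encoding: ..." request line with and without --compressed.
curl -v -s -o /dev/null "http://example.com"
curl -v -s -o /dev/null --compressed "http://example.com"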